Integration testing of Go programs that use Kafka
The Kafka message broker is a popular choice for Go programs that require high performance and great scalability. In this post I’m going to demonstrate an easy way to build integration tests for such applications. These tests won’t need any mocks: they use a real Kafka instance under the hood, giving the most confidence that everything works correctly.
I created a small project for demonstration purposes: a working program written in Go that exposes a web API to submit new events and get up-to-date aggregated data on all submitted events.
Program overview
I encourage you to clone the project and look around before moving on. The layout is simple:
- `producer` and `consumer`: packages that implement the actual features to produce and consume messages using the Kafka message broker;
- `reporter`: package that implements naive event aggregation (it simply counts events per “account”);
- `handler`: package that exposes a web API to submit new messages and get up-to-date usage statistics.
The `/event` endpoint accepts JSON events (with `type` and `account_id` fields). The program counts events per account, per event type, and keeps the results in memory. The `/stats` endpoint returns these results to the user. Pretty straightforward!
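For example, a submitted event could look like this (the field names come from the description above; the values are made up):

```json
{"type": "click", "account_id": "account-1"}
```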
Testing objectives
Instead of (or in addition to) writing unit tests for each package, we will write a single integration test that executes a number of HTTP requests and confirms that the program works as expected from the end user’s perspective.
We would like to confirm that, given a combination of `Producer`, `Consumer` and `Reporter` instances, the web API of our program returns correct event counts for the events we ourselves submit.
The following code snippets are taken directly from the linked repository, from `main_test.go`.
Setting up a Kafka container for tests
Our test will use `gnomock` for creating and setting up a temporary Kafka Docker container:
```
$ go get github.com/orlangure/gnomock
```
```go
import (
	"github.com/orlangure/gnomock"
	"github.com/orlangure/gnomock/preset/kafka"
)
```
We will also use the `testify` library:

```
$ go get github.com/stretchr/testify
```
Creating a Kafka container with a clean state directly from Go code is very easy:
```go
container, err := gnomock.Start(
	kafka.Preset(kafka.WithTopics("events")),
	gnomock.WithDebugMode(), gnomock.WithLogWriter(os.Stdout),
	gnomock.WithContainerName("kafka"),
)
require.NoError(t, err)

defer func() {
	require.NoError(t, gnomock.Stop(container))
}()
```
`kafka.Preset()` creates a new Kafka `Preset` that `gnomock` will later set up. `kafka.WithTopics("events")` lets `gnomock` know that we will need a new topic in the container: `events`. This Kafka topic is later used in the tests.
`gnomock.WithDebugMode()` and `gnomock.WithLogWriter(os.Stdout)` are there for an easier test debugging experience, and can be removed once the tests run smoothly. Debug mode lets `gnomock` print debug information about every step it makes (download a docker image, create a container, etc.). A custom log writer forwards all Kafka container logs to `os.Stdout`, which can be useful to debug internal container failures.
Calling `gnomock.Stop(container)` in the end allows us to stop and remove the Kafka container, since it won’t be needed after the tests are complete. Sometimes you might want to skip this call, for example to manually debug a test failure.
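A minimal way to make the cleanup conditional could look like this (the `KEEP_KAFKA` environment variable is just a hypothetical convention, not a `gnomock` feature):

```go
defer func() {
	// Keep the container alive for manual inspection when requested.
	if os.Getenv("KEEP_KAFKA") != "" {
		return
	}

	require.NoError(t, gnomock.Stop(container))
}()
```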
Once this code completes, we have a container that is already configured, has the requested topics, and is ready to accept new connections and produce/consume messages.
Setting up our program for testing
Our test uses the actual, “production” code. There is no need to create mock instances of any types, inject custom test-only configuration, or do anything else that won’t happen when the program actually runs.
As in the case of regular usage, we need to set up `Producer`, `Consumer` and `Reporter` instances, and use them to create an `http.Handler`:
```go
p := producer.New(container.Address(kafka.BrokerPort), "events")
c := consumer.New(container.Address(kafka.BrokerPort), "events")
r := reporter.New(ctx, c)
mux := handler.Mux(p, r)
```
Note how we use the `events` topic which we previously asked `gnomock` to create (`kafka.WithTopics("events")`). Since the Kafka container we use exposes multiple ports, we need to specify which address to connect to. This is done with `container.Address(kafka.BrokerPort)`: we only need to know on which port the container exposes the broker.
Actual testing
At this point, we already have a Kafka container that we will use in our tests, and we have an `http.Handler` implementation that exposes the `/event` and `/stats` endpoints. The following code is no different from any other test, with or without `gnomock` and/or Kafka.
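For illustration, here is a rough sketch of what the test body could look like. The `/event` and `/stats` endpoints and the event fields come from the program described above, but the response shape and the expected status code are assumptions, not code copied from the repository:

```go
// Additional imports assumed: io, net/http, net/http/httptest, strings, time.
srv := httptest.NewServer(mux)
defer srv.Close()

// Submit a few events of the same type for the same account.
for i := 0; i < 3; i++ {
	res, err := http.Post(
		srv.URL+"/event", "application/json",
		strings.NewReader(`{"type":"click","account_id":"account-1"}`),
	)
	require.NoError(t, err)
	require.NoError(t, res.Body.Close())
	require.Equal(t, http.StatusOK, res.StatusCode) // assumed status code
}

// Events travel through Kafka asynchronously, so poll /stats until the
// aggregated counts show up, or fail after a minute.
require.Eventually(t, func() bool {
	res, err := http.Get(srv.URL + "/stats")
	if err != nil {
		return false
	}
	defer res.Body.Close()

	body, err := io.ReadAll(res.Body)

	// The exact response shape is an assumption; adjust to the real API.
	return err == nil && strings.Contains(string(body), `"click":3`)
}, time.Minute, time.Second)
```

Polling with `require.Eventually` instead of a fixed `time.Sleep` keeps the test fast when the consumer catches up quickly, and tolerant when it does not.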
You are welcome to explore the code and run it locally😼
Running tests with coverage report
By default, Go tests executed with the `-cover` flag report coverage only for the package that includes the test. Read more about this topic in my blog post.
In the end, our test execution command will look like this:
```
# run tests with coverage report
$ go test -race -cover -v -coverpkg ./... -coverprofile coverage.out

# explore the coverage in browser
$ go tool cover -html coverage.out
```
As can be seen from the report, only the error handling code is not covered by tests, which is sometimes enough to at least confirm the “happy flow”.
Running integration tests in GitHub Actions
As a bonus, below is the workflow code that runs our integration tests against a real Kafka container in GitHub Actions:
```yaml
name: Test
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  integration-test:
    name: Integration test
    runs-on: ubuntu-latest
    steps:
      - name: Set up Go 1.15
        uses: actions/setup-go@v1
        with:
          go-version: 1.15
      - name: Check out code into the Go module directory
        uses: actions/checkout@v1
      - name: Get dependencies
        run: go get -v -t -d ./...
      - name: Test
        run: go test -race -cover -coverpkg ./... -v
```
This workflow does not require any external configuration or scripts. Everything you need to run integration tests against an actual Kafka container is inside our Go tests.
Please note that integration tests are often slower than unit tests. The suggested approach uses Docker extensively: it downloads pretty large images, creates fairly heavy containers, and generally takes some time to complete. Keep this in mind when adding a new CI/CD job, as it can cost you more money than you expect.