Testing Kubernetes automation tools with Gnomock

A recent Gnomock release, v0.10.0, brought a new integration with it: a k3s preset that allows integration testing of Kubernetes automation tools written in Go.

This new preset creates a docker container that runs a single-node k3s (lightweight Kubernetes) cluster. It uses the orlangure/k3s docker image, which has many k3s versions available as tags. I forked an existing project and made a couple of changes that allow Gnomock to fetch the kubeconfig easily over HTTP. Setup time is roughly 30 seconds once the image is pulled, which would be unacceptably slow for unit tests, but is fine for integration tests.

Below I’ll demonstrate how Gnomock and its lightweight Kubernetes preset can be used to write integration tests for a Kubernetes automation tool.

Picking the right project to work on

This preset was originally requested by a team member of the Grafana Tanka project, making that tool an obvious candidate for some integration testing. Unfortunately, its CLI framework was pretty new and lacked some critical features, such as cobra's SetArgs and ExecuteContext methods.

The second choice was Arkade, an OpenFaaS community tool. The project barely had any tests in its repository, so it made sense to add a few. The only obstacle to writing good integration tests was its choice to print all output directly to os.Stdout using the fmt package, instead of using cobra's OutOrStdout method, which defaults to os.Stdout anyway. With this limitation, there wasn't a good way to test the tool's output, but with Gnomock it was still possible to test its required side effects.

Defining testing requirements

For a tool that basically downloads and executes other utilities, the only requirement I could think of was to make sure that the Kubernetes cluster state matches our expectations after a particular command, or set of commands, is executed. To implement this requirement, we need to create a clean Kubernetes cluster, run a set of commands against it, and, using the Kubernetes API, confirm that the resources we expect to exist actually exist in that cluster.
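In Go terms, this plan maps onto a test skeleton like the following (an outline only; each step is implemented later in this post):

func TestInstall(t *testing.T) {
	// 1. start a clean, disposable Kubernetes cluster in docker
	// 2. write its kubeconfig to a file and point KUBECONFIG at it
	// 3. create a client-go client for the same cluster
	// 4. run arkade commands as subtests
	// 5. use the Kubernetes API to assert on the resulting cluster state
}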

As a specific example, I chose the following test scenario:

On a machine that doesn't have kubectl installed, run the get kubectl command followed by install openfaas, and make sure that the cluster ends up with 7 new deployments with specific names in the openfaas namespace (using arkade's default configuration).

Actual testing

I chose to put the entire test in a cmd_test.go file. The following dependencies need to be imported:

import (
	"context"
	"fmt"
	"io/ioutil"
	"os"
	"testing"

	"github.com/alexellis/arkade/cmd"
	"github.com/orlangure/gnomock"
	"github.com/orlangure/gnomock/preset/k3s"
	"github.com/stretchr/testify/require"
	v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

Temporary Kubernetes-in-Docker container setup

Creating a clean Kubernetes cluster for our tests was very easy:

func TestInstall(t *testing.T) {
	// start a single-node k3s cluster in a docker container
	c, err := gnomock.Start(
		k3s.Preset(k3s.WithVersion("v1.19.3")),
		gnomock.WithContainerName("gnomock-k3s"),
	)
	require.NoError(t, err)

	// make sure the container is removed when the test finishes
	defer func() {
		require.NoError(t, gnomock.Stop(c))
	}()
}

Using the existing orlangure/k3s tags, it would be easy to write tests against multiple supported Kubernetes versions, as sketched below.
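For example, the setup above could be wrapped in a loop over version tags. This is only a sketch; the versions listed here are illustrative, assuming each one exists as a tag of the orlangure/k3s image:

func TestInstallMultipleVersions(t *testing.T) {
	// illustrative list of k3s version tags to test against
	versions := []string{"v1.17.13", "v1.18.10", "v1.19.3"}

	for _, version := range versions {
		version := version // capture range variable for the closure

		t.Run(version, func(t *testing.T) {
			c, err := gnomock.Start(k3s.Preset(k3s.WithVersion(version)))
			require.NoError(t, err)

			defer func() {
				require.NoError(t, gnomock.Stop(c))
			}()

			// ... the rest of the test goes here
		})
	}
}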

Configuring kubectl to use our temporary container

func TestInstall(t *testing.T) {
    // ...

	cfgBytes, err := k3s.ConfigBytes(c)
	require.NoError(t, err)

	f, err := ioutil.TempFile("", "gnomock-kubeconfig-")
	require.NoError(t, err)

	defer func() {
		require.NoError(t, f.Close())
		require.NoError(t, os.Remove(f.Name()))
	}()

	_, err = f.Write(cfgBytes)
	require.NoError(t, err)

	require.NoError(t, os.Setenv("KUBECONFIG", f.Name()))
}

k3s.ConfigBytes(c) returns the contents of a kubeconfig file configured to use our temporary cluster. This file is saved in a temporary directory, and its name is stored in the KUBECONFIG environment variable, which is available only to our testing process and its children.
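Before exposing the file through KUBECONFIG, it is also possible to sanity-check that the returned bytes form a valid kubeconfig using client-go's clientcmd package (k8s.io/client-go/tools/clientcmd). This is an extra safety net of my own, not part of the actual test:

	// parse the kubeconfig bytes to confirm they form a valid
	// configuration before pointing child processes at the file
	restCfg, err := clientcmd.RESTConfigFromKubeConfig(cfgBytes)
	require.NoError(t, err)
	require.NotEmpty(t, restCfg.Host) // API server address of the k3s container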

Setting up Kubernetes client

In order to verify our requirements, a Kubernetes client in Go needs to be configured to access the same temporary cluster:

func TestInstall(t *testing.T) {
    // ...

	cfg, err := k3s.Config(c)
	require.NoError(t, err)

	client, err := kubernetes.NewForConfig(cfg)
	require.NoError(t, err)
}

k3s.Config(c) returns a Kubernetes *rest.Config object that can be used directly to create a new client.
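As a quick sanity check (my own addition, not required by the test), this client can list the cluster's nodes; a single-node k3s cluster should report exactly one:

	ctx := context.Background()

	// a single-node k3s cluster should have exactly one node
	nodes, err := client.CoreV1().Nodes().List(ctx, v1.ListOptions{})
	require.NoError(t, err)
	require.Len(t, nodes.Items, 1)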

Testing the download feature

To make sure that the kubectl command is available, the following test is executed:

t.Run("install cli tools", func(t *testing.T) {
    command := cmd.MakeGet()
    command.SetArgs([]string{"kubectl"})
    require.NoError(t, command.Execute())

    home, err := os.UserHomeDir()
    require.NoError(t, err)
    path := os.Getenv("PATH")
    os.Setenv("PATH", fmt.Sprintf("%s:%s/.arkade/bin", path, home))
})

It continues to set up the dependencies before running the actual test, but here it also executes an actual command, get kubectl, and makes sure it completes successfully.
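To fail fast when the download didn't actually work, one extra assertion could confirm that the binary is now discoverable through the updated PATH. This is a hedge of my own, using the standard os/exec package, rather than part of arkade's test suite:

	// confirm that kubectl can now be found through the updated PATH
	kubectlPath, err := exec.LookPath("kubectl")
	require.NoError(t, err)
	require.NotEmpty(t, kubectlPath)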

Testing the install openfaas command

At this point, we already have a running, ready-to-use Kubernetes cluster inside a docker container, and all the tools and configuration required to access it. It's time to add the actual test:

t.Run("openfaas", func(t *testing.T) {
    command := cmd.MakeInstall()
    command.SetArgs([]string{"openfaas"})
    require.NoError(t, command.Execute())

    deploys, err := client.AppsV1().Deployments("openfaas").List(ctx, v1.ListOptions{})
    require.NoError(t, err)
    require.Len(t, deploys.Items, 7)

    actualDeploys := make([]string, 0, 7)
    for _, deploy := range deploys.Items {
        actualDeploys = append(actualDeploys, deploy.Name)
    }
    expectedDeploys := []string{
        "alertmanager", "nats", "queue-worker", "basic-auth-plugin",
        "prometheus", "gateway", "faas-idler",
    }
    require.ElementsMatch(t, expectedDeploys, actualDeploys)
})

Here, we execute the install openfaas command, get the list of deployments from our cluster, and compare it against the list of deployments that should have been created.
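One caveat: this only proves that the deployment objects exist, not that their pods ever become ready. If that ever turns out to be flaky, a polling assertion could wait for every deployment in the namespace to report ready replicas. The following is a sketch using testify's require.Eventually and the time package, reusing ctx and client from the same subtest; it is not part of the actual test:

	// wait up to a minute for every openfaas deployment to report
	// at least one ready replica, polling once per second
	require.Eventually(t, func() bool {
		deploys, err := client.AppsV1().Deployments("openfaas").List(ctx, v1.ListOptions{})
		if err != nil {
			return false
		}

		for _, deploy := range deploys.Items {
			if deploy.Status.ReadyReplicas == 0 {
				return false
			}
		}

		return true
	}, time.Minute, time.Second)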

Next steps

In this post I described a way to set up a temporary Kubernetes cluster inside a docker container and run integration tests for a single command against it. The next steps would be to expand the install openfaas command tests to make sure that various combinations of arguments produce the expected side effects, and to write tests for other supported applications.

This test can be found here.