Application Configuration Management with Tanka

Oct 12, 2020

Update: since this blog post was published, Tanka has gained support for Helm integration.

We're about halfway through this series on application configuration management in Kubernetes. The very fact that this is a multi-article series demonstrates the complexity of the problem that each of the highlighted tools is attempting to resolve. New tools and approaches come out of the woodwork on a regular basis, and in this article, we're going to take a look at a relative newcomer — Tanka.

The Tanka project has come from a company synonymous with the emergence and growth of cloud-native computing, Grafana Labs. And, just like the cloud-native observability tooling that Grafana provides, Tanka is also open-source software. So, if there are already a number of solutions to this problem of application configuration management, why did Grafana Labs feel the need to develop a new one? To answer that question, we need to dig into a bit of history.


{j,k}sonnet


An early attempt to get to grips with application configuration management involved an open-source collaboration between Heptio (subsequently acquired by VMware, Inc.), Microsoft, Bitnami (also acquired by VMware, Inc.), and Box. The project was called ksonnet, and it leaned heavily on the Jsonnet data templating language for creating, composing, manipulating, and managing Kubernetes YAML manifests.

As we discussed in the article about Kapitan, Jsonnet is a superset of the JSON data exchange format that adds a number of traditional programming features (e.g. conditionals, variables, functions). In turn, ksonnet built on Jsonnet's capabilities to provide a Kubernetes domain-specific experience. It implemented a number of different concepts that were pertinent to a workflow for application configuration management in Kubernetes. The project promised a lot and garnered a sizeable following in a relatively short space of time.

Unfortunately, after VMware's acquisition of Heptio, it was announced that the project would be archived, with 'lack of community resonance' cited as the reason. This left many adopters high and dry, including Grafana Labs, who relied on ksonnet to manage the configuration that underpinned their Kubernetes application stack. Tanka was born of this unforeseen event and attempts to provide as near a drop-in replacement for ksonnet as possible. In fact, the project has achieved more than that: it has removed some of the complexity that may have contributed to the 'lack of resonance', added some complementary features, and plans to develop the tool further over time. Key to its future success is Jsonnet, and the Kubernetes abstractions built on top of it.

So, with all that said, how exactly does Tanka go about tackling application configuration management in Kubernetes?



What is Tanka for?


Unlike Kapitan, with its general-purpose approach to configuration, Tanka's raison d'être is tied directly to configuration in Kubernetes. Its sole purpose is to generate YAML manifests that can be consumed by the Kubernetes API, but in a convenient and pragmatic way.

It seeks to promote composability and reuse through the importing of libraries of Jsonnet code that can subsequently be compiled to Kubernetes API resource definitions. In this way, projects can consume third-party Kubernetes configuration templates hosted in git repositories (most usually, GitHub). Whilst this doesn't amount to an attempt to provide a packaging and installation metaphor in the same vein as Helm, it implicitly assumes that application providers will package their applications as Jsonnet libraries and make them available for public consumption.

All this talk of Jsonnet might sound a little ethereal, so let's have a closer look at Tanka to see how it works.


Managing configuration


The first thing to say is that, like Helm, Kustomize, and Kapitan, Tanka works from a directory structure; this seems to be the modus operandi for Kubernetes configuration tooling. To work with Tanka, the project provides a CLI called 'tk', and the 'tk init' command will create a directory structure similar to the following:

.
├── environments                        # e.g. default, dev, qa, prod, sfo2
│  └── default
│     ├── main.jsonnet                  # entrypoint for Jsonnet compiler
│     └── spec.json                     # Cluster config for environment
├── jsonnetfile.json                    # Source of truth for 3rd-party libs
├── jsonnetfile.lock.json
├── lib                                 # Location of project-specific libraries
│  └── k.libsonnet
└── vendor                              # Location of third-party libs
   ├── github.com
   │  ├── grafana
   │  │  └── jsonnet-libs
   │  │     └── ksonnet-util
   │  │        ├── jaeger.libsonnet
   │  │        ├── jsonnetfile.json
   │  │        └── kausal.libsonnet
   │  └── ksonnet
   │     └── ksonnet-lib
   │        └── ksonnet.beta.4
   │           ├── k.libsonnet
   │           └── k8s.libsonnet
   ├── ksonnet-util -> github.com/grafana/jsonnet-libs/ksonnet-util
   └── ksonnet.beta.4 -> github.com/ksonnet/ksonnet-lib/ksonnet.beta.4

The directory that contains the file 'jsonnetfile.json' is considered to be the root of the project in question, and the file acts as the source of truth for the vendored libraries that are imported from third-party sources. Let's hold off discussing environments for a moment, and address libraries instead.
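To give a sense of what this file contains, a 'jsonnetfile.json' that declares a dependency on Grafana's 'ksonnet-util' library might look something like the following (a sketch; the exact schema depends on the version of jsonnet-bundler in use):

{
  "version": 1,
  "dependencies": [
    {
      "source": {
        "git": {
          "remote": "https://github.com/grafana/jsonnet-libs.git",
          "subdir": "ksonnet-util"
        }
      },
      "version": "master"
    }
  ],
  "legacyImports": true
}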


Libraries


Libraries have been mentioned a lot in this discussion, and they are clearly a pivotal component in the way Tanka generates configuration. Libraries consist of Jsonnet template code that can be imported into a project for inclusion in the 'compilation' of Kubernetes YAML manifests. If we wanted to create a Deployment for the cert-manager X509 certificate management operator, for example, we might import a library to allow us to do this.

<snip>
deployment: deployment.new(
    name='cert-manager', replicas=1,
    containers=[$.cert_manager_container],
    podLabels={ app: 'controller' },
),
<snip>

To generate the Deployment resource for the cert-manager controller, we might define it using the Jsonnet snippet shown above. To make this 'deployment' function available in the Jsonnet code we're defining, the library first needs to be installed from its remote location, and then imported into the source file:

(import 'cert-manager/deployment.libsonnet')

The Jsonnet code we end up writing to create the Deployment resource is very concise and frees us from defining copious amounts of YAML. The majority of the detail is hidden in the Jsonnet code that is defined in the imported library. Using this abstraction technique, we can create complex resources using a minimal amount of code.

The 'tk init' command automatically installs two essential libraries for working with resource configuration: 'k8s.libsonnet' and 'k.libsonnet'. The former is the original Jsonnet library from the defunct ksonnet project, which implements the complete Kubernetes API. The latter, also from the ksonnet project, builds on 'k8s.libsonnet' to provide a friendlier set of interfaces to work with. The libraries hosted in the archived GitHub repo for ksonnet are unmaintained at present, but Grafana Labs intends to pick up maintenance of the libraries in due course (work is already in progress). They also provide some convenience enhancements on top of the ksonnet libraries in 'kausal.libsonnet'. It's hoped these will be integrated into the ksonnet library in time.
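To give a flavour of what this looks like in practice, here's a minimal sketch of a 'main.jsonnet' that uses these libraries to define the cert-manager Deployment from the earlier snippet (the container image tag is illustrative):

local k = import 'ksonnet-util/kausal.libsonnet';

{
  local deployment = k.apps.v1.deployment,
  local container = k.core.v1.container,

  // a hidden field (::) holds the container definition, so it isn't
  // rendered as a top-level resource in its own right
  cert_manager_container:: container.new(
    'cert-manager', 'quay.io/jetstack/cert-manager-controller:v0.16.1'
  ),

  cert_manager_deployment: deployment.new(
    name='cert-manager', replicas=1,
    containers=[$.cert_manager_container],
    podLabels={ app: 'controller' },
  ),
}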

At present, management of libraries is 'outsourced' to an external tool called jsonnet-bundler, but there is a stated intent to package the same functionality into 'tk'.
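For example, adding a library to a project is a single command, which records the dependency in 'jsonnetfile.json' and downloads the source into the 'vendor' directory:

jb install github.com/grafana/jsonnet-libs/ksonnet-util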


Environments


One of the big challenges faced in managing application configuration in Kubernetes is the need to create YAML resource definitions that differ only marginally between similar environments. A QA cluster or namespace will be almost identical to its production equivalent, but will vary in some essential aspects (e.g. ingress rules). This is the whole purpose behind the drive for a configuration templating solution, and Tanka handles it elegantly. Making use of Jsonnet's inherent patching features, resource definitions can be amended rather than replaced wholesale.
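For example, a production environment might import the same base configuration as every other environment and then patch just the fields that differ, using Jsonnet's '+:' merge operator (a sketch; the 'cert-manager/main.libsonnet' library path is hypothetical):

// environments/prod/main.jsonnet
(import 'cert-manager/main.libsonnet') + {
  cert_manager_deployment+: {
    spec+: {
      replicas: 3,  // production runs more replicas than QA
    },
  },
}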

The 'tk' CLI allows for creating and manipulating environments, each of which can make use of the project's libraries, whilst also defining configuration specific to that environment. The 'main.jsonnet' file is the entry point for each environment's configuration, whilst the 'spec.json' file provides Tanka with the details of the target environment.
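The 'spec.json' file is a small JSON document identifying the cluster and namespace that the environment targets, along the lines of the following (the API server address is, of course, environment-specific):

{
  "apiVersion": "tanka.dev/v1alpha1",
  "kind": "Environment",
  "metadata": {
    "name": "environments/default"
  },
  "spec": {
    "apiServer": "https://127.0.0.1:6443",
    "namespace": "default"
  }
}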


Workflow


In order to transform the source into a set of YAML manifests, Tanka first processes all of the Jsonnet components to produce a single large JSON object. It starts with the 'main.jsonnet' file for the environment in question, imports any defined libraries, and then executes the Jsonnet code to create the JSON output. The Kubernetes resources will be nested within this object, which Tanka then traverses in order to find and extract those resources. The extracted resources are converted from their JSON representation to YAML.
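Notably, Tanka doesn't mind where in the object the resources are defined; anything carrying both an 'apiVersion' and a 'kind' is treated as a Kubernetes resource, however deeply it's nested. A trivial sketch:

{
  some: {
    arbitrary: {
      nesting: {
        apiVersion: 'v1',
        kind: 'Namespace',
        metadata: { name: 'cert-manager' },
      },
    },
  },
}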

The following 'tk' sub-commands facilitate an application configuration management workflow with Tanka:


Sub-command   Action
-----------   ---------------------------------------------------------------------------
tk eval       Converts an environment's Jsonnet to a JSON object, and dumps to STDOUT
tk show       Converts an environment's Jsonnet to YAML resources, and dumps to STDOUT
tk diff       Computes the diff between the cluster state and the processed Jsonnet as YAML
tk apply      Applies the generated YAML resources to the cluster


Crucially, the 'diff' sub-command allows for checking what will change in the cluster if the generated YAML is subsequently applied to it. This diff is a server-side diff (where possible), which has the benefit of showing the differences after they have been processed by the cluster's admission controllers. The 'diff' and 'apply' sub-commands invoke 'kubectl' to perform their tasks, so it needs to reside in the user's path.
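Putting this together, a typical iteration loop against the 'default' environment created by 'tk init' might look like this:

tk show environments/default    # render the YAML locally for inspection
tk diff environments/default    # compare against the live cluster state
tk apply environments/default   # apply the generated resources via kubectl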


Conclusion


Tanka and Jsonnet, then, are a great mix. But, what about Tanka's chances of succeeding where ksonnet failed?

Tanka was only announced to the Kubernetes community in January 2020, so it's very new when compared with the other solutions we've been discussing in this series. Consequently, it's still developing its community and project contributors, which will determine its speed of development, its direction, and ultimately its adoption. As the self-appointed successor to the defunct ksonnet project, Tanka has a certain cachet about it, and is already used to manage production-level configuration by a well-respected CNCF member, Grafana Labs.

Yes, at present it has some missing pieces that may put potential adopters off. It has no means of integration with Helm, for example, although this is planned for the future. It might be argued that it doesn't need integration with Helm, because it relies instead on third-party libraries in the Jsonnet style. But I would argue that, despite their shortcomings, Helm charts are still the de facto means of packaging cloud-native applications at present. In the short term at least, that means Tanka needs to work with other community projects to provide the best available choice for managing application configuration.

If you're an early adopter of Tanka, get in touch and let us know about your experiences with its use.
