Application Configuration Management with Kustomize

Jun 4, 2020

In this series of articles, we're taking an in-depth look at application configuration management in Kubernetes.

In the previous article, we explored the virtues of Helm, and this time around we'll turn our attention to Kustomize.

Before we lift the lid on Kustomize and see what it can do for us, let's take a moment to see where it came from. Kustomize was announced as an open-source project by Google in the middle of 2018 and was primarily inspired by the perceived lack of a credible declarative application management solution for Kubernetes. It shies away from the templating approach that Helm employs to render resources, and focuses on patching and overlaying existing configuration instead.


Kustomize — the rationale


Kustomize shares much of its inspiration with Helm: the quest to provide configuration for applications, along with the ability to customize that configuration to suit a particular purpose or environment. It seeks to provide this for both bespoke applications and those classed as commercial off-the-shelf (COTS) applications. 

Where it differs from Helm's approach is its insistence on the use of YAML for customization definitions rather than an esoteric templating language. The bet is that the DevOps folk working with Kubernetes will already be familiar with its API resources and the YAML syntax used to define them. In theory, at least, familiarity lends itself to pain-free adoption.

Kustomize can add a common field to numerous, different resources (for example, a label or annotation), modify the values of existing fields (for example, the number of replicas of a Deployment), and partially patch resources provided as a 'base' configuration. It can even generate ConfigMap and Secret resource configurations from their canonical source definitions, but more on this later.


How does Kustomize work?


Kustomize works by building customized resource definitions from a set of existing definitions, and new configuration defined in a kustomization.yaml file. 

Let's explain how this works.


Invoking Kustomize


To invoke the build of customized configuration, two approaches are available.

1. The standalone Kustomize binary can be used: its ‘create’ sub-command scaffolds a kustomization.yaml file, and its ‘build’ sub-command renders the customized configuration.

2. Or, more conveniently, as of Kubernetes v1.14, Kustomize can be invoked as an integral component of the Kubernetes native kubectl CLI. 

The kubectl CLI is used daily by administrators, as well as by CI/CD tooling, and so Kustomize functionality is readily available for everyone's use. However, be aware that, at the time of writing, the embedded version of Kustomize lags significantly behind the standalone version. Until this lag issue is resolved, we’d recommend using the standalone Kustomize binary over the embedded version.
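For reference, the equivalent standalone invocations look like this (the directory name is illustrative):

```shell
# Render the customized resources to STDOUT
kustomize build ./my-app

# Pipe the rendered output straight into the cluster
kustomize build ./my-app | kubectl apply -f -
```

Piping to ‘kubectl apply -f -’ gives the same end result as the embedded ‘kubectl apply -k’, but with the standalone binary doing the rendering.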

The embedding of Kustomize into kubectl was controversial, to say the least. It behaves like a plugin, but bypasses the kubectl plugin mechanism, which would have made its availability optional. This gives Kustomize a significantly lower barrier to adoption than, say, Helm or other competing configuration management solutions, which must be installed separately before use. Given that Kustomize and Kubernetes originate from Google, it's not much of a stretch for some to infer that Kustomize is a preferred approach favored by those most influential in the community. Whether this is true or not is now immaterial, as Kustomize functionality is an accepted part of the kubectl binary.

When working against an existing Kubernetes cluster, to see what Kustomize will generate by way of customized resources, the following can be used in place of the standalone kustomize command:

$ kubectl kustomize <directory>

The directory contains the kustomization.yaml file and other resource definitions, and the rendered content is sent to the STDOUT stream. If we needed to apply the customized application resource definitions to the cluster, the following command achieves this:

$ kubectl apply -k <directory>

That's the mechanism for generating customized resource definitions, now let's have a look at how the customizations are defined.


Defining resource customizations


The directory specified on the command line must contain a kustomization.yaml file. It's this file that tells Kustomize how to render the resources. It lists the resources that are the subject of customization, as well as any transformations and additions that constitute the customization.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- service.yaml
namespace: my-app-ns
commonLabels:
  app.kubernetes.io/name: my-app

In the example above, the Deployment and Service resources (defined in their namesake files) will be customized to reside in the ‘my-app-ns’ namespace and to carry the label ‘app.kubernetes.io/name: my-app’. 
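To make the effect concrete, here's roughly what the Service would look like in the rendered output, assuming a minimal service.yaml as the base (the name and port are illustrative):

```yaml
# Rendered output (excerpt)
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-app-ns               # injected by the 'namespace' field
  labels:
    app.kubernetes.io/name: my-app   # injected by 'commonLabels'
spec:
  selector:
    app.kubernetes.io/name: my-app   # commonLabels is also applied to selectors
  ports:
  - port: 80
```

Note that ‘commonLabels’ is applied not just to object metadata, but also to selectors, keeping Services and Deployments wired together.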

So, Kustomize allows us to customize base resource definitions, but how can we handle multiple customization scenarios without duplication? How can we subtly nuance customizations for development, staging and production environments, for example?

 

├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── dev
    │   ├── kustomization.yaml
    │   └── patch.yaml
    ├── prod
    │   ├── kustomization.yaml
    │   └── patch.yaml
    └── staging
        ├── kustomization.yaml
        └── patch.yaml

To achieve this, Kustomize works with a 'base' configuration, which can be customized further with definitions placed in additional kustomization.yaml files, as the directory layout above shows. The content of the kustomization.yaml file in the ‘overlays/staging’ directory is listed below:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  environment: staging
bases:
- ../../base/
patchesStrategicMerge:
- patch.yaml

It references the original customizations in the ‘base’ directory and then applies further customization according to its content. In this case, it adds another common label and applies a configuration patch defined in ‘patch.yaml’.
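As an illustration, the staging patch.yaml might bump the replica count of the base Deployment. A strategic merge patch only lists the fields to change, plus enough metadata to identify the target object (the Deployment name ‘my-app’ is assumed here, and must match the base):

```yaml
# patch.yaml — a strategic merge patch against the base Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
```

Running ‘kubectl kustomize overlays/staging’ would then emit the base resources with the ‘environment: staging’ label added and the replica count merged in.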

This technique of defining common configuration and overlays to support similar but different scenarios is very powerful. It's not dissimilar to what we can achieve with Helm templating. But, it's achieved with pure YAML and no need for defining complex parameterized templates. In addition, the original resource files remain intact, as defined by their author. This aids with working from dependent, upstream application configuration definitions.


Generating configuration


Kustomize's name is very apt given its purpose, and patching configuration using overlays is exactly what you'd expect it to do. You wouldn't expect Kustomize, perhaps, to also have the ability to generate Kubernetes API resources from scratch. Well, it can, but for a limited set of resources, and for a very sound reason.

Kustomize can generate ConfigMaps and Secrets, either from literal definitions or from a canonical source (environment or regular) file. Typically, ConfigMaps and Secrets are created imperatively (with ‘kubectl create’) for a workload, so by defining their generation in a kustomization.yaml file, Kustomize facilitates a more declarative approach.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: my-app-config
  files:
  - config.json
<snip>

Here, the ‘kustomization.yaml’ file defines a ConfigMap called ‘my-app-config’, based on the contents of the local file ‘config.json’. On issuing the ‘kubectl apply -k’ command, the ConfigMap gets created, and assuming its consumption is defined in a Pod template spec, the contents are subsequently made available to the workload. 

But what happens if the configuration data or Secret gets updated? How do we get the workload to recognize the change? This is a common problem in Kubernetes: whilst the new content can be made available through a volume mount, the application may not be cognizant of it unless it's restarted.

Kustomize accounts for this by creating ConfigMaps and Secrets with names that have suffixes appended. The suffix is a hash of the object's content, so the generated object's name changes each time the content changes. If the canonical data source is altered, a ‘kubectl apply -k’ will generate a new ConfigMap or Secret with a different name, and the Pod template spec will also get updated to reflect the new object name. And, thanks to the relevant controller's reconciliation loop, this results in a workload rollout, which creates new Pods with access to the revised content.
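In rendered output this looks something like the following (the hash suffix and the file contents shown are purely illustrative; the actual suffix is derived from the ConfigMap's content):

```yaml
# Rendered output (excerpt)
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config-g9df72cb5d   # content hash appended by Kustomize
data:
  config.json: |
    { "logLevel": "info" }
```

Any reference to ‘my-app-config’ in a Pod template spec within the same kustomization is rewritten to the suffixed name automatically.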

This resource generation feature extends Kustomize beyond the pure realm of customizing resource definitions and helps to solve a problem that is difficult to crack with other, similar solutions.



Kustomize and Helm together


Kustomize works great with owner-authored, bespoke configuration resources, but what happens if you need to consume a COTS application?

You don't own or maintain the base resource definitions. These applications are usually packaged as Helm charts rather than kustomization configurations, and contain templated definitions rather than pure YAML. 

Can Kustomize and Helm work together, or can Kustomize consume Helm charts in some way?

The short answer is yes. 

Helm can be forced to render resource definitions using the ‘helm template’ command and a suitable values.yaml file. The rendered content can be redirected to local files that Kustomize can then act on as base resource definitions. But, what happens if the COTS application evolves, and the changes are too important to be ignored?

Well, the Kustomize project encourages a fork/modify/rebase workflow, where upstream configuration is initially forked to a Git repository. The forked resources are rendered using ‘helm template’, modified according to the customizations defined, and applied using Kustomize. Changes made to the upstream resources can then be periodically synced through a ‘git rebase’.
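A sketch of that workflow, with a hypothetical chart, release name, and remote, might look like this (Helm 3 syntax assumed):

```shell
# Render the forked chart to plain YAML that Kustomize can use as a base
helm template my-release ./charts/cots-app \
    --values values.yaml > base/all.yaml

# Build the overlay on top of the rendered base, and apply it
kustomize build overlays/prod | kubectl apply -f -

# Later, pull in upstream chart changes and re-render
git rebase upstream/main
```

The base/all.yaml file would be referenced from a kustomization.yaml in the ‘base’ directory, just like any other resource file.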


Conclusion


Kustomize is a very credible alternative to Helm when it comes to application configuration management. It's also very popular judging by the number of stars, contributors, and activity it has on its GitHub repo.

But it has not become a new panacea for application configuration management in Kubernetes. Perhaps, it never promised to be.

Whilst it's a great solution for bespoke applications, there's still an implied dependency on the more popular Helm chart solution for COTS applications. It's not difficult to see the attraction of a tool that packages and distributes applications in the way Helm does — the success of Docker is a testament to this fact. Kustomize can certainly be a valuable ancillary to Helm, but it's unlikely to unseat Helm as the de facto tool for application configuration management. Ultimately, it may also lose some of its currency if Helm delivers on its promise to replace its templating with Lua scripting. Only time will tell.
