Introduction to cost optimization in Kubernetes

Jul 1, 2021


The potential for cost savings is often a critical factor in the decision to move to open source and the cloud. In the wake of 2020, this has become especially evident, as cloud-native projects based on Kubernetes and running in the public cloud have continued on a growth trajectory.

In this series, we'll examine cost optimization in Kubernetes from a few perspectives, with this introductory post offering a high-level view of cost drivers in the cloud. In subsequent posts, we'll share how you can take control of these cost drivers. Finally, in the last installment, we'll provide an overview of some of the tools you can harness to control and decrease your costs. 


According to a survey run by Andreessen Horowitz in December 2020, about half of enterprise Kubernetes usage is on the public cloud. And these numbers are trending up.

Figure 1 - Source: How tech stacks up in B2B, by Stacy D’Amico and Brad Kern 

The challenge

You’ve adopted Kubernetes, which is open source. ✔️

You are running workloads on the public cloud, where you pay only for what you use. ✔️

You have completed your journey towards cost-saving. Right? Wrong! ❌

The premise of this post is that cost optimization is a journey, not a destination.

What are the cost drivers that you should be considering? The short answer: there are many, ranging from personnel to deployment systems. For information on other dimensions of Kubernetes costs, you can check out our 2020 webinar on The Cost of Kubernetes.

On this occasion, we'll be discussing things that you can do with Kubernetes on the cloud to optimize your resource usage, which translates into cost savings. We will then give you some tips on identifying cost drivers, tracking them, and using this information to charge costs back to teams. Along the way, we'll highlight mechanisms you can use to optimize and reduce costs in general.

Cost drivers in the cloud

In order to address cost drivers in the cloud, we'll break them down into three categories:

  1. Compute — includes CPU and memory
  2. Traffic — includes ingress and egress
  3. Storage — accounts for type, size, and use
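To give a first taste of the Kubernetes mechanisms covered later in the series, here is a minimal sketch of how compute consumption is declared. The pod and image names are illustrative, not taken from any real deployment:

```yaml
# Illustrative pod spec: resource requests drive scheduling (and therefore
# how many machines your cluster needs), while limits cap what a container
# may actually consume.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app        # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.21
      resources:
        requests:
          cpu: "250m"      # a quarter of a CPU core reserved at scheduling
          memory: "128Mi"
        limits:
          cpu: "500m"
          memory: "256Mi"
```

Requests that are set far above actual usage are one of the most common compute cost drivers, since the scheduler reserves that capacity whether or not it is used.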

Let’s use an illustration to help us map out the challenges that we are facing.

In the illustration above, we can see how Kubernetes concepts map to the infrastructure. The main wrapper you will have is the virtual private cloud (VPC). It defines the boundaries of the infrastructure created on the cloud. In it, there are multiple availability zones (AZs), which are an additional wrapper for your resources. They contribute to the resilience of the setup. Within each AZ there are machines that run your containers. The setup also includes load balancers and different cloud provider services. 
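To make the load-balancer piece of that picture concrete, here is a hedged sketch of how a cloud load balancer typically comes into existence from inside Kubernetes. On most managed offerings, a Service of type LoadBalancer provisions a billable load balancer in your cloud account; the names below are illustrative:

```yaml
# Illustrative Service: type LoadBalancer usually provisions a cloud
# load balancer, which is billed separately from the cluster's machines.
apiVersion: v1
kind: Service
metadata:
  name: demo-app-svc    # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: demo-app       # matches the pods this Service fronts
  ports:
    - port: 80
      targetPort: 8080
```

Because each such Service can create its own load balancer, consolidating traffic behind a shared Ingress is a common way to keep this cost driver in check.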

All of these map directly to costs. They also offer the opportunity to take advantage of mechanisms in Kubernetes to control these costs. The relationship between the cost drivers, the cloud infrastructure, and the Kubernetes mechanisms is summarized in the table below.

| Cost driver | On cloud | Kubernetes mechanisms |
|-------------|----------|-----------------------|
| Compute | Machines / node groups | Resource requests and limits |
| Traffic | Load balancers | Services / Ingress |
| Storage | Block storage / file storage / object storage | Persistent volumes (claims) |
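The storage row of the table can be illustrated with a minimal PersistentVolumeClaim. Note that the storage class name below is an assumption; available classes and their pricing vary by provider:

```yaml
# Illustrative PVC: the claim's requested size and storage class determine
# which cloud volume gets provisioned and how it is billed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data       # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # provider-specific; assumed default class
  resources:
    requests:
      storage: 10Gi
```

Volume type (e.g., SSD vs. HDD-backed classes) and size are both billed whether or not the space is used, which is why the table calls out type, size, and use as the storage cost dimensions.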

In this post, we highlighted that moving workloads to Kubernetes and the cloud is not, by itself, the complete solution to cutting IT costs. We briefly introduced the challenges around cost savings in the cloud and the actual cost drivers. In our next post, we'll break down cost drivers from the compute point of view and offer some actionable suggestions for optimizing your compute costs.
