How Giant Swarm Enables a New Workflow

Sep 18, 2018

By now we all know that AWS changed computing forever, and that it actually started as an internal service. The reason AWS exists is pretty easy to understand once you understand Jeff Bezos and Amazon. Sit tight.

Jeff and his team deeply believe in the two-pizza team rule: if you cannot feed a team with two pizzas, it is too big. This comes down to the math behind communication, namely that the number of communication links in a group of n members is:

n * (n - 1) / 2

In a team of 10, there are 45 possible communication paths. At 20, there are 190, and at 100 people there are 4,950. You get the idea. You need to allow a small team to be in full control, and that is really where DevOps comes from: you build it, you run it, and, if you want to make your corporate overlords truly tremble in fear, there is a third point: you decide it.
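As a quick sanity check, here is a minimal Python sketch that evaluates the formula for the team sizes mentioned above; the numbers follow directly from it:

```python
# Number of communication links in a team of n people: n * (n - 1) / 2
def communication_links(n: int) -> int:
    return n * (n - 1) // 2

for size in (10, 20, 100):
    print(f"{size} people -> {communication_links(size)} possible links")
# 10 people -> 45, 20 people -> 190, 100 people -> 4950
```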

The problem Amazon had, though, was that its teams were losing a lot of time because they had to care for the servers running their applications, and that part of the equation was just not integrated into their workflow yet. “Taking care of servers” was a totally separate thing from the rest of their work, where one (micro-)service simply talked to another service when needed. The answer, in the end, was simple: make infrastructure code and give those teams APIs to control compute resources, creating an abstraction layer over the servers. There should be no difference between talking to a service built by another team, calling the API of a message queue, charging a credit card, or starting a few servers.
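To make that concrete: starting servers becomes just another API call, no different from any other service request. A minimal sketch using boto3 against EC2, where the region, AMI ID, and instance type are placeholders rather than values from this post:

```python
import boto3

# Starting servers is just another API call, like charging a card or sending a message.
ec2 = boto3.client("ec2", region_name="eu-central-1")  # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",          # placeholder instance type
    MinCount=1,
    MaxCount=3,                       # "start a few servers"
)

for instance in response["Instances"]:
    print("Launched", instance["InstanceId"])
```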

This brings a lot of efficiency to both sides. Developers get a clean API, and the server operations people can do whatever needs to be done behind it, as long as they keep the API stable.

Everything becomes part of the workflow. And once you have it internally as a service, there is no reason not to make it public and thereby get better utilization out of your servers.


Kubernetes Appears on the Scene


Now think about how Kubernetes has started to gain traction within bigger companies. It typically starts with a team somewhere that installs Kubernetes however it sees fit, sometimes as a strategic DevOps decision. Of course, these teams would never think about buying their own servers and building their own datacenter, but since K8s is code, it is seen as being more on the developer side. This means you end up with a disparate set of K8s installations until the infrastructure team gets interested and wants to provide it centrally.

The corporation might think that by providing a centralized K8s it is doing what Amazon did, being API-driven, but that is not the Amazon way. The Amazon way, the right way, is to provide an API to start a K8s cluster and to abstract everything else, like security and storage provisioning, away as far as possible. For efficiency, you might want to provide a bigger production cluster at some point, but first and foremost, this is about development speed.



Giant Swarm - Your Kubernetes Provisioning API


This is where the Giant Swarm Platform comes in, soon with more managed services around it. Be it in the cloud or on-premises, we offer you an API that allows teams to start as many of their own K8s clusters, in a specific and clearly defined version, as they see fit, integrating the provisioning of K8s right into their workflows. The infrastructure team, or cluster operations team as we tend to call it, makes sure that all security requirements are met, provides tooling around the clusters such as CI/CD, possibly supplies versioned Helm chart templates, and so on. This is probably worth a totally separate post.
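To give a feel for what “provisioning as part of the workflow” looks like, here is a minimal Python sketch that requests a new cluster from a provisioning API. The endpoint path, payload fields, and token handling are assumptions for illustration, not the documented Giant Swarm API:

```python
import os
import requests

# Hypothetical provisioning API -- endpoint and fields are illustrative assumptions.
API = "https://api.example-installation.example.com"
TOKEN = os.environ["PROVISIONING_TOKEN"]

def create_cluster(owner: str, name: str, release: str) -> str:
    """Request a new Kubernetes cluster and return its ID."""
    resp = requests.post(
        f"{API}/v4/clusters/",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"owner": owner, "name": name, "release_version": release},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    cluster_id = create_cluster("team-checkout", "feature-test", "5.0.0")
    print("Cluster requested:", cluster_id)
```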

At the same time, Giant Swarm provides you with fully managed Kubernetes clusters, keeping them up to date at all times with the latest security fixes and in-place upgrades, so you are not left with a myriad of different versions run by different teams in different locations. Giant Swarm clusters of one version always look the same. “Cloud Native projects on demand at scale in consistently high quality”, as one of our partners put it.

Through Giant Swarm, customers can put their teams back in full control, going as far as letting them integrate Giant Swarm into their CI/CD pipelines to quickly launch and tear down test clusters on demand. They can give those teams the freedom they planned for by letting them launch their own K8s clusters themselves, without having to request them somewhere, while keeping full control over how these clusters are secured, versioned, and managed. That way they know applications can move easily through their entire K8s ecosystem, across different countries and locations.
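As an illustration of that CI/CD integration, here is a hedged Python sketch of a test-cluster lifecycle: create a throwaway cluster, run the test suite, and tear it down again. It reuses the hypothetical create_cluster helper, API, and TOKEN from the sketch above; the delete endpoint and the pytest invocation are likewise illustrative assumptions, not documented API calls:

```python
import subprocess
import requests

def run_pipeline_stage(owner: str) -> int:
    """Hypothetical CI stage: spin up a throwaway cluster, test, tear down."""
    cluster_id = create_cluster(owner, "ci-ephemeral", "5.0.0")  # helper from the sketch above
    try:
        # Placeholder test command -- a real pipeline would point the tests at the new cluster.
        result = subprocess.run(["pytest", "tests/"], check=False)
        return result.returncode
    finally:
        # Tear the test cluster down again, whatever the test outcome was.
        requests.delete(
            f"{API}/v4/clusters/{cluster_id}/",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        ).raise_for_status()
```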

Giant Swarm is the Amazon EC2 for Kubernetes in any location: API-driven Kubernetes, where teams stay in control and can truly live DevOps, with API-driven as a mindset and way of working. Request your free trial of the Giant Swarm Infrastructure here.
