Jul 6, 2016
This post is the third in a series of blog posts about basic Kubernetes concepts. In the first one I explained the concepts of Pods, Labels, and Replica Sets. In the second post we talked about Deployments. This post will elaborate on the Services concept. In the fourth we look at Secrets and ConfigMaps, and in the fifth and final post we talk about Daemon Sets and Jobs.
By now you should have a basic understanding of some of the fundamental primitives of Kubernetes. However, there are still some concepts missing. One of the most central ones, especially when working with microservices, provides an abstraction layer on top of pods (and other backends) so you can communicate with them without having to track every single pod as it starts, dies, and gets rescheduled. It is a basic building block for service discovery in Kubernetes.
As mentioned in the first blog post in this series, pods are ephemeral and bound to get killed and started by replica sets (or replication controllers) dynamically. Thus, communicating with a pod or groups thereof calls for a concept that abstracts away the ephemeral pod.
This is where services come in. They are a basic concept that is especially useful when working with microservice architectures, as they decouple individual services from each other. For example, a service A accessing a service B doesn’t need to know what kind of pods, or how many of them, are actually doing the work in B. The actual pods and even their implementation could change completely.
Services work by defining a logical set of pods and a policy by which to access them. The selection of pods is based on label selectors (which we talked about in the first blog post). In case you select multiple pods, the service automatically takes care of load balancing and assigns them a single (virtual) service IP (which you can also set manually).
You can use the selector to choose a group of pods and define a targetPort on which to access them. Further helping with abstraction, this targetPort can also refer to the name of a port, which gives you more freedom to implement the actual pods behind the service. Even the port of each pod could be different, as long as they carry the same name.
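As a sketch of what this looks like (the names `backend` and `http` are hypothetical), a service that selects pods by label and addresses them via a named port could be defined as:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend      # selects all pods labeled app=backend
  ports:
    - port: 80        # port on which the service is reachable
      targetPort: http  # refers to a port *named* http in the pod spec;
                        # the actual port number may differ from pod to pod
```

For the named targetPort to resolve, each selected pod would declare a containerPort with `name: http` in its own spec.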
Additionally, services can abstract away other kinds of backends that are not Kubernetes pods. You can for example abstract away an external database cluster behind a service. This way you can for example use a simple local database for your development environment and a professionally managed database cluster in production without having to change the way that your other services talk to that database service.
You can use the same feature if some of your workloads are running outside of your Kubernetes cluster, e.g. on another Kubernetes cluster (or in another namespace), or even completely outside of Kubernetes. The latter is especially interesting if you are just starting to migrate workloads.
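A sketch of such an external backend (names and the IP are hypothetical): you create the service without a selector and supply the endpoints yourself via an Endpoints object with the same name.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  # no selector: Kubernetes will not manage the endpoints itself
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: database        # must match the service name
subsets:
  - addresses:
      - ip: 10.0.0.42   # e.g. an external database outside the cluster
    ports:
      - port: 5432
```

Other services keep talking to `database` as usual; swapping the backend only means updating the Endpoints object.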
You can discover and talk to services in your cluster either via environment variables or via DNS. The latter is a cluster add-on, which comes out of the box in most Kubernetes installs (including Giant Swarm).
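To illustrate (pod and service names are hypothetical): a service named `backend` in the `default` namespace is exposed to pods as the environment variables `BACKEND_SERVICE_HOST` and `BACKEND_SERVICE_PORT`, and, with the DNS add-on, under the name `backend` (or fully qualified, `backend.default.svc.cluster.local`).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
    - name: client
      image: busybox
      # Resolves "backend" via cluster DNS to the service IP;
      # environment-variable discovery would work without the DNS add-on,
      # but only for services created before this pod.
      command: ["wget", "-qO-", "http://backend"]
```

Note that the environment variables are only injected for services that already existed when the pod started, which is one reason DNS is usually the more convenient option.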
In case you want to use another type of service discovery and don’t want the load balancing and single service IP provided by the service, there’s an option to create a so-called “headless” service. You can use this if you are already using another service discovery mechanism or want to reduce coupling to the Kubernetes system.
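A headless service is defined by explicitly setting the cluster IP to `None` (again, the names here are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-headless
spec:
  clusterIP: None     # headless: no virtual IP, no built-in load balancing
  selector:
    app: backend
  ports:
    - port: 80
```

Instead of a single service IP, a DNS lookup for this service returns the IPs of the selected pods directly, leaving load balancing and endpoint selection to you.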
Now that you know a bit more about the concept of services, you should read up on their usage in the official documentation. Then go ahead and build some deployments that talk to each other over services.
Tip: If you have a service and a deployment that go together, you should usually start the service first and then the deployment. You can even deploy them with a single command by including both manifests in a single YAML file; you only have to separate them by a line containing ---. Kubernetes will then create them one after the other.
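Such a combined file could look like the following sketch (all names and the image are placeholders; on current clusters the Deployment apiVersion is `apps/v1`, while clusters from the time of writing used `extensions/v1beta1`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1   # apps/v1 on newer clusters
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: hello   # matched by the service selector above
    spec:
      containers:
        - name: hello
          image: nginx           # placeholder image
          ports:
            - containerPort: 8080
```

Submitting the file with `kubectl create -f hello.yaml` creates the service first and then the deployment, in the order they appear in the file.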