Apr 7, 2022
In this series of blog articles related to GitOps, we’ve focused on five new and exciting software tools, all of which are still under heavy, active development. But the tools are really only part of the story. GitOps necessarily changes how DevOps teams work, too, so we need to be mindful of what we’re taking on. This concluding article discusses some of the considerations that it’s prudent to address before starting the journey.
Although the term ‘GitOps’ has been around since 2017, it’s generally considered to be in the early stages of maturity. The Flux project, for example, spent three years or more developing its original (v1) solution, only to pivot to a complete rewrite with many breaking changes. The rationale for the rewrite was as much about improved tooling for building custom Kubernetes controllers as anything else. Still, inevitably there were some lessons learned concerning the value of certain features. The decision to go all out on a rewrite was a brave one and entirely laudable, but it also reflects the level of maturity of GitOps in general.
We might also point to the short-lived collaboration between the two leading open-source projects, Argo CD and Flux. They were to join forces on the core features associated with GitOps in the form of the GitOps Engine, but the Flux project later decided to pursue its own path. This on-off collaboration between the projects only served to cause confusion and sow seeds of doubt over the longevity of the tools that underpin the GitOps approach.
It’s also fair to say that different people have held different views concerning what the term ‘GitOps’ encompasses. And there are some well-respected people in the community who don’t necessarily believe that GitOps is particularly valuable as an approach at all. It took until October 2021 for the various interested parties to coalesce around a published definition (the OpenGitOps principles) of how a system managed using a GitOps approach should behave. It’s good that it’s here now, but it does suggest that the discipline is still in its formative stages.
Of course, this doesn’t mean that there aren’t early adopters or that the approach is invalid in any way. It just means that organizations need to exercise care when developing their approach to the operational aspects of software and infrastructure deployment using GitOps.
It would be a mistake to think that deploying a few controllers to a Kubernetes cluster and creating some custom resources that reference a Git repo will bring you GitOps nirvana. Nothing could be further from reality! Implementing a GitOps approach is as much about the process and the teams who participate in the process as it is about the tools.
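To make the point concrete, here is a minimal sketch of the kind of custom resources involved in a Flux v2 setup (the repo URL, names, and path are hypothetical placeholders). Notice how little YAML there is; the hard part of GitOps lies elsewhere, in the surrounding process:

```yaml
# A GitRepository resource telling Flux's source controller where to pull from.
# The URL and branch below are placeholders, not a real repo.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/app-config   # hypothetical repo
  ref:
    branch: main
---
# A Kustomization resource telling Flux's kustomize controller
# what to apply from that repo, and how often to reconcile it.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: app-config
  path: ./deploy/production
  prune: true   # remove cluster objects that disappear from Git
```

Applying these two resources is the easy part; deciding who may change what in that repo, and how changes are reviewed and promoted, is where the real work begins.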
Firstly, an organization looking to implement a GitOps workflow for continuous deployment must already be successfully practicing robust automated software delivery, using continuous integration and testing. Without this, little, if any, benefit will be gained by automating deployments with a GitOps tool. At best, you’ll be bolting a final step of considerable complexity onto a largely manual process. And, at worst, you’ll be consistently and automatically deploying flawed software to a Kubernetes cluster. Therefore, it’s important to nail down the software delivery pipeline before extending to the extra step of continuous deployment using GitOps tools.
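As a sketch of what that gate might look like, consider this hypothetical CI workflow (the job names and `make` targets are placeholders, and GitHub Actions is just one possible choice). The essential property is that nothing is published for the GitOps tooling to deploy until the tests have passed:

```yaml
# Hypothetical CI workflow: build and test on every push, and only
# then publish the image tag that the GitOps config repo will reference.
name: ci
on:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test                               # placeholder test command
      - name: Build container image
        run: make image                              # placeholder build command
      - name: Publish image
        run: make push IMAGE_TAG=${{ github.sha }}   # placeholder publish step
```

Only once a pipeline like this is reliable does it make sense to let a GitOps controller deploy its output automatically.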
If you asked a bunch of DevOps personnel from different organizations how many repos they use for their apps and configuration, and how changes are propagated from one environment to the next, you’d get many different answers. That’s because there is no right or wrong way to divvy up the code, config, and responsibilities. There is, however, a set of popular approaches, each with its own pros and cons.
The choice of the best way to structure repos for a particular organization implementing GitOps practices often comes down to control and the management of change approval. On the one hand, it’s desirable to give developers as much freedom as possible, to improve velocity and allow operators to focus on the infrastructure. A self-service paradigm, if you will. At the same time, it’s important to establish some boundaries in terms of who can access which environment and what changes they’re able to make. For example, it may not be appropriate for developers to have access to a production namespace or cluster. Many factors will influence the decision concerning repo structure and process flow, not least the size and scale of an organization and its application portfolio.
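One popular pattern, sketched below with hypothetical names, is a single configuration repo with a shared base and per-environment overlay directories. Because the environments live in separate paths, review rules and write access can differ between them; for instance, merges to the production overlay can be gated by operator approval:

```
app-config/               # hypothetical GitOps configuration repo
├── base/                 # environment-agnostic manifests
│   ├── deployment.yaml
│   └── service.yaml
├── staging/              # overlay developers can merge to freely
│   └── kustomization.yaml
└── production/           # overlay gated by operator review
    └── kustomization.yaml
```

Alternatives include one repo per environment, or per team, which trade simpler access control for more duplication and cross-repo promotion steps.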
Projects that introduce big changes to working practices frequently fail, especially in large organizations. As a result, projects that seek to introduce GitOps into existing workflows have the potential to flounder before their benefits are realized. This can happen for several reasons.
GitOps is a young but exciting new direction for cloud-native adopters to embrace. There are a growing number of tools to choose from and a gradual convergence of opinion on what is involved in implementing GitOps workflows. The competition, expanding community, and emergence of useful patterns all suggest a growing interest and maturity.
Giant Swarm’s managed microservices infrastructure enables enterprises to run agile, resilient, distributed systems at scale, while removing the tasks related to managing the complex underlying infrastructure.