FinTechs - watch out! How to make Kubernetes successful

Aug 20, 2019

FinTech is a relatively new but rapidly expanding sector of the financial services industry. Put simply, FinTech is the application of technology to the provision of financial services, and it is increasingly playing a disruptive role in the way society consumes those services. Innovation is the key ingredient in the rise of FinTech companies (FinTechs), and technology is the essence of the many new products and services that have found their way to market.

But what makes FinTechs any different to the established financial industry that we have come to love (or hate!) over the years? Surely, every organization uses technology to deliver services to its customers?


Continuous Innovation


Yes, technology is a core component of service delivery. But the real difference between FinTechs and their long-established counterparts is the speed at which they deliver new or enhanced services to the market. This rate of innovation is directly related to the enabling technology, culture, and workflows that are brought to bear in developing and delivering those services.

“Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.” (Cloud Native Computing Foundation)

Many FinTechs have turned to cloud-native technology stacks to develop and deliver the software applications that define their innovative services. Cloud-native applications are often deployed in dynamic, elastic environments. They frequently follow an API-driven, containerized, microservices architecture. These architectures facilitate the fast pace of change that’s required to innovate. They also introduce a level of complexity that can’t be handled by manual operational oversight.

To handle the complexity inherent in running cloud-native applications, a fully automated delivery platform is required. Containerized application services need to be orchestrated when they start, scaled up or down in response to demand, and recovered from the occasional failure. Kubernetes has become the de facto delivery platform that provides this flexibility for cloud-native applications. It provides an automated, scalable, and resilient host environment for containers, and it has been adopted by organizations large and small, across a multitude of industries, including the rapidly expanding FinTech sector.
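To make this concrete, here’s a minimal sketch of how an application service is declared to Kubernetes. The service name and image are hypothetical; the point is that the desired state is described once, and Kubernetes keeps the declared number of replicas running, restarting or rescheduling containers when they fail.

# Hypothetical Deployment for a containerized payments API.
# Kubernetes continuously reconciles the cluster towards this declared state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
spec:
  replicas: 3                     # keep three copies running at all times
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
        - name: payments-api
          image: registry.example.com/payments-api:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080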


Reliable Delivery


Whether it’s a new way of banking or the provision of cryptocurrency services, many people view FinTechs as a refreshing alternative to traditional financial service providers. But the services they deliver often change established norms, and potential service consumers are wary of change. To win the confidence of a consumer market, FinTech services need to be delivered reliably and with a flawless user experience.

With a multitude of built-in resilience features, Kubernetes can provide that level of reliability for FinTech services. When Kubernetes clusters are built, configured, and managed by engineers who understand these features, service consumers benefit from secure, uninterrupted service delivery. Applications are distributed across the cluster’s nodes, which can sit in different availability zones and regions, or even on different cloud providers’ infrastructure. Add to this the ability to spin up identical non-production clusters for development and test, and it’s clear that Kubernetes significantly aids the reliable delivery of cloud-native applications.
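As an illustration of how that distribution can be expressed (the application name is hypothetical, and the exact zone label depends on your Kubernetes version and provider), a pod anti-affinity rule like the following asks the scheduler to prefer spreading replicas across availability zones, so the loss of a single zone doesn’t take the whole service down.

# Hypothetical excerpt from a pod template: prefer placing replicas of
# payments-api in different availability zones.
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            topologyKey: failure-domain.beta.kubernetes.io/zone   # newer clusters: topology.kubernetes.io/zone
            labelSelector:
              matchLabels:
                app: payments-api
  containers:
    - name: payments-api
      image: registry.example.com/payments-api:1.0.0   # hypothetical image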


Rapid Growth


FinTechs are often characterized by rapid growth. Starling Bank, for example, grew the number of its current accounts by 500% in just nine months. To support such rapid growth, organizations need to get their technology platform right; otherwise, technical problems will hinder growth. Particularly important is removing any technology obstacles to application scalability. It’s not unusual for cloud-native applications to make use of more than one programming language, and containers support this approach because they are language and framework agnostic.

It’s not just application components that need to scale in order to maintain adequate service levels. As demand for a FinTech service grows, individual components need scaling up to handle that demand, which in turn calls for additional compute resources to host them. Achieving this successfully in a complex, distributed environment requires automation. Once again, Kubernetes has this covered: its Horizontal Pod Autoscaler and Cluster Autoscaler features help organizations cope with fluctuating demand.
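As a rough sketch of what this looks like in practice (the Deployment name and thresholds are hypothetical), a Horizontal Pod Autoscaler tells Kubernetes to add or remove replicas to keep average CPU utilization around a target, while the Cluster Autoscaler adds or removes worker nodes as those replicas demand more or less compute.

# Hypothetical HorizontalPodAutoscaler: scale the payments-api Deployment
# between 3 and 20 replicas, targeting roughly 70% average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: payments-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: payments-api
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70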


What’s the Catch?


This all seems very promising, but is it too good to be true? While cloud-native adoption can bring many benefits to the endeavors of FinTechs, it should also come with a warning: get it wrong, and you’ll pay with a lengthy and costly diversion, and maybe even your business. As an active participant in cloud-native technology communities, Giant Swarm gets to see and hear about some of the miscalculations that are made in the quest for cloud-native adoption. We’d like to share one story that characterizes some of the pitfalls that can beset the inexperienced. We share it to open people’s eyes to the realities of cloud-native adoption: it’s an evolving journey, rather than a sprint. While it is a real-world scenario, we have changed names and specifics to protect the identity of the FinTech in question.


Scenario


The scenario involves a successful FinTech company with an innovative financial solution, implemented in software and deployed to a traditional VM-oriented environment on a public cloud provider’s infrastructure. The company wanted to expand its service offerings by adding new features and applications, with the goal of improving its appeal to existing and prospective customers.

With a mature delivery pipeline already running on the existing platform and a talented engineering team in place, a decision was made to containerize their applications and deploy their workloads to a Kubernetes cluster. The goal was to rapidly expand the business into new industry segments with new software applications, while simultaneously achieving compliance with an industry security standard. The former would allow them to address new markets expeditiously; the latter would provide the credibility and integrity their customers demanded.

The rationale behind the decision to use containerized applications and Kubernetes as the delivery platform made perfect sense. Yet the endeavor ended in failure. So, what went wrong?


The Pitfalls


The company knew that trying to migrate all of their applications in one go would be risky. They selected a non-critical application to containerize and subsequently deploy to Kubernetes. So far, so good.


Skills Acquisition


Popular technologies in the open-source world can be very seductive; it seems easy to dive in without first assessing what’s involved. It’s relatively straightforward to spin up a Kubernetes cluster for test and development. However, building out a secure, stable Kubernetes platform for production at scale is not so trivial. While the team had exceptional technical skills, acquiring the specific skills for building and operating a Kubernetes cluster is a long process, punctuated by failures. And it’s not just deep knowledge of Kubernetes itself that’s required: there is a whole universe of add-on components for enabling services that need to be reviewed, selected, installed, maintained, and supported.

Skills can be brought in, of course, but the engineers still need to fully understand every aspect of the platform to successfully handle the issues that inevitably arise.


Conflicting Priorities


The unplanned delay to skills acquisition had a ripple effect. Coupled with the business’s urgency to achieve its goals, it meant the team were pressured to start deployments as soon as the platform was functional, while it was still being built out to production grade. On top of this, there was still the need to provide operational support for the existing platform. It’s easy to see how conflicting priorities started to erode the team’s effectiveness, creating the perfect setting for the accrual of technical debt and costly mistakes.


Application Packaging


In a perfect world, we’d start with a greenfield project and re-engineer our applications using a microservices architecture. Life is not always like this. While it’s possible to package legacy applications into containers, it doesn’t always work out well. The first legacy application the FinTech containerized was not suited to running in a complex, distributed environment, which resulted in numerous application problems in production and further increased the burden on the engineering team.


Outcome


Developers lost confidence in their ability to successfully deploy application workloads. The engineering team couldn’t rely on having a stable production environment. Eventually, the project was canned.

This real-life example ended in failure. The decision to adopt a cloud-native approach was sound from a technical perspective, but decisions of this nature must be considered from a business perspective too. Is the business confident of achieving the value it seeks? Can it quantify the investment it needs to make? Does it understand the risks involved?

In this scenario the business elected to go it alone, and to rely on its own capabilities to implement Kubernetes. Lack of skills and preparedness determined the outcome. Not everyone who chooses this route is destined for failure, but it does necessitate a substantial investment. Zalando, for example, required a team of 18 engineers to build out a Kubernetes platform, and 9 engineers to manage it on a daily basis.

A different choice might have been to entrust provisioning of the Kubernetes platform to a Kubernetes platform provider. This approach would have alleviated the risk of platform build and configuration, but would have kept the day-to-day running of the platform in-house. It sounds like a sensible approach, yet it still requires investing in a dedicated, skilled team, and the business still needs to commit to a 24x7 operation. While the platform provider’s service levels offer some backing, they never match the short resolution times internal teams require for production operations.

An option that might suit businesses with tight deadlines is to choose a partner to manage the entire infrastructure on their behalf. The skills required for platform updates, patching, and support are no longer a burden that consumes an entire team. This leaves engineering teams to concentrate on what they are good at: optimizing and automating the delivery of application deployments and maintaining the availability of the applications in production. DevOps engineers stay focused on the core business and don’t get distracted by the infrastructure. It also removes the risks associated with vendor lock-in.


Conclusion


Of course, this is just one story amongst a large number that tell of success for organizations using Kubernetes. As we’re focusing on FinTech in this discussion, it’s worth pointing out that a good number of these exciting businesses use cloud-native technologies (including Kubernetes) to run the applications that define their very business. Each has a different story to tell. What is common to the majority is that cloud-native technology enables them to disrupt and challenge an industry that’s been set in its ways for far too long. Kubernetes saves a lot of time and effort, because functionality is now provided through the infrastructure, but it doesn’t eliminate the complexity that applications had to handle before.
