
Treat the edge like infrastructure, not an exception

Written by Manuel Gawert | Feb 3, 2026

When teams tell us they’re struggling with Kubernetes at the edge, it’s rarely the cluster itself that’s failing. It’s the context.

A company wants to deploy workloads to warehouses, factories, hospitals, or substations, and they’ve decided Kubernetes is the right approach. Which, to be fair, it often is. But what they’re asking their platform team to do isn’t just “run Kubernetes in more places.” It’s to deliver the same platform experience, with the same reliability, compliance, and developer workflows, in environments that were never designed for this kind of autonomy, resilience, or scale.

That’s where things get complicated. Not because Kubernetes can’t run at the edge (it can), but because most of the assumptions behind cloud native tooling start to break. You can’t always rely on stable connectivity. You might not be able to push updates from a central control plane. Sometimes, you can’t even send telemetry back upstream. In many cases, there’s no engineer on-site to help when something goes wrong.

Still, the expectations are unchanged. Same lifecycle. Same compliance. Same control.

From a platform perspective, the ask is clear: treat the edge like it’s part of the platform. But most tools treat it like an exception. That’s when GitOps pipelines start to fork. Tooling gets duplicated. Monitoring becomes brittle and only partially automated. Security policies drift. What began as a unified platform starts to splinter into a patchwork of fragile, one-off solutions. The edge turns into a series of special cases instead of infrastructure you can trust.

We’ve seen this happen across industries like logistics, energy, healthcare, and telecom. The details vary, but the outcome is often the same. Edge deployments become an operational burden. Platform teams are stretched thin, asked to maintain dozens or hundreds of sites with tools that were never designed to scale in this way.

At Giant Swarm, we decided early on not to treat the edge as a bolt-on. Instead, we made it the default. That meant building a platform that operates in disconnected environments, where uptime matters but network access can’t be guaranteed. Configuration is pulled by the edge rather than pushed. Workloads continue running if the network drops. Edge nodes are treated as first-class Kubernetes citizens. Everything runs within the customer’s infrastructure, whether that’s in the cloud, a data center, or a sovereign environment.
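What does “pulled by the edge” look like in practice? Here’s a minimal sketch using Flux, one common GitOps agent (the repository URL and per-site path below are illustrative, not our actual layout):

```yaml
# Each edge cluster runs its own agent and pulls configuration on a schedule.
# Nothing is pushed in from a central control plane.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: edge-site-config
  namespace: flux-system
spec:
  interval: 10m                                      # the edge decides when to fetch
  url: https://git.example.com/platform/edge-sites   # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: edge-site-workloads
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: edge-site-config
  path: ./sites/warehouse-042   # hypothetical per-site overlay
  prune: true                   # removes resources deleted from Git
```

Because the last applied state lives in the cluster itself, workloads keep running if the Git server becomes unreachable; reconciliation simply resumes when connectivity returns.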

That’s also why we rely on declarative and version-controlled GitOps workflows, and partner with projects like KubeEdge. And it’s why our SRE team carries the pager, even for your smallest sites.
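For the disconnected case specifically, KubeEdge keeps a local metadata store on each edge node, so pods can be restarted and queried without a round trip to the cloud. A trimmed edgecore configuration along those lines (field names follow recent KubeEdge releases, so verify against your version; the endpoint is a placeholder):

```yaml
# Sketch of an edgecore.yaml excerpt, not a complete configuration.
apiVersion: edgecore.config.kubeedge.io/v1alpha2
kind: EdgeCore
modules:
  edgeHub:
    websocket:
      enable: true
      server: cloud.example.com:10000   # hypothetical CloudCore endpoint
    heartbeat: 15                       # seconds between keepalives
  metaManager:
    # Cluster metadata is cached locally, so the node can restart pods
    # and serve API reads while disconnected from the cloud.
    metaServer:
      enable: true
```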

This kind of design isn’t just about resilience. It’s about enabling real autonomy at a time when more and more business-critical operations are moving to the edge. Connected devices, AI-driven automation, and real-time data are becoming foundational. Meanwhile, the pressure to cut cloud egress costs, maintain compliance with evolving regulations, and operate with fewer people on the ground is growing quickly.

Edge is no longer a special case. It’s where business happens. And the only way to scale it is to treat it like infrastructure from the start.

During their recent talk, our colleagues Antonia and Xavier put it well:

“Running Kubernetes at the edge isn’t just about scaling infrastructure. It’s about making autonomy possible at scale without compromising the platform.”

That’s the mindset we’ve adopted. And it’s what we’re building toward.

Platform teams today are being asked to support a growing range of environments. Some of them are challenging. Some are remote. But none of them should require a separate set of assumptions. If you’re responsible for scaling and supporting infrastructure that spans cloud, data center, and edge, you deserve tools, and partners, that help you do it with clarity and consistency.