Logging in Kubernetes: why we are bullish on Loki

Łukasz Piątkowski

• Apr 22, 2021

 

TL;DR: We are bullish on Loki — and we have six reasons why we (and maybe you) should be. Our Kubernetes Platform Architect and prolific series writer for this blog, Łukasz Piątkowski, explains. 

For over two years now, we’ve been following Loki. We’ve mentioned it as part of an alternative to the ELK or EFK stack in our Guide to Your Cloud-Native Stack. We’ve investigated the value of using Loki with Grafana. We’ve even made heavy use of it in the series I wrote, where (amongst other topics) I extensively researched Loki to find out which logging tool to offer our customers. Recently, after I took part in a webinar by Grafana (the maintainers of Loki) on deploying it internally, and then put in the work to compare it with our managed EFK, the verdict came in: we are bullish on Loki, and here are six reasons why. 

1. It uses the Apache 2.0 License.

This means that it’s open source and there’s no lock-in. This is table stakes for us. 

2. It’s easy for Prometheus users to get started.

Since Loki is “like Prometheus, just for logs” and most of our customers know and use Prometheus, it’s easier for them to get started with Loki. Loki uses the same concepts of streams and labels; it just applies them to lines of text coming from log files rather than to numeric data from metrics.
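To make the parallel concrete, here is a minimal sketch of how a Prometheus selector maps to a Loki stream selector (the label names are illustrative, not from any particular setup):

```logql
# Prometheus-style metric selector:
#   http_requests_total{namespace="prod", app="nginx"}
# The analogous Loki stream selector, plus a line filter:
{namespace="prod", app="nginx"} |= "GET"
```

The part in curly braces works just like a PromQL label matcher; the `|=` filter then narrows the selected streams to lines containing a substring.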

3. It has native integration with Grafana. 

Grafana is the de facto standard for open source monitoring dashboards. Most of our customers use it as well. Loki’s native integration with Grafana means you can just add it as a data source and you're ready to query your log data.
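As a sketch of how little is needed, this is what adding Loki via Grafana’s data source provisioning could look like (the service URL assumes Loki is reachable in-cluster at `loki:3100`; adjust to your deployment):

```yaml
# e.g. /etc/grafana/provisioning/datasources/loki.yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    # Assumed in-cluster address; replace with your Loki endpoint.
    url: http://loki:3100
```

You can of course also add the data source by hand in the Grafana UI; the result is the same.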

Moreover, you can create a unified Grafana dashboard from mixed data sources. You can have Grafana dashboards that show graphs of your metrics, lists with your logs, and graphs based on your logs. If you use a tracing solution, you can even configure Grafana to jump to your traces from your logs and mix everything up as you need.

4. Its architecture offers a lot of benefits. 

Loki’s architecture allows for good performance and low storage costs, and its separation into components enables selective scaling and easier debugging. Loki takes a different approach to log storage and indexing than the majority of log servers.

Instead of ingesting everything into a full-text index, Loki keeps a small (low-cardinality) index. This index is used only to look up the chunks of data where the full log entries are stored. When a query is executed, the matching chunks are scanned by a distributed, grep-like query service. This keeps performance good, while the small index and simple chunk storage keep the cost of retaining your data low.

It also has very simple storage requirements. As of Loki 2.0, S3-compatible object storage is enough; you don’t need an additional document database. Thanks to the small index and the ability to work with cheap object storage like S3, Loki can be run at a very low cost.
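For illustration, here is a fragment of what a Loki 2.x configuration using only object storage could look like, with the `boltdb-shipper` index store keeping the small index in the same S3-compatible bucket as the chunks (region and bucket name are placeholders):

```yaml
schema_config:
  configs:
    - from: 2021-01-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    shared_store: s3
  aws:
    # Placeholder region and bucket; any S3-compatible endpoint works.
    s3: s3://eu-central-1/my-loki-chunks
```

No Elasticsearch, no Cassandra: index and chunks both land in the bucket.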

5. It has microservice-oriented architecture. 

A great feature of Loki’s architecture is that the reading and writing processes use independent data paths. The services responsible for ingesting logs into the system are almost completely separated from the services performing search queries. This means it’s easier to debug and scale the system. The whole system is separated into services you can scale selectively depending on what your usage pattern looks like.

What else? It’s multi-tenant by default, with hard data separation between tenants. It doesn’t rely on resource-heavy runtimes like the JVM, which makes it better suited to running natively in Kubernetes, meaning shorter startup times. It works over plain HTTP. It’s fully distributed and scalable, with no single point of failure in its architecture.

And lastly, for architecture: its dedicated logging agent, Promtail, is built from the ground up around the Prometheus-inspired idea of labels and label operations (filtering, rewriting, and matching).
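A small sketch of what that looks like in practice: a Promtail scrape config using Prometheus-style Kubernetes service discovery and relabel rules to turn pod metadata into Loki labels (the target label names are illustrative):

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Promote the pod's "app" label to a Loki stream label.
      - source_labels: [__meta_kubernetes_pod_label_app]
        target_label: app
      # Record the namespace on every stream.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
```

If you’ve written Prometheus `relabel_configs` before, this should look immediately familiar.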

6. It offers flexible querying and monitoring.

LogQL is really flexible and is modeled after PromQL. Queries return not only matching log entries but also numeric data series based on aggregates such as the average or the P95 of a distribution. This makes it easy to get metrics and graphs from your logs.
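Two hedged examples of such metric queries, with illustrative label and field names (`duration` assumes your logs are JSON with a numeric field of that name):

```logql
# Per-second rate of error lines over the last 5 minutes, summed per app:
sum by (app) (rate({namespace="prod"} |= "error" [5m]))

# P95 of a numeric "duration" field extracted from JSON log lines:
quantile_over_time(0.95,
  {app="api"} | json | unwrap duration [5m])
```

The first would feel at home in any PromQL dashboard; the second shows how `unwrap` turns a parsed log field into a numeric series you can aggregate.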

Loki has the ability to turn LogQL queries directly into Prometheus metrics without any additional tools.

And lastly, Loki lets you define alerts over your LogQL metrics. And since it is Prometheus-compatible, you can push them to Alertmanager.
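Alert rules for Loki’s ruler use the familiar Prometheus rule format, just with LogQL in the `expr` field. A minimal sketch, with an illustrative threshold and app name:

```yaml
groups:
  - name: app-log-alerts
    rules:
      - alert: HighErrorLogRate
        # LogQL metric query in place of a PromQL expression;
        # the "> 10 lines/sec" threshold is purely illustrative.
        expr: sum(rate({app="myapp"} |= "error" [5m])) > 10
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: High rate of error log lines for myapp
```

Anyone who has written Prometheus alerting rules can read and maintain this without learning a new format.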

Interested in using Loki? Not using anything for logs yet? Giant Swarm now offers managed Loki. Why not give it a try? Slack your Account Engineer to try it today, or let us know if we can help!
