Apr 26, 2022
Software supply chain integrity is a hot topic at the moment, with a considerable amount of effort being directed toward creating and promoting preventive tools and techniques. This is not surprising given the number of recent high-profile incidents that can be traced back to nefarious supply chain meddling. The SolarWinds Sunburst exploit, which occurred less than two years ago, had far-reaching consequences, while the Log4j compromise at the end of last year serves as a fresh reminder of the importance of supply chain awareness. There are countless other examples.
But this is not a new problem, so why is it still so difficult to defend against this type of attack? The answer is multi-faceted. We rely on a lot of third-party library dependencies as we author software; sometimes these are direct dependencies, but all too frequently there are transitive dependencies at play too. Do we know what these dependencies actually do? Can we be sure they’ve been authored by whoever claims to have authored them? Are we confident they haven’t been tampered with by a man-in-the-middle attack on download? Trying to manually manage the provenance and integrity of these many disparate code sources is a task akin to painting the Forth Bridge.
In an ideal world, we should be able to easily verify the authenticity and integrity of these dependencies, and we, in turn, need to provide the same transparency when making our own software available to would-be consumers. The assumed norm is that we digitally sign the software artifacts we produce in order to provide this transparency. But this requires us to be cryptography ninjas; software engineers want to be experts in writing code, not in the intricacies of hash functions, crypto toolkits, and their esoteric command-line incantations. Even given a set of tools, there is no guarantee that users across the industry will adhere to the same conventions when publishing signatures for their own software or consuming those of others. The effort that signing has demanded thus far is why widespread adoption has been so elusive, and why it’s been so difficult to insulate ourselves from supply chain attacks. It’s easier just not to bother.
Before we delve into recent developments that seek to alleviate some of the headaches in securing the software supply chain, let’s quickly address the basic needs. In Kubernetes environments, where containerized applications are the packaging medium of choice, software developers should ideally a) create a software bill of materials (SBOM) to reflect the content of the container image, and b) digitally sign the image in order to allow a consumer to prove the authenticity and integrity of the image. You can find out more about the rationale for SBOMs here, and an in-depth explanation of digital signing can be found here.
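As a concrete sketch of need a), generating and publishing an SBOM for a container image might look like the following. This assumes the open-source Syft and Cosign CLIs are installed, and the image reference is hypothetical:

```shell
# Hypothetical image reference; assumes syft and cosign are installed
# and you have push access to the registry.
IMAGE=registry.example.com/myorg/myapp:1.0

# Generate an SBOM describing the image contents, in SPDX JSON format.
syft $IMAGE -o spdx-json > sbom.spdx.json

# Attach the SBOM to the image in the OCI registry, so consumers can
# retrieve it alongside the image itself.
cosign attach sbom --sbom sbom.spdx.json $IMAGE
```

Other SBOM formats (such as CycloneDX) and tools are available; the point is simply that the SBOM travels with the image rather than living in a wiki page.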
As we’ve already suggested, the process involved in generating these artifacts is not without its complexity, and the ongoing maintenance is irksome, to say the least. Generating an SBOM for an artifact that contains a large number of dependencies can be difficult, and the list must be perpetually kept up to date as dependencies change. But the story doesn’t end at build time. Management of the keys used for signing build artifacts adds complexity long after the software has shipped. Private keys from an asymmetric pair must be kept safe, but will eventually need to be rotated, perhaps on a regular basis, when employees depart, or after a compromise. It’s just as well, then, that a number of parties with a vested interest have recently risen to the challenge of demystifying and simplifying software supply chain management.
Eliminating the need to manage asymmetric key pairs and the associated X.509 certificate from the signing process would take a lot of the pain away from signing container images (or any other artifact, for that matter). That’s exactly what ‘keyless signing’, a technique that has recently emerged from the Sigstore ecosystem, hopes to achieve. The term ‘keyless’ is a bit of a misnomer, however, because asymmetric keys and an X.509 certificate are still essential ingredients in the signing process. However, instead of relying on traditional long-lived keys and certificates, these essential ingredients take on an ephemeral existence. Under a keyless signing flow, artifacts are signed using keys that are valid only for the time needed to actually perform the signature – around twenty minutes. The rest of the Sigstore ecosystem in turn supports verifying signatures well into the future without the headache of key revocation, rotation, and certificate renewal. Perfect! Let’s discuss the tools that allow this to happen.
Cosign is a tool that enables a developer to sign their container images and push the signatures to an OCI registry, where they can co-exist with the image itself. This enables downstream image consumers to verify the signatures or other attestations associated with the image. Cosign allows you to bring your own keys, or it can generate a new, long-lived pair for you to use. But better still, it can also relieve you of worrying about keys altogether, by generating an ephemeral key pair during the signing process, one which exists only in memory and never touches a disk.
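For the bring-your-own-keys route, a minimal sketch (with a hypothetical image name) might be:

```shell
# Generate a long-lived key pair (cosign.key / cosign.pub); cosign
# prompts for a password to encrypt the private key at rest.
cosign generate-key-pair

# Sign the image and push the signature to the registry.
cosign sign --key cosign.key registry.example.com/myorg/myapp:1.0

# Anyone holding the public key can later verify the signature.
cosign verify --key cosign.pub registry.example.com/myorg/myapp:1.0
```

Note that this still leaves you holding a private key that must be protected and eventually rotated, which is exactly the burden keyless signing sets out to remove.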
During the keyless signing process, Cosign makes use of another Sigstore component, Fulcio. Fulcio is a fully-fledged, public, free-to-use root Certificate Authority (CA), which issues a signing certificate based on a signer’s identity. In terms of what they’re trying to achieve, the Sigstore project draws a parallel with the Let’s Encrypt CA (and the ACME protocol) and what it sought to provide in democratizing the acquisition and management of TLS certificates. In other words, it shouldn’t be difficult or expensive to acquire a certificate for digital signing. Instead of proving control of a domain name, as with an ACME challenge, the signer must prove their identity, so that the identity can be safely associated with a signing certificate. Fulcio uses OpenID Connect to achieve this, using GitHub, Google, Microsoft, or your identity provider of choice. These 'large-scale' identity providers are trusted by the Sigstore root for any email address that they can issue a token for. But, if you'd prefer to establish your own OIDC endpoint for a given domain, then the Fulcio project is open to requests for adding the domain to the trust flow. It is also possible to self-host Fulcio to run an entirely private signing ecosystem. Either way, Fulcio exchanges the token provided by the identity provider for a short-lived signing certificate, with the verified identity reflected in the certificate’s subject. In practice, the keyless flow looks like this:
$ export COSIGN_EXPERIMENTAL=1
$ cosign sign st0nez/nginxhello:signed
Generating ephemeral keys...
Retrieving signed certificate...
Your browser will now be opened to:
Successfully verified SCT...
tlog entry created with index: 1936653
Pushing signature to: st0nez/nginxhello
So, with Cosign and Fulcio working in conjunction, we get a signing certificate and can sign container images with ease. But the ephemeral nature of the keys and certificate raises another question: how can a consumer of the signed image verify the signature once the certificate and keys have expired? This is where the third component of the triumvirate comes into play: Rekor. Rekor provides an immutable, public, tamper-proof log of signatures, which Cosign appends to when it signs a container image on our behalf. It uploads an entry to Rekor’s ledger containing everything needed to verify the authenticity and content of the signed container image. Later, even though the certificate and keys will have expired, the logged entry provides evidence that the signature was valid at the time of signing. These entries can therefore be reliably used to verify the signature, a check that Cosign can also carry out on our behalf.
$ cosign verify st0nez/nginxhello:signed | jq '.'
Verification for index.docker.io/st0nez/nginxhello:signed --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- Existence of the claims in the transparency log was verified offline
- Any certificates were verified against the Fulcio roots.
"type": "cosign container image signature"
"body": "eyJhcGlWZXJzaW9uIjo…ifX19fQ==",
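Because the signing transcript earlier reported the index of the transparency log entry, anyone can also inspect that entry in Rekor directly. A sketch using the rekor-cli tool (assuming it is installed) might be:

```shell
# Fetch the public transparency log entry created during signing,
# using the log index that cosign reported (1936653 above).
rekor-cli get --log-index 1936653
```

The returned entry includes the signature, the public key material, and the timestamped inclusion proof, which is what makes verification possible long after the ephemeral certificate has expired.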
In an ideal world, we’d want the creation and verification of signed container images to be part of an automated workflow. In addition to building and testing an application as a container image, a build system might also run a vulnerability scan and then sign the image after it’s pushed to a registry. Using an in-toto attestation, we could even go so far as to sign that vulnerability scan report to provide stronger guarantees to users that our image was scanned by a trusted party. At the other end of the delivery pipeline, it would be expedient to only allow workloads whose images have been signed to be deployed to a Kubernetes cluster. Verification of a container image’s signature at the point of deployment should give us assurance that it was built by a trusted party and has not been tampered with between push and pull.
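Signing a scan report as an in-toto attestation can also be done with Cosign. A hedged sketch, assuming the scanner has written its report to a file named scan.json and using a hypothetical image name:

```shell
# Wrap scan.json in an in-toto attestation, sign it (keyless mode
# here), and push it to the registry alongside the image.
COSIGN_EXPERIMENTAL=1 cosign attest \
  --predicate scan.json \
  --type vuln \
  registry.example.com/myorg/myapp:1.0
```

A consumer can then require not just a signature on the image, but a signed statement that a particular kind of scan was performed on it.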
A deployment-gating requirement like this is effectively a policy statement, and policy enforcement is a capability that the Kubernetes community is working hard to establish. One of the vanguard tools in this area is Kyverno, an admission controller that allows or denies Kubernetes API requests based on policy rules expressed as Kubernetes custom resources. One of the policy checks that Kyverno offers is the verification of a pod’s signed image against a provided public key, using the verifyImages rule of a ClusterPolicy resource:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image
spec:
  validationFailureAction: enforce
  rules:
    - name: check-image
      match:
        resources:
          kinds:
            - Pod
      verifyImages:
        - image: "st0nez/nginxhello:signed"
          key: |-
            -----BEGIN PUBLIC KEY-----
            -----END PUBLIC KEY-----
This is akin to running the cosign verify command in a terminal. If the image’s signature can be verified by Kyverno using the public key, the workload can be admitted to the cluster, but it will be rejected if the verification fails. Similar policies can be used to enforce other admission requirements, such as requiring images to be pulled from known trusted registries, or requiring that images undergo malware scans prior to admission. Using Sigstore in conjunction with these Kyverno policies allows administrators tighter control of image provenance in a cluster.
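As a sketch of one such companion policy, a Kyverno rule restricting pods to a hypothetical trusted registry could look like this:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-registries
spec:
  validationFailureAction: enforce
  rules:
    - name: trusted-registry
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Images must be pulled from registry.example.com"
        pattern:
          spec:
            containers:
              - image: "registry.example.com/*"
```

Combining registry restrictions with signature verification narrows the attack surface from two directions: where an image may come from, and who must have signed it.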
The Sigstore project and its tools represent a significant step in the right direction for securing the software supply chain. And, while Sigstore as a whole service is not even GA yet, it has already signed more than 1.5 million artifacts. Based on the surge of community support and adoption it has seen thus far, we expect there will be much more to come from this project in the future.
Sigstore is not the end of the story, however, as evidenced by the work conducted by the CNCF Security Technical Advisory Group and summarized in their Software Supply Chain Best Practices whitepaper. Projects like in-toto and the Tekton sub-project, Tekton Chains, are targeted at securing the software supply chain across the entire software development lifecycle, from code right through to the deployment of applications in production. There’s a long way to go, but Sigstore, and keyless signing and verification, are great advances on the journey.