Without continuous improvement in software security, you’re not standing still — you’re walking backward into oncoming traffic. Attack vectors multiply and evolve daily, probing for the weakest link in your software supply chain.
Cybersecurity Ventures forecasts that the global cost of software supply chain attacks will reach nearly $138 billion by 2031, up from $60 billion in 2025 and $46 billion in 2023. A single overlooked vulnerability isn’t just a flaw; it’s an open invitation for compromise, potentially threatening your entire system. The cost of a breach doesn’t stop with your software — it extends to your reputation and customer trust, which are far harder to rebuild.
In this post, we’ll explore how Docker tools such as Docker Hub trusted content, Docker Scout, Hardened Docker Desktop, and Image Access Management provide built-in security, governance, and visibility, helping your team innovate faster while staying protected.
Securing the supply chain
Your software supply chain isn’t just an automated sequence of tools and processes. It’s a promise — to your customers, team, and future. Promises are fragile. The cracks can start to show with every dependency, third-party integration, and production push. Tools like Image Access Management help protect your supply chain by providing granular control over who can pull, share, or modify images, ensuring only trusted team members access sensitive assets. Meanwhile, Hardened Docker Desktop ensures developers work in a secure, tamper-proof environment, giving your team confidence that development is aligned with enterprise security standards. The solution isn’t to slow down or second-guess; it’s to continuously improve your software supply chain security with measures such as automated vulnerability scans and trusted content from Docker Hub.
A breach is more than a line item in the budget. Customers ask themselves, “If they couldn’t protect this, what else can’t they protect?” Downtime halts innovation while compliance fines mount and engineering effort is rerouted to forensic security analysis. The brand you spent years perfecting could be reduced to a cautionary tale. Regardless of how innovative your product is, it’s not trusted if it’s not secure.
Organizations must stay prepared by regularly updating their security measures and embracing new technologies to outpace evolving threats. As highlighted in the article Rising Tide of Software Supply Chain Attacks: An Urgent Problem, software supply chain attacks are increasingly targeting critical points in development workflows, such as third-party dependencies and build environments. High-profile incidents like the SolarWinds attack have demonstrated how adversaries exploit trust relationships and weaknesses in widely used components to cause widespread damage.
Preventing security problems from the start
Preventing attacks like the SolarWinds breach requires prioritizing code integrity and adopting secure software development practices. Tools like Docker Scout seamlessly integrate security into developers’ workflows, enabling proactive identification of vulnerabilities in dependencies and ensuring that trusted components form the backbone of your applications.
Docker Hub’s trusted content and Docker Scout’s policy evaluation features help ensure that your organization uses compliant and secure images. Docker Official Images (DOI) provide a robust foundation for deployments, mitigating risks from untrusted components. To extend this security foundation, Image Access Management allows teams to enforce image-sharing policies and restrict access to sensitive components, preventing accidental exposure or misuse. For local development, Hardened Docker Desktop ensures that developers operate in a secure, enterprise-grade environment, minimizing risks from the outset. This combination of tools enables your engineering team to put out fires and, more importantly, prevent them from starting in the first place.
Building guardrails
Governance isn’t a roadblock; it’s the blueprint for progress. The problem is that some companies treat security like a fire extinguisher — something you grab when things go wrong. That is not a viable strategy in the long run. Real innovation happens when security guardrails are so well-designed that they feel like open highways, empowering teams to move fast without compromising safety.
A structured policy lifecycle loop — mapping connections, planning changes, deploying cleanly, and retiring the dead weight — turns governance into your competitive edge. Automate it, and you’re not just checking boxes; you’re giving your teams the freedom to move fast and trust the road ahead.
Continuous improvement on security policy management doesn’t have to feel like a bureaucratic chokehold. Docker provides a streamlined workflow to secure your software supply chain effectively. Docker Scout integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and detailed reports and recommendations to help teams address issues before code reaches production.
With the introduction of Docker Health Scores — a security grading system for container images — teams gain a clear and actionable snapshot of their image security posture. These scores empower developers to prioritize remediation efforts and continuously improve their software’s security from code to production.
Keeping up with continuous improvement
Security threats aren’t slowing down. New attack vectors and vulnerabilities emerge every day. With cybercrime costs expected to rise from $9.22 trillion in 2024 to $13.82 trillion by 2028, organizations face a critical choice: adapt to this evolving threat landscape or risk falling behind, exposing themselves to escalating costs and reputational damage. Continuous improvement in software security isn’t a luxury; it’s how you build and maintain customer trust, assuring them that every fresh deployment is more secure than the one before. The alternative is bearing the rising cost of inevitable software supply chain attacks.
Best practices for securing the software supply chain involve integrating vulnerability scans early in the development lifecycle, leveraging verified content from trusted sources, and implementing governance policies to ensure consistent compliance standards without manual intervention. Continuous monitoring of vulnerabilities and enforcing runtime policies help maintain security at scale, adapting to the dynamic nature of modern software ecosystems.
Start today
Securing your software supply chain is a journey of continuous improvement. With Docker’s tools, you can empower your teams to build and deploy software securely, ensuring vulnerabilities are addressed before they become liabilities.
Explore Docker Hub, Docker Scout, Hardened Docker Desktop, and Image Access Management to embed security into every stage of development. From granular control over image access to tamper-proof local environments, Docker’s suite of tools helps safeguard your innovation, protect your reputation, and empower your organization to thrive in a dynamic ecosystem.
Learn more
Docker Scout: Integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and actionable recommendations to address issues before they reach production.
Docker Health Scores: A security grading system for container images, offering teams clear insights into their image security posture.
Docker Hub: Access trusted, verified content, including Docker Official Images (DOI), to build secure and compliant software applications.
Docker Official Images (DOI): A curated set of high-quality images that provide a secure foundation for your containerized applications.
Image Access Management (IAM): Enforce image-sharing policies and restrict access to sensitive components, ensuring only trusted team members access critical assets.
Hardened Docker Desktop: A tamper-proof, enterprise-grade development environment that aligns with security standards to minimize risks from local development.
As security threats become more and more prevalent, building software with security top of mind is essential. Security has become an increasing concern for container workloads specifically and, commensurately, for container base-image choice. Many conversations around choosing a secure base image focus on CVE counts, but security involves much more than that.
One organization that has been leading the way in secure software development is the Debian Project. In this post, I will outline how and why Debian operates as a secure basis for development.
For more than 30 years, Debian’s diverse group of volunteers has provided a free, open, stable, and secure GNU/Linux distribution. Debian’s emphasis on engineering excellence and clean design, as well as its wide variety of packages and supported architectures, have made it not only a widely used distribution in its own right but also a meta-distribution. Many other Linux distributions, such as Ubuntu, Linux Mint, and Kali Linux, are built on top of Debian, as are many Docker Official Images (DOI). In fact, more than 1,000 Docker Official Images variants use the debian DOI or the Debian-derived ubuntu DOI as their base image.
Why Debian?
As a bit of a disclaimer, I have been using Debian GNU/Linux for a long time. I remember installing Debian from floppy disks in the 1990s on a PC that I cobbled together, and later reinstalling so I could test prerelease versions of the netinst network installer. Installing over the network took a while using a 56-kbps modem. At those network speeds, you had to be very particular about which packages you chose in dselect.
Having used a few other distributions before trying Debian, I still remember being amazed by how well-organized and architected the system was. No dangling or broken dependencies. No download failures. No incompatible shared libraries. No package conflicts, but rather a thoughtful handling of packages providing similar functionality.
Much has changed over the years: the floppies are gone, dselect has been retired, my network connection speed has increased by a few orders of magnitude, and now I “install” Debian via docker pull debian. What has not changed is the feeling of amazement I have toward Debian and its community.
Open source software and security
Despite the achievements of the Debian project and the many other projects it has spawned, it is not without detractors. Like many other open source projects, Debian has received its share of criticism in the past few years from opportunists lamenting the state of open source security. Writing about the software supply chain while bemoaning high-profile CVEs and pointing to malware that has been uploaded to an open source package ecosystem, such as PyPI or NPM, has become all too common.
The pernicious assumption in such articles is that open source software is the problem. We know this is not the case. We’ve been through this before. Back when I was installing Debian over a 56-kbps modem, all sorts of fear, uncertainty, and doubt (FUD) was being spread by various proprietary software vendors. We learned then that open source is not a security problem — it is a security solution.
Being open source does not automatically convey an improved security status compared to closed-source software, but it does provide significant advantages. In his Secure Programming HOWTO, David Wheeler provides a balanced summary of the relationship between open source software and security. A purported advantage conveyed by closed-source software is the nondisclosure of its source code, but we know that security through obscurity is no security at all.
The transparency of open source software and open ecosystems allows us to better know our security posture. Openness allows for the rapid identification and remediation of vulnerabilities. Openness enables the vast majority of the security and supply chain tooling that developers regularly use. How many closed-source tools regularly publish CVEs? With proprietary software, you often only find out about a vulnerability after it is too late.
Debian’s rapid response strategy
Debian has been criticized for moving too slowly on the security front. But this narrative, like the open vs. closed-source narrative, captures neither the nuance nor the reality. Although several distributions wait to publish CVEs until a fixed version is available, Debian opts for complete transparency and urgency when communicating security information to its users.
Furthermore, Debian maintainers are not a mindless fleet of automatons hastily applying patches and releasing new package versions. As a rule, Debian maintainers are experts among experts, deeply steeped in software and delivery engineering, open source culture, and the software they package.
zlib vulnerability example
A recent zlib vulnerability, CVE-2023-45853, provides an insightful example of the Debian project’s diligent, thorough approach to security. Several distributions grabbed a patch for the vulnerability, applied it, rebuilt, packaged, and released a new zlib package. The Debian security community took a closer look.
As mentioned in the CVE summary, the vulnerability was in minizip, which is a utility under the contrib directory of the zlib source code. No minizip source files are compiled into the zlib library, libz. As such, this vulnerability did not actually affect any zlib packages.
If that were where the story had ended, the only harm would be in updating a package unnecessarily. But the story did not end there. As detailed in the Debian bug thread, the offending minizip code was copied (i.e., vendored) and used in a lot of other widely used software. In fact, the vendored minizip code in both Chromium and Node.js was patched about a month before the zlib CVE was even published.
Unfortunately, other commonly used software packages also had vendored copies of minizip that were still vulnerable. Thanks to the diligence of the Debian project, either the patch was applied to those projects as well, or they were compiled against the patched system minizip (not zlib!) dev package rather than the vendored version. In other distributions, those buggy vendored copies are in some cases still being compiled into software packages, with nary a mention in any CVE.
Thinking beyond CVEs
In the past 30 years, we have seen an astronomical increase in the role open source software plays in the tech industry. Despite the productivity gains that software engineers get by leveraging the massive amount of high-quality open source software available, we are once again hearing the same FUD we heard in the early days of open source.
The next time you see an article about the dangers lurking in your open source dependencies, don’t be afraid to look past the headlines and question the assumptions. Open ecosystems lead to secure software, and the Debian project provides a model we would all do well to emulate. Debian’s goal is security, which encompasses a lot more than a report showing zero CVEs. Consumers of operating systems and container images would be wise to understand the difference.
So go ahead and build on top of the debian DOI. FROM debian is never a bad way to start a Dockerfile!
Docker Official Images are a curated set of Docker repositories hosted on Docker Hub that provide a wide range of pre-configured images for popular language runtimes and frameworks, cloud-first utilities, data stores, and Linux distributions. These images are maintained and vetted, ensuring they meet best practices for security, usability, and versioning, making it easier for developers to deploy and run applications consistently across different environments.
Docker Official Images are an important component of Docker’s commitment to the security of both the software supply chain and open source software. Docker Official Images provide thousands of images you can use directly or as a base image when building your own images. For example, there are Docker Official Images for Alpine Linux, NGINX, Ubuntu, PostgreSQL, Python, and Node.js. Visit Docker Hub to search through the currently available Docker Official Images.
In this blog post, we address three common misconceptions about Docker Official Images and outline seven ways they help secure the software supply chain.
3 common misconceptions about Docker Official Images
Even though Docker Official Images have been around for more than a decade and have been used billions of times, they are somewhat misunderstood. Who “owns” Docker Official Images? What is with all those tags? How should you use Docker Official Images? Let’s address some of the more common misconceptions.
Misconception 1: Docker Official Images are controlled by Docker
Docker Official Images are maintained through a partnership between upstream maintainers, community volunteers, and Docker engineers. External developers maintain the majority of Docker Official Images Dockerfiles, with Docker engineers providing insight and review to ensure best practices and uniformity across the Docker Official Images catalog. Additionally, Docker provides and maintains the Docker Official Images build infrastructure and logic, ensuring consistent and secure build environments that allow Docker Official Images to support more than 10 architecture/operating system combinations.
Misconception 2: Docker Official Images are designed for a single use case
Most Docker Official Images repositories offer several image variants and maintain multiple supported versions. In other words, the latest tag of a Docker Official Image might not be the right choice for your use case.
Docker Official Images tags
The documentation for each Docker Official Images repository contains a “Supported tags and respective Dockerfile links” section that lists all the current tags with links to the Dockerfiles that created the image with those tags (Figure 1). This section can be a little intimidating for first-time users, but keeping in mind a few conventions will allow even novices to understand what image variants are available and, more importantly, which variant best fits their use case.
Figure 1: Documentation showing the current tags with links to the Dockerfiles that created the image with those tags.
Tags listed on the same line all refer to the same underlying image. (Multiple tags can point to the same image.) For example, Figure 1 shows the ubuntu Docker Official Images repository, where the 20.04, focal-20240216, and focal tags all refer to the same image.
Often the latest tag for a Docker Official Images repository is optimized for ease of use and includes a wide variety of software that is helpful, but not strictly necessary, when using the main software packaged in the Docker Official Image. For example, latest images often include tools like Git and build tools. Because of their ease of use and wide applicability, latest images are often used in getting-started guides.
Some operating system and language runtime repositories offer “slim” variants that have fewer packages installed and are therefore smaller. For example, the python:3.12.2-bookworm image contains not only the Python runtime, but also any tool you might need to build and package your Python application — more than 570 packages! Compare this to the python:3.12.2-slim-bookworm image, which has about 150 packages.
Many Docker Official Images repositories offer “alpine” variants built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than “slim” variants. For example, the linux/amd64 node:latest image is 382 MB, the node:slim image is 70 MB, and the node:alpine image is 47 MB.
If you see tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as jammy, focal, and bionic), those indicate the codename of the Linux distribution they use as a base image. Debian-release codenames are based on Toy Story characters, and Ubuntu releases use alliterative adjective-animal appellations. Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye).
Tags may contain other hints to the purpose of their image variant. Often these are explained later in the Docker Official Images repository documentation. Check the “How to use this image” and/or “Image Variants” sections.
Misconception 3: Docker Official Images do not follow software development best practices
Some critics argue that Docker Official Images go against the grain of best practices, such as not running container processes as root. While it’s true that we encourage users to embrace a few opinionated standards, we also recognize that different use cases require different approaches. For example, some use cases may require elevated privileges for their workloads, and we provide options for them to do so securely.
7 ways Docker Official Images help secure the software supply chain
We recognize that security is a continuous process, and we’re committed to providing the best possible experience for our users. Since the company’s inception in 2013, Docker has been a leader in the software supply chain, and our commitment to security — including open source security — has helped to protect developers from emerging threats all along the way.
With the availability of open source software, efficiently building powerful applications and services is easier than ever. The transparency of open source allows unprecedented insight into the security posture of the software you create. But to take advantage of the power and transparency of open source software, fully embracing software supply chain security is imperative. A few ways Docker Official Images help developers build a more secure software supply chain include:
Open build process
Because visibility is an important aspect of the software supply chain, Docker Official Images are created from a transparent and open build process. The Dockerfile inputs and build scripts are all open source, all Docker Official Images updates go through a public pull request process, and the logs from all Docker Official Images builds are available to inspect (Jenkins / GitHub Actions).
Principle of least privilege
The Docker Official Images build system adheres strictly to the principle of least privilege (POLP), for example, by restricting writes for each architecture to architecture-specific build agents.
Updated build system
Ensuring the security of Docker Official Images builds and images is paramount. The Docker Official Images build system is kept up to date through automated builds, regular security audits, collaboration with upstream projects, ongoing testing, and security patches.
Vulnerability reports and continuous monitoring
Courtesy of Docker Scout, vulnerability insights are available for all Docker Official Images and are continuously updated as new vulnerabilities are discovered. We are committed to continuously monitoring our images for security issues and addressing them promptly. For example, we were among the first to provide reasoned guidance and remediation for the recent xz supply chain attack. We also use insights and remediation guidance from Docker Scout, which surfaces actionable insights in near-real-time by updating CVE results from 20+ CVE databases every 20-60 minutes.
Software Bill of Materials (SBOM) and provenance attestations
We are committed to providing a complete and accurate SBOM and detailed build provenance as signed attestations for all Docker Official Images. This allows our users to have confidence in the origin of Docker Official Images and easily identify and mitigate any potential vulnerabilities.
Signature validation
We are working on integrating signature validation into our image pull and build processes. This will ensure that all Docker Official Images are verified before use, providing an additional layer of security for our users.
Increased update frequency
Docker Official Images provide the best of both worlds: the latest version of the software you want, built upon stable versions of Linux distributions. This allows you to use the latest features and fixes of the software you are running without having to wait for a new package from your Linux distribution or being forced to use an unstable version of your Linux distribution. Further, we are working to increase the throughput of the Docker Official Images build infrastructure to allow us to support more frequent updates for larger swaths of Docker Official Images. As part of this effort, we are piloting builds on GitHub Actions and Docker Build Cloud.
Conclusion
Docker’s leadership in security and protecting open source software has been established through Docker Official Images and other trusted content we provide our customers. We take a comprehensive approach to security, focusing on best practices, tooling, and community engagement, and we work closely with upstream projects and SIGs to address security issues promptly and proactively.
Docker Official Images provide a flexible and secure way for developers to build, ship, test, and run their applications. Docker Official Images are maintained through a partnership between the Docker Official Images community, upstream maintainers/volunteers, and Docker engineers, ensuring best practices and uniformity across the Docker Official Images catalog. Each Docker Official Image offers numerous image variants that cater to different use cases, with tags indicating the purpose of each variant.
Developers can build using Docker tools and products with confidence, knowing that their applications are built on a secure, transparent foundation.
Looking to dive in? Get started building with Docker Official Images today.
In this post, we walk you through the updated DOI signing strategy. We start with how basic container image signing works and gradually build up to what is currently a common image signing flow, which involves public/private key pairs, certificate authorities, The Update Framework (TUF), timestamping, transparency logs, and identity verification using OpenID Connect (OIDC).
After describing these mechanics, we show how OpenPubkey, with a few recent enhancements included, can be leveraged to smooth the flow and decrease the number of third-party entities the verifier is required to trust.
Hopefully, this incremental narrative will be useful to those new to software artifact signing and those just looking for how this proposal differs from current approaches. As always, Docker is committed to improving the developer experience, increasing the time developers spend on adding value, and decreasing the amount of time they spend on toil.
The approach described in this post aims to allow Docker users to improve the security of their software supply chain by making it easier to verify the integrity and origin of the DOI images they use every day.
Signing container images
An entity can prove that it built a container image by creating a digital signature and adding it to the image. This process is called signing. To sign an image, the entity can create a public/private key pair. The private key must be kept secret, and the public key can be shared publicly.
When an image is signed, a signature is produced using the private key and the digest of the image. Anyone with the public key can then validate that the signature was created by someone who has the private key (Figure 1).
Figure 1: An image is signed using a private key, resulting in a signed image. As a next step, the image’s signature is verified using the corresponding public key to confirm its authenticity.
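To make the mechanics concrete, here is a minimal Go sketch of the sign-and-verify round trip using an Ed25519 key pair and a placeholder digest. Real image signing wraps this exchange in envelope and registry formats, but the core primitive is the same.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

func main() {
	// The signer generates a key pair; the private key must stay secret.
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}

	// Placeholder image digest (in practice, the sha256 of the image manifest).
	digest := []byte("sha256:3f29...")

	// Signing: produce a signature over the digest with the private key.
	sig := ed25519.Sign(priv, digest)

	// Verification: anyone holding the public key can check the signature.
	fmt.Println("signature valid:", ed25519.Verify(pub, digest, sig))
}
```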
Let’s walk through how container images can be signed, starting with a naive approach, building up to the current status quo in image signing, and ending with Docker’s proposed solution. We’ll use signing Docker Official Images (DOI) as part of the DOI build process as our example since that is the use case for which this solution has been designed.
In the diagrams throughout this post, we’ll use colored seals to represent signatures. The color of the seal matches the color of the private key it was signed with (Figure 2).
Figure 2: Two distinct private keys, labeled 1234 (red) and 5678 (yellow), generate corresponding unique signatures.
Note that all the verifier knows after verifying an image signature with a public key is that the image was signed with the private key associated with the public key. To trust the image, the verifier must verify the signature and the identity of the key pair owner (Figure 3).
Figure 3: DOI builder pushing a signed image to the registry and verifier pulling the same image. At this point, the verifier only knows what key signed the image, but not who controls the key.
Identity and certificates
How do you verify the owner of a public/private key pair? That is the purpose of a certificate, a simple data structure including a public key and a name. The certificate binds the name, known as the subject, to the public key. This data structure is normally signed by a Certificate Authority (CA), known as the issuer of the certificate.
Certificates can be distributed alongside signatures that were made with the corresponding key. This means that consumers of images don’t need to verify the owner of every public key used to sign any image. They can instead rely on a much smaller set of CA certificates. This is analogous to the way web browsers have a set of a few dozen root CA certificates to establish trust with a myriad of websites using HTTPS.
Going back to the example of DOI signing, if we distribute a certificate binding the 1234 public key with the Docker Official Images (DOI) builder name, anybody can verify that an image signed by the 1234 private key was signed by the DOI builder, as long as they trust the CA that issued the certificate (Figure 4).
Figure 4: DOI builder provides proof of identity to a Certificate Authority (CA), which provides a certificate back. DOI builder pushes a signed image and certificate to the registry. The verifier is able to verify the signed image and that image was created by DOI builder.
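The following Go sketch shows that binding in miniature: a hypothetical CA issues a certificate whose subject is the DOI builder name and whose public key belongs to the signer. This is an illustration only; a production CA adds extensions, intermediates, and revocation.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// The CA's key pair (the certificate issuer).
	caPub, caPriv, _ := ed25519.GenerateKey(rand.Reader)
	// The signer's key pair (the certificate subject, e.g., the DOI builder).
	signerPub, _, _ := ed25519.GenerateKey(rand.Reader)

	// Self-signed CA certificate.
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "Example CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, caPub, caPriv)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate: binds the subject name to the signer's public key.
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "DOI builder"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, signerPub, caPriv)
	leafCert, _ := x509.ParseCertificate(leafDER)

	// A verifier who trusts the CA can now check the name-to-key binding.
	fmt.Println("subject:", leafCert.Subject.CommonName)
	fmt.Println("issued by trusted CA:", leafCert.CheckSignatureFrom(caCert) == nil)
}
```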
Trust policy
Certificates solve the problem of which public keys belong to which entities, but how do we know which entity was supposed to sign an image? For this, we need trust policy, some signed metadata detailing which entities are allowed to sign an image. For Docker Official Images, trust policy will state that our DOI build servers must sign the images.
We need to ensure that trust policy is updated in a secure way, because if a malicious party can change a policy, then they can trick clients into believing the malicious party’s keys are allowed to sign images they otherwise should not be allowed to sign. To ensure secure trust policy updates, we will use The Update Framework (TUF) (specification), a mechanism for securely distributing updates to arbitrary files.
A TUF repository uses a hierarchy of keys to sign manifests of files in a repository. File indexes, called manifests, are signed with keys that are kept online to enable automation, and the online signing keys are signed with offline root keys. This enables the repository to be recovered in case of online key compromise.
A client that wants to download an update to a file in a TUF repository must first retrieve the latest copy of the signed manifests and make sure the signatures on the manifests are verified. Then they can retrieve the actual files.
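As a rough illustration of that client flow (greatly simplified, and not TUF’s actual metadata format), the Go sketch below has a client that trusts only an offline root key, verifies the online key’s delegation and the signed manifest, and only then trusts a file hash listed in the manifest.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

// manifest is a toy stand-in for TUF's signed file index.
type manifest struct {
	Files map[string]string `json:"files"` // path -> hex-encoded sha256
}

func main() {
	// Offline root key and online manifest-signing key.
	rootPub, rootPriv, _ := ed25519.GenerateKey(rand.Reader)
	onlinePub, onlinePriv, _ := ed25519.GenerateKey(rand.Reader)

	// The root key signs (delegates to) the online key.
	delegation := ed25519.Sign(rootPriv, onlinePub)

	// The online key signs a manifest listing the repository's file hashes.
	policy := []byte(`{"allowed_signer":"DOI builder"}`)
	m := manifest{Files: map[string]string{
		"trust-policy.json": fmt.Sprintf("%x", sha256.Sum256(policy)),
	}}
	raw, _ := json.Marshal(m)
	manifestSig := ed25519.Sign(onlinePriv, raw)

	// The client trusts only rootPub: it verifies the delegation, then the
	// manifest signature, and only then the hash of the downloaded file.
	chainOK := ed25519.Verify(rootPub, onlinePub, delegation) &&
		ed25519.Verify(onlinePub, raw, manifestSig)
	var got manifest
	json.Unmarshal(raw, &got)
	fileOK := got.Files["trust-policy.json"] == fmt.Sprintf("%x", sha256.Sum256(policy))
	fmt.Println("chain ok:", chainOK, "file ok:", fileOK)
}
```

Real TUF adds roles, signing thresholds, expiry, and rollback protection on top of this basic chain of trust.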
Once a TUF repository has been created, it can be distributed by any means we choose, even if the distribution mechanism is not trusted. We will distribute it using the Docker Hub registry (Figure 5).
Figure 5: TUF repository provides a Trust Policy that says the image should be signed by DOI builder. DOI builder provides proof of identity to a Certificate Authority (CA), which provides a certificate back. DOI builder pushes signed image, certificate from the CA, and TUF policy to the registry. The verifier is able to verify the signed image and that the image was created by the identity defined in the Trust Policy.
Certificate expiry and timestamping
In the preceding section, we described a certificate as simply a binding from an identity to a public key. In reality, certificates contain some additional data, and one important detail is the expiry time. Usually, certificates should not be trusted after their expiry time, which means signatures on images (as in Figure 5) will only be valid until the attached certificate’s expiry time. A limited life span for a signature isn’t desirable because we want images to be long-lasting (longer-lasting than a certificate).
This problem can be solved by using a Timestamp Authority (TSA). A TSA will receive some data, bundle the data with the current time, and sign the bundle before returning it. Using a TSA allows anybody who trusts the TSA to verify that the data existed at the bundled time.
We can send the signature to a TSA and have it bundle the current timestamp with the signature. Then, we can use the bundled timestamp as the ‘current time’ when verifying the certificate. The timestamp proves that the certificate had not expired at the time the signature was created. The TSA’s certificate will also expire, at which point all of the signed timestamps it has created will also expire, but TSA certificates typically last for a long time (10+ years) (Figure 6).
Figure 6: DOI builder provides proof of identity to a Certificate Authority (CA), which provides a certificate back. DOI builder sends the image signature to the Timestamping Authority (TSA), which provides a signed bundle with the signature and the current time. DOI builder pushes the signed image, certificate from CA, and the bundle signed by the TSA to the registry. The verifier is able to verify the signed image and that the image was created by DOI builder at a specific time.
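A timestamp bundle boils down to the TSA signing the pair (signature, time). The Go sketch below is a toy version of that exchange; real TSAs implement RFC 3161 and return DER-encoded tokens rather than this ad hoc JSON encoding.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/json"
	"fmt"
	"time"
)

// bundle is what the TSA signs: the submitted signature plus the current time.
type bundle struct {
	ImageSignature []byte    `json:"image_signature"`
	Time           time.Time `json:"time"`
}

func main() {
	// The TSA's long-lived key pair.
	tsaPub, tsaPriv, _ := ed25519.GenerateKey(rand.Reader)

	// The TSA bundles the image signature with the current time and signs it.
	b, _ := json.Marshal(bundle{
		ImageSignature: []byte("image-signature-bytes"),
		Time:           time.Now(),
	})
	tsaSig := ed25519.Sign(tsaPriv, b)

	// A verifier who trusts the TSA can use the bundled time as "now" when
	// checking whether the signing certificate had expired.
	fmt.Println("timestamp valid:", ed25519.Verify(tsaPub, b, tsaSig))
}
```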
OpenID Connect
Thus far, we’ve ignored how the CA verifies the signer’s identity (the “proof of ID” box in the preceding diagrams). How this verification works depends on the CA, but one approach is to outsource this verification to a third-party using OpenID Connect (OIDC).
We won’t describe the entire OIDC flow, but the primary steps are:
The signer authenticates with the OIDC provider (e.g., Google, GitHub, or Microsoft).
The OIDC provider issues an ID token, which is a signed token that the signer can use to prove their identity.
The ID token includes an audience, which specifies the intended party that should use the ID token to verify the identity of the signer. The intended audience will be the Certificate Authority. The ID token must be rejected by any other audience.
The CA must trust the OIDC provider and understand how to verify the ID token’s audience claim.
OIDC ID tokens are signed using the OIDC provider’s private key. The corresponding public key is distributed from a discoverable HTTP endpoint hosted by the OIDC provider.
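To show why the audience claim matters, here is a minimal Go sketch of the check a CA (or any other consumer) must perform before trusting an ID token. It decodes only the payload; real verification must also validate the provider’s signature, the issuer, and expiry, and must handle aud being either a string or an array.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// audienceOK reports whether an ID token's aud claim names us. This sketch
// skips the signature, issuer, and expiry checks that real verifiers need.
func audienceOK(idToken, expectedAud string) bool {
	parts := strings.Split(idToken, ".")
	if len(parts) != 3 {
		return false // a JWT is header.payload.signature
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return false
	}
	var claims struct {
		Aud string `json:"aud"` // per the JWT spec, aud may also be an array
	}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return false
	}
	return claims.Aud == expectedAud
}

func main() {
	// A hypothetical token body, for illustration only.
	header := base64.RawURLEncoding.EncodeToString([]byte(`{"alg":"RS256"}`))
	body := base64.RawURLEncoding.EncodeToString([]byte(`{"aud":"example-ca"}`))
	token := header + "." + body + ".sig"
	fmt.Println(audienceOK(token, "example-ca"))  // true
	fmt.Println(audienceOK(token, "another-svc")) // false: reject the token
}
```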
Signed DOI will be built using GitHub Actions, and GitHub Actions can automatically authenticate build processes with the GitHub Actions OIDC provider, making ID tokens available to build processes (Figure 7).
Figure 7: Using OIDC, DOI builder verifies its identity to GitHub Actions, which provides a token the DOI builder sends to the CA to verify its identity. The CA verifies the token with GitHub Actions and provides a certificate back to the DOI builder.
Key compromise
We mentioned at the start of this post that the private keys must be kept private for the system to remain secure. If the signer’s private key becomes compromised, a malicious party can create signatures that can be verified as being signed by the signer.
Let’s walk through a few ways to mitigate the risk of these keys becoming compromised.
Ephemeral keys
A nice way to reduce the risk of compromise of private keys is to not store them anywhere. Key pairs can be generated in memory, used once, and then the private key can be discarded. This means that certificates are also single-use, and a new certificate must be requested from the CA every time a signature is created.
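The ephemeral pattern is easy to express in code. In the sketch below, the private key never leaves the function that creates it; in a real flow, the certificate request to the CA would also happen inside that same scope.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// signOnce generates a fresh key pair in memory, signs a single payload,
// and returns only the public key and signature; the private key goes out
// of scope and is never written anywhere.
func signOnce(payload []byte) (ed25519.PublicKey, []byte) {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	return pub, ed25519.Sign(priv, payload)
}

func main() {
	digest := []byte("sha256:3f29...")
	pub, sig := signOnce(digest)
	fmt.Println("still verifiable:", ed25519.Verify(pub, digest, sig))
}
```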
Transparency logging
Ephemeral keys work well for the signing keys themselves, but there are other things that can be compromised:
The CA’s private key (practically, this cannot be ephemeral)
The OIDC provider’s private key (practically, this cannot be ephemeral)
The OIDC account credentials
These keys/credentials must be kept private, but in case of an accidental compromise, we need to have a way to detect misuse. In this situation, a transparency log (TL) can help.
A transparency log is an append-only, tamper-proof data store. When data is written to the log, the log operator returns a signed receipt, which can be used as proof that the entry is contained in the log. The log can also be monitored to check for suspicious activity.
We can use a transparency log to store all signatures and bundle the TL receipt with the signature. We can only accept a signature as valid if the signature is bundled with a valid TL receipt. Because a signature will only be valid if an entry is in the TL, any malicious party creating fake signatures will also have to publish an entry to the TL. The TL can be monitored by the signer, who can sound the alarm if they notice any signatures in the log they didn’t create (Figure 8). The log can also be monitored by concerned third parties to check for any signatures that don’t look right (Figure 9).
We can also use a transparency log to store certificates issued by the CA. A certificate will only be valid if it comes with a TL receipt. This is also how TLS certificates work — they will only be trusted by browsers if they have an attached TL receipt.
The TL receipts also contain a timestamp, so a TL can completely replace the role of the TSA while also providing extra functionality.
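As a mental model (a toy, not any production log’s API), the sketch below captures the contract: entries are only ever appended, and each append returns a receipt signed by the log operator, which verifiers demand alongside the image signature. Production logs such as Sigstore’s Rekor add Merkle-tree proofs so monitors can detect any rewriting of history.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/json"
	"fmt"
	"time"
)

// receipt is the log operator's signed proof that an entry was appended
// at a given index and time.
type receipt struct {
	Index int       `json:"index"`
	Entry []byte    `json:"entry"`
	Time  time.Time `json:"time"`
}

// tlog is a toy append-only log.
type tlog struct {
	entries [][]byte
	priv    ed25519.PrivateKey
}

func (l *tlog) Append(entry []byte) (raw, sig []byte) {
	l.entries = append(l.entries, entry) // append-only: entries are never modified
	raw, _ = json.Marshal(receipt{Index: len(l.entries) - 1, Entry: entry, Time: time.Now()})
	return raw, ed25519.Sign(l.priv, raw)
}

func main() {
	logPub, logPriv, _ := ed25519.GenerateKey(rand.Reader)
	tl := &tlog{priv: logPriv}

	// The signer submits its image signature and keeps the receipt.
	raw, sig := tl.Append([]byte("image-signature-bytes"))

	// A verifier accepts the image signature only with a valid receipt.
	fmt.Println("receipt valid:", ed25519.Verify(logPub, raw, sig))
}
```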
Figure 8: DOI builder sends the signed image and certificate from CA to the Transparency Log (TL), which appends the signature to the TL and returns a receipt for the current time. The monitor is able to observe that the signature was made by the DOI builder at a specific time.
Figure 9: Example of a malicious party signing an image using a fake certificate they received from the CA using hacked OIDC credentials. Monitor is able to discern something is not quite right.
Similar attacks with a stolen private key and a legitimate certificate are also detectable in this way.
A summary of the signing status quo
Everything up to this point describes the status quo in artifact signing. Let’s pull together all of the components described so far to recap (Figure 10). These are:
OIDC provider, to verify the identity of some entity
Certificate authority, to issue certificates binding the identity to a public key
Signer, to sign an image with the corresponding private key
Transparency log (TL), to store signatures and return signed timestamped receipts
TUF repository, to distribute trust policy
Transparency log monitors, to detect malicious behavior
Registry, to store all of the artifacts
Client, to verify signatures on images
Figure 10: Building on all the previous figures, using OIDC the DOI builder identifies itself to GitHub Actions, which provides a token the DOI builder sends to the CA to verify its identity. The CA verifies the token with GitHub Actions and provides a certificate back to the DOI builder. DOI builder sends the signed image and certificate from CA to the Transparency Log (TL), which appends the signature to the TL and returns a receipt for the current time. DOI builder pushes the signed image, the certificate from the CA, and the TL receipt to the registry. The verifier is able to verify the signed image and that the image was created by the identity consistent with trust policy at a specific time. The monitor is able to observe that the signature was made by the DOI builder at a specific time.
The client verifying a signature needs to trust:
The CA
The TL
The OIDC provider (transitively, they need to trust that the CA verifies ID tokens from the OIDC provider correctly)
The signers of the TUF repository
There are many things to trust. Any of these entities being compromised or acting maliciously themselves will compromise the security of the system. Even if such a compromise can be detected by monitoring the transparency log, remediation can be difficult. Removing any of these points of trust without compromising the overall security of the solution would be an improvement.
Docker’s proposed signing solution
Before a CA issues a certificate, it needs to verify control of the private key and control of the identity. In Figure 10, the CA outsources the identity verification to an OIDC provider. We can already use the OIDC provider to verify the identity, but can we use it to verify control of the private key? It turns out that we can.
OpenPubkey is a protocol for binding OIDC identities to public keys. Full details of how it works can be found in the OpenPubkey paper, but below is a simplified explanation.
OIDC recommends that a unique random number be sent as part of the request to the OIDC provider. This number is called a nonce.
If the nonce is sent, the OIDC provider must return it in the signed JWT (JSON Web Token) called an ID token. We can use this to our advantage by constructing the nonce as a hash of the signer’s public key and some random noise (as the nonce still has to be random). The signer can then bundle the ID token from the OIDC provider with the public key and the random noise and sign the bundle with its private key.
The resulting token (called a PK token) proves control of the OIDC identity and control of the private key at a specific time, as long as a verifier trusts the OIDC provider. In other words, the PK token fulfills the same role as the certificate provided by the CA in all the signing flows up to this point, but does not require trust in a CA. This token can be distributed alongside signatures in the same way as a certificate.
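The heart of the protocol is this nonce commitment. The Go sketch below illustrates the idea with a plain SHA-256 hash; OpenPubkey’s actual encoding differs in its details, so treat this as a sketch of the concept rather than the implementation.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

func main() {
	// Ephemeral signing key pair, generated in memory for one use.
	pub, _, _ := ed25519.GenerateKey(rand.Reader)

	// Random noise keeps the nonce unpredictable, as OIDC requires.
	noise := make([]byte, 32)
	if _, err := rand.Read(noise); err != nil {
		panic(err)
	}

	// Commit to the public key: nonce = H(publicKey || noise).
	h := sha256.New()
	h.Write(pub)
	h.Write(noise)
	nonce := hex.EncodeToString(h.Sum(nil))

	// The nonce goes into the OIDC request; the provider echoes it back in
	// the signed ID token, binding the identity to this public key.
	fmt.Println("nonce:", nonce)
}
```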
OIDC ID tokens, however, are designed to be verified and discarded in a short timeframe. The public keys for verifying the tokens are available from an API endpoint hosted by the OIDC provider. These keys are rotated frequently (every few weeks or months), and there is currently no way to verify a token signed by a key that is no longer valid. Therefore, a log of historic keys will need to be used to verify PK tokens that were signed with OIDC provider keys that have been rotated out. This log is an additional point of trust for a verifier, so it may seem we’ve removed one point of trust (the CA) and replaced it with another (the log of public keys). For DOI, we have already added another point of trust with the TUF repository used to distribute trust policy. We can also use this TUF repository to distribute the log of public keys.
Figure 11: Using OIDC the DOI builder identifies itself to GitHub Actions, which provides an ID token that binds the OIDC identity to the public key. DOI builder sends the signed image and PK token to the Transparency Log (TL), which appends the signature and returns a receipt for the current time. DOI builder pushes the signed image, the PK token, and the TL receipt to the registry. The verifier is able to verify the signed image and that the image was created by the identity consistent with trust policy at a specific time. The monitor is able to observe that the signature was made by the DOI builder at a specific time.
OpenPubkey enhancements
As originally formulated, OpenPubkey was not designed to support code signing workflows as we’ve described. As a result, the implementation described here has a few drawbacks. In the following, we discuss each drawback and its associated solution.
OIDC ID tokens are bearer auth tokens
An OIDC ID token is a JWT signed by the OIDC provider that allows the bearer of the token to authenticate as the subject of the token. Because we will be publishing these tokens publicly, a malicious party could take a valid ID token from the registry and present it to a service to authenticate as the subject of the ID token.
In theory, this should not be a problem because, according to the OIDC spec, any consumer must check the audience in the ID token before trusting the token (i.e., if the token is presented to Service Foo, Service Foo must check that the token was intended for Service Foo by checking the audience claim). However, there have been issues with OIDC client libraries not making this check.
To solve this issue, we can remove the OIDC provider’s signature from the ID token and replace it with a Guillou-Quisquater (GQ) signature. This GQ signature allows us to prove that we had the OIDC provider’s signature without sharing the signed token, and this proof can be verified using the OIDC provider’s public key and the rest of the ID token. More information on GQ signatures can be found in the original paper and in the OpenPubkey reference implementation. We’ve used a similar approach to one discussed in a paper by Zachary Newman.
OIDC ID tokens can contain personal information
For the case where OIDC ID tokens from CI systems such as GitHub Actions are used, it is unlikely that there is any personal information that could be leaked in the token. For example, the full data made available in a GitHub Actions OIDC ID token is documented on GitHub.
Some of this data, such as the repository name and the Git commit digest, is already included in the unsigned provenance attestations that the Docker build process generates. ID tokens representing human identities may include more personal data, but arguably, this is also the kind of data consumers may wish to verify as part of trust policy.
Key compromise
If the signer’s private key is compromised (admittedly unlikely as this is an ephemeral key), it is trivial for an attacker to sign any images and combine the signatures with the public PK token. As mentioned previously, the transparency log can help detect this kind of compromise, but we can go further and prevent it in the first place.
In the original OpenPubkey flow, we create the nonce from the signer’s public key and random noise, then use the corresponding private key to sign the image. If, however, we also include the hash of the image in the nonce, then the image, which we have already signed, is in effect also signed by the OIDC provider. This means the PK token becomes a one-use token that cannot be replayed to sign other images. Thus, compromising the ephemeral private key is no longer useful to an attacker.
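Extending the earlier nonce sketch, committing to the image digest as well makes the PK token single-use (again with illustrative hashing rather than OpenPubkey’s exact encoding):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// commit extends the nonce to also cover the image digest:
// nonce = H(publicKey || noise || imageDigest). The resulting PK token can
// only vouch for this one image, so a stolen ephemeral key is useless.
func commit(publicKey, noise []byte, imageDigest string) string {
	h := sha256.New()
	h.Write(publicKey)
	h.Write(noise)
	h.Write([]byte(imageDigest))
	return hex.EncodeToString(h.Sum(nil))
}

func main() {
	// Hypothetical inputs, for illustration only.
	fmt.Println(commit([]byte("ephemeral-pub"), []byte("random-noise"), "sha256:3f29..."))
}
```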
OpenPubkey uses the nonce claim in the ID token
The full OIDC flow isn’t available on GitHub Actions. Instead, a simple HTTP endpoint is provided where a build process can request an ID token with an optional audience (aud) claim. We need the OIDC provider to sign some arbitrary data during authentication, which we can do by sending data that the provider will include in one of the ID token claims, as long as we don’t interfere with that claim’s intended use. Because GitHub Actions allows us to set the aud claim to an arbitrary value, we can use it for this purpose.
What’s next?
Docker aims to enable the broader open source community to improve security across the entire software supply chain. We feel strongly that good security requires good, easy-to-use tooling. Or, as Founder and CEO of Bounce Security Avi Douglen more eloquently put it, “Security at the expense of usability comes at the expense of security.”
The approach explained in this post aims to make signing container images as easy as possible without sacrificing security and trust. By simplifying the overall approach and eliminating complicated infrastructure requirements, our goal is to foster widespread adoption of container signing, in the same way we enabled the widespread adoption of Linux containers a decade ago.
Open source community and cryptography practitioners: Let us know what you think of this approach to signing. You can review the preliminary implementation across the various repositories in the OpenPubkey GitHub organization. Feel free to open issues in the various repositories or join the discussion in the OpenSSF community.
We look forward to hearing your feedback and working together to improve the security of the software supply chain!