
Protecting the Software Supply Chain: The Art of Continuous Improvement

January 16, 2025 at 13:04

Without continuous improvement in software security, you’re not standing still — you’re walking backward into oncoming traffic. Attack vectors multiply, evolve, and look for the weakest link in your software supply chain daily. 

Cybersecurity Ventures forecasts that the global cost of software supply chain attacks will reach nearly $138 billion by 2031, up from $60 billion in 2025 and $46 billion in 2023. A single overlooked vulnerability isn’t just a flaw; it’s an open invitation for compromise, potentially threatening your entire system. The cost of a breach doesn’t stop with your software — it extends to your reputation and customer trust, which are far harder to rebuild. 

Docker’s suite of products offers your team peace of mind. With tools like Docker Scout, you can expose vulnerabilities before they expose you. Continuous image analysis doesn’t just find the cracks; it empowers your teams to seal them from code to production. But Docker Scout is just the beginning. Tools like Docker Hub’s trusted content, Docker Official Images (DOI), Image Access Management (IAM), and Hardened Docker Desktop work together to secure every stage of your software supply chain. 
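
For example, getting a first read on an image’s security posture is a single CLI call. Here is a minimal sketch using the Docker Scout CLI; the image name is illustrative:

    # Summarize known vulnerabilities and base image status for an image.
    docker scout quickview myorg/web-app:1.4

    # Ask Scout for base image updates and remediation suggestions.
    docker scout recommendations myorg/web-app:1.4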

In this post, we’ll explore how these tools provide built-in security, governance, and visibility, helping your team innovate faster while staying protected. 

Securing the supply chain

Your software supply chain isn’t just an automated sequence of tools and processes. It’s a promise — to your customers, team, and future. Promises are fragile. The cracks can start to show with every dependency, third-party integration, and production push. Tools like Image Access Management help protect your supply chain by providing granular control over who can pull, share, or modify images, ensuring only trusted team members access sensitive assets. Meanwhile, Hardened Docker Desktop ensures developers work in a secure, tamper-proof environment, giving your team confidence that development is aligned with enterprise security standards. The solution isn’t to slow down or second-guess; it’s to continuously improve the security of your software supply chain through measures such as automated vulnerability scans and trusted content from Docker Hub.

A breach is more than a line item in the budget. Customers ask themselves, “If they couldn’t protect this, what else can’t they protect?” Downtime halts innovation while compliance fines mount and engineering effort is rerouted to forensic security analysis. The brand you spent years perfecting could be reduced to a cautionary tale. Regardless of how innovative your product is, it’s not trusted if it’s not secure.

Organizations must stay prepared by regularly updating their security measures and embracing new technologies to outpace evolving threats. As highlighted in the article Rising Tide of Software Supply Chain Attacks: An Urgent Problem, software supply chain attacks are increasingly targeting critical points in development workflows, such as third-party dependencies and build environments. High-profile incidents like the SolarWinds attack have demonstrated how adversaries exploit trust relationships and weaknesses in widely used components to cause widespread damage. 

Preventing security problems from the start

Preventing attacks like the SolarWinds breach requires prioritizing code integrity and adopting secure software development practices. Tools like Docker Scout seamlessly integrate security into developers’ workflows, enabling proactive identification of vulnerabilities in dependencies and ensuring that trusted components form the backbone of your applications.

Docker Hub’s trusted content and Docker Scout’s policy evaluation features help ensure that your organization uses compliant and secure images. Docker Official Images (DOI) provide a robust foundation for deployments, mitigating risks from untrusted components. To extend this security foundation, Image Access Management allows teams to enforce image-sharing policies and restrict access to sensitive components, preventing accidental exposure or misuse. For local development, Hardened Docker Desktop ensures that developers operate in a secure, enterprise-grade environment, minimizing risks from the outset. This combination of tools enables your engineering team to put out fires and, more importantly, prevent them from starting in the first place.
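
As a rough sketch of what policy evaluation looks like in practice (assuming Docker Scout is enabled for your organization; the organization and image names are illustrative), a single command reports where an image stands against your policies:

    # Evaluate an image against the organization's Docker Scout policies.
    docker scout policy myorg/payments:2.1 --org myorg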

Building guardrails

Governance isn’t a roadblock; it’s the blueprint for progress. The problem is that some companies treat security like a fire extinguisher — something you grab when things go wrong. That is not a viable strategy in the long run. Real innovation happens when security guardrails are so well-designed that they feel like open highways, empowering teams to move fast without compromising safety. 

A structured policy lifecycle loop — mapping connections, planning changes, deploying cleanly, and retiring the dead weight — turns governance into your competitive edge. Automate it, and you’re not just checking boxes; you’re giving your teams the freedom to move fast and trust the road ahead. 

Continuous improvement on security policy management doesn’t have to feel like a bureaucratic chokehold. Docker provides a streamlined workflow to secure your software supply chain effectively. Docker Scout integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and detailed reports and recommendations to help teams address issues before code reaches production. 

With the introduction of Docker Health Scores — a security grading system for container images — teams gain a clear and actionable snapshot of their image security posture. These scores empower developers to prioritize remediation efforts and continuously improve their software’s security from code to production.

Keeping up with continuous improvement

Security threats aren’t slowing down. New attack vectors and vulnerabilities emerge every day. With cybercrime costs expected to rise from $9.22 trillion in 2024 to $13.82 trillion by 2028, organizations face a critical choice: adapt to this evolving threat landscape or risk falling behind, exposing themselves to escalating costs and reputational damage. Continuous improvement in software security isn’t a luxury; it’s how you build and maintain trust with your customers, assuring them that every fresh deployment is better than the one that came before. Without it, expect the high costs of an eventual software supply chain attack.

Best practices for securing the software supply chain involve integrating vulnerability scans early in the development lifecycle, leveraging verified content from trusted sources, and implementing governance policies to ensure consistent compliance standards without manual intervention. Continuous monitoring of vulnerabilities and enforcing runtime policies help maintain security at scale, adapting to the dynamic nature of modern software ecosystems.

Start today

Securing your software supply chain is a journey of continuous improvement. With Docker’s tools, you can empower your teams to build and deploy software securely, ensuring vulnerabilities are addressed before they become liabilities.

Don’t wait until vulnerabilities turn into liabilities. Explore Docker Hub, Docker Scout, Hardened Docker Desktop, and Image Access Management to embed security into every stage of development. From granular control over image access to tamper-proof local environments, Docker’s suite of tools helps safeguard your innovation, protect your reputation, and empower your organization to thrive in a dynamic ecosystem.

Learn more

  • Docker Scout: Integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and actionable recommendations to address issues before they reach production.
  • Docker Health Scores: A security grading system for container images, offering teams clear insights into their image security posture.
  • Docker Hub: Access trusted, verified content, including Docker Official Images (DOI), to build secure and compliant software applications.
  • Docker Official Images (DOI): A curated set of high-quality images that provide a secure foundation for your containerized applications.
  • Image Access Management (IAM): Enforce image-sharing policies and restrict access to sensitive components, ensuring only trusted team members access critical assets.
  • Hardened Docker Desktop: A tamper-proof, enterprise-grade development environment that aligns with security standards to minimize risks from local development.

Why Secure Development Environments Are Essential for Modern Software Teams

January 2, 2025 at 14:00

“You don’t want to think about security — until you have to.”

That’s what I’d tell you if I were being honest about the state of development at most organizations I have spoken to. Every business out there is chasing one thing: speed. Move faster. Innovate faster. Ship faster. To them, speed is survival. There’s something these companies are not seeing — a shadow. An unseen risk hiding behind every shortcut, every unchecked tool, and every corner cut in the name of “progress.”

Businesses are caught in a relentless sprint, chasing speed and progress at all costs. However, as Cal Newport reminds us in Slow Productivity, the race to do more — faster — often leads to chaos, inefficiency, and burnout. Newport’s philosophy calls for deliberate, focused work on fewer tasks with greater impact. This philosophy isn’t just about how individuals work — it’s about how businesses innovate. Development teams rushing to ship software often cut corners, creating vulnerabilities that ripple through the entire supply chain. 

The strategic risk: An unsecured development pipeline

Development environments are the foundation of your business. You may think they’re inherently secure because they’re internal. Foundations crumble when you don’t take care of them, and that crack doesn’t just swallow your software — it swallows established customer trust and reputation. That’s how it starts: a rogue tool here, an unpatched dependency there, a developer bypassing IT to do things “their way.” They’re not trying to ruin your business. They’re trying to get their jobs done. But sometimes you can’t stop a fire after it’s started. Shadow IT isn’t just inconvenient — it’s dangerous. It’s invisible, unmonitored, and unregulated. It’s the guy leaving the back door open in a neighborhood full of burglars.

You need control, isolation, and automation — not because they’re nice to have, but because you’re standing on a fault line without them. Docker gives you that control. Fine-grained, role-based access ensures that the only people touching your most critical resources are the ones you trust. Isolation through containerization keeps every piece of your pipeline sealed tight so vulnerabilities don’t spread. Automation takes care of the updates, the patch management, and the vulnerabilities before they become a problem. In other words, you don’t have to hope your foundation is solid — you’ll know it is.

Shadow IT: A growing concern

While securing official development environments is critical, shadow IT remains an insidious and hidden threat. Shadow IT refers to tools, systems, or environments implemented without explicit IT approval or oversight. In the pursuit of speed, developers may bypass formal processes to adopt tools they find convenient. However, this creates unseen vulnerabilities with far-reaching consequences.

Under pressure to deliver, developers often take shortcuts, grabbing tools and spinning up environments outside the watchful eyes of IT. The intent may not be malicious; it’s just human nature. Here’s the catch: What you don’t see, you can’t protect. Shadow IT is like a crack in the dam: silent, invisible, and spreading. It lets unvetted tools and insecure code slip into your supply chain, infecting everything from development to production. Before you know it, that “quick fix” has turned into a legal nightmare, a compliance disaster, and a stain on your reputation. In industries like finance or healthcare, that stain doesn’t wash out quickly. 

A solution rooted in integration

The solution lies in a unified, secure approach to development environments that removes the need for shadow IT while fortifying the software supply chain. Docker addresses these vulnerabilities by embedding security directly into the development lifecycle. Our solution is built on three foundational principles: control, isolation, and automation.

  1. Control through role-based access management: Docker Hub establishes clear boundaries within development environments by enabling fine-grained, role-based access, ensuring that only authorized personnel can interact with sensitive resources and minimizing the risk of unintended or malicious actions. Docker also streamlines patch management through verified, up-to-date images: Docker Official Images and Docker Verified Publisher content are scanned with our in-house image analysis tool, Docker Scout, which helps find vulnerabilities before they can be exploited.
  2. Isolation through containerization: Docker’s value proposition centers on its containerization technology. By creating isolated development spaces, Docker prevents cross-environment contamination and ensures that applications and their dependencies remain secure throughout the development lifecycle.
  3. Automation for seamless security: Recognizing the need for speed in modern development cycles, Docker Scout automates recommendations for software updates and patch management for CVEs. This ensures that environments remain secure against emerging threats without interrupting the flow of innovation.

Delivering tangible business outcomes

Businesses are always going to face this tension between speed and security, but the truth is you don’t have to choose. Docker gives you both. It’s not just a platform; it’s peace of mind. Because when your foundation is solid, you stop worrying about what could go wrong. You focus on what comes next.

Consider the example of a development team working on a high-stakes application feature. Without secure environments, a single oversight — such as an unregulated access point — can result in vulnerabilities that disrupt production and erode customer trust. By leveraging Docker’s integrated security solutions, the team mitigates these risks, enabling them to focus on value creation rather than crisis management.

Aligning innovation with security

As a previous post covers, securing the development pipeline is not simply a matter of deploying technical solutions; it is about establishing trust across the entire software supply chain. With Docker Content Trust and image signing, organizations can ensure the integrity of software components at every stage, reducing the risk of third-party code introducing unseen vulnerabilities. By eliminating the chaos of shadow IT and creating a transparent, secure development process, businesses can mitigate risk without slowing the pace of innovation.

The tension between speed and security has long been a barrier to progress, but businesses can confidently pursue both with Docker. A secure development environment doesn’t just protect against breaches — it strengthens operational resilience, ensures regulatory compliance, and safeguards brand reputation. While unseen risks lurk within an organization’s fragmented tools and processes, Docker empowers organizations to innovate on a solid foundation.

Security isn’t a luxury. It’s the cost of doing business. If you care about growth, if you care about trust, if you care about what your brand stands for, then securing your development environments isn’t optional — it’s survival. Docker Business doesn’t just protect your pipeline; it turns it into a strategic advantage that lets you innovate boldly while keeping your foundation unshakable. Integrity isn’t something you hope for — it’s something you build.

Start today

Securing your software supply chain is a critical step in building resilience and driving sustained innovation. Docker offers the tools to create fortified development environments where your teams can operate at their best.

The question is not whether to secure your development pipeline — it’s how soon you can start. Explore Docker Hub and Scout today to transform your approach to innovation and security. In doing so, you position your organization to navigate the complexities of the modern development landscape with confidence and agility.

Learn more

Building Trust into Your Software with Verified Components

December 19, 2024 at 13:55

Within software development, security and compliance are more than simple boxes to check. Each attestation and compliance check is backed by a well-considered risk assessment that aims to guard against ever-changing vulnerabilities and attack vectors. Software development teams don’t want to worry about vulnerabilities when they are focused on building something remarkable.

In this article, we explain how Docker Hub and Docker Scout can help development teams ensure a more secure and compliant software supply chain. 

Security starts with trusted foundations

Every structure needs a strong foundation. A weak base is where cracks begin to show. Using untrusted or outdated software is like building a skyscraper on sand, and security issues can derail progress, leading to costly fixes and delayed releases. By “shifting security left” — addressing vulnerabilities early in the development process — teams can avoid these setbacks down the road.  

Modern development demands a secure and compliant software supply chain. Unverified software or vulnerabilities buried deep within base images can become costly compliance issues, disrupting development timelines and eroding customer trust. One weak link in the supply chain can snowball into more significant issues, affecting product delivery and customer satisfaction. Without security and compliance checks, organizations will lack the credibility their customers rely on.

How Docker Hub and Scout help teams shift left

Software developers are like a construction crew building a skyscraper. The process requires specialized components — windows, elevators, wiring, concrete, and so on — which are found at a single supply depot and which work in harmony with each other. This idea is similar to microservices, which are pieced together to create modern applications. In this analogy, Docker Hub acts as the supply depot for a customer’s software supply chain, stocked with trusted container images that help developer teams streamline development.

Docker Hub is more than a container registry. It is the most widely trusted content distribution platform built on secure, verified, and dependable container images. Docker Official Images (DOI) and Docker Verified Publisher (DVP) programs provide a rock-solid base to help minimize risks and let development teams focus on creating their projects. 

Docker Hub simplifies supply chain security by ensuring developers start with trusted components. Its library of official and verified publisher images offers secure, up-to-date resources vetted for compliance and reliability, eliminating the risk of untrusted or outdated components.

Proactive risk management is critical to software development

To avoid breaking production environments, organizations need to plan ahead by catching and tracking common vulnerabilities and exposures (CVEs) early in the development process. Docker Scout enables proactive risk management by integrating security checks early in the development lifecycle. Scout reduces the likelihood of security incidents and streamlines the development process.

Additionally, Docker Scout Health Scores provide a straightforward framework for evaluating the security posture of container images used daily by development teams. Using an easy-to-understand alphabetical grading system (A to F), these scores assess CVEs in software components within Docker Hub. This feature lets developers quickly evaluate and select trusted content, ensuring a secure software supply chain.

Avoid shadow changes with IAM and RBAC for secure collaboration

Compliance is not glamorous, but it is essential to running a business. Development teams don’t want to have to worry about whether they are meeting industry standards — they want to know they are. Docker Hub makes compliance simple with pre-certified images and many features that take the guesswork out of governance. That means you can stay compliant while your teams keep growing and innovating.

The biggest challenge to scaling a team or growing your development operations is not about adding people — it’s about maintaining control without losing momentum. Tracking, reducing, and managing shadow changes means your team keeps its flow state and its development velocity. 

Docker Hub’s Image Access Management (IAM) enforces precise permissions to ensure that only authorized people have access to modify sensitive information in repositories. Additionally, with role-based access control (RBAC), you’re not just delegating; you’re empowering your team with predefined roles that streamline onboarding, reduce mistakes, and keep everyone moving in harmony.

Docker Hub’s activity logs provide another layer of confidence as they let you track changes, enforce compliance, and build trust. These capabilities enhance security and boost collaboration by creating an environment where team members can focus on delivering high-quality applications.

Built-in trust

Without verified components, development teams can end up playing whack-a-mole with vulnerabilities. Time is lost. Money is spent. Trust is damaged. Now, picture a team working with trusted content and images that integrate security measures from the start. They deliver on time, on budget, and with confidence.

Building security into your applications doesn’t slow you down; it’s your superpower. Docker weaves trust and security into every part of your development process. Your applications are safeguarded, your delivery is accelerated, and your team is free to focus on what matters most — creating value.

Start your journey today. With Docker, you’re not just developing applications but building trust. Learn how trusted components help simplify compliance, enhance security, and empower your team to innovate fearlessly. 

Learn more

Enhancing Container Security with Docker Scout and Secure Repositories

By Jay Schmidt
November 25, 2024 at 14:43

Docker Scout simplifies the integration with container image repositories, improving the efficiency of container image approval workflows without disrupting or replacing current processes. Positioned outside the repository’s stringent validation framework, Docker Scout serves as a proactive measure to significantly reduce the time needed for an image to gain approval. 

By shifting security checks left and integrating Docker Scout into the early stages of the development cycle, issues are identified and addressed directly on the developer’s machine.
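
In practice, that shift left can be as simple as scanning immediately after a local build. A minimal sketch (the image name is illustrative):

    # Build locally, then surface CVEs before the image leaves the machine.
    docker build -t myorg/api:dev .
    docker scout cves myorg/api:dev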

Minimizing vulnerabilities 

This leftward shift in security accelerates the development process by keeping developers in flow, providing immediate feedback on policy violations at the point of development. As a result, images are secured and reviewed for compliance before being pushed into the continuous integration/continuous deployment (CI/CD) pipeline, reducing reliance on resource-heavy, consumption-based scans (Figure 1). By resolving issues earlier, Docker Scout minimizes the number of vulnerabilities detected during the CI/CD process, freeing up the security team to focus on higher-priority tasks.

Figure 1: Sample secure repository pipeline.

Additionally, the Docker Scout console allows the security team to define custom security policies and manage VEX (Vulnerability Exploitability eXchange) statements. VEX is a standard that lets vendors and other parties communicate the exploitability status of vulnerabilities and document justifications for shipping software tied to known Common Vulnerabilities and Exposures (CVEs).
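
As a hedged sketch, assuming a Scout CLI version that supports local VEX documents via the --vex-location flag (the image name and directory are illustrative), a developer can apply the security team’s statements during a scan:

    # Apply OpenVEX statements from ./vex so that CVEs already assessed
    # as not exploitable are filtered out of the report.
    docker scout cves --vex-location ./vex myorg/api:dev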

This feature enables seamless collaboration between development and security teams, ensuring that developers are working with up-to-date compliance guidelines. The Docker Scout console can also feed critical data into existing security tooling, enriching the organization’s security posture with more comprehensive insights and enhancing overall protection (Figure 2).

Figure 2: Sample secure repository pipeline with Docker Scout.

How to secure image repositories

A secure container image repository provides digitally signed, OCI-compliant images that are rebuilt and rescanned nightly. These repositories are typically used in highly regulated or security-conscious environments, offering a wide range of container images, from open source software to commercial off-the-shelf (COTS) products. Each image in the repository undergoes rigorous security assessments to ensure compliance with strict security standards before being deployed in restricted or sensitive environments.

Key components of the repository include a hardened source code repository and an OCI-compliant registry (Figure 3). All images are continuously scanned for vulnerabilities, stored secrets, problematic code, and compliance with various standards. Each image is assigned a score upon rebuild, determining its compliance and suitability for use. Scanning reports and justifications for any potential issues are typically handled using the VEX format.

Figure 3: Key components of the repository include a hardened source code repository and an OCI-compliant registry.

Why use a hardened image repository?

A hardened image repository mitigates the security risks associated with deploying containers in sensitive or mission-critical environments. Traditional software deployment can expose organizations to vulnerabilities and misconfigurations that attackers can exploit. By enforcing a strict set of requirements for container images, the hardened image repository ensures that images meet the necessary security standards before deployment. Rebuilding and rescanning each image daily allows for continuous monitoring of new vulnerabilities and emerging attack vectors.

Using pre-vetted images from a hardened repository also streamlines the development process, reducing the load on development teams and enabling faster, safer deployment.

In addition to addressing security risks, the repository also ensures software supply chain security by incorporating software bills of materials (SBOMs) with each image. The SBOM of a container image provides an inventory of all the components used to build the image, including operating system packages, application-specific dependencies and their versions, and license information. By maintaining a robust vetting process, the repository guarantees that all software components are traceable, verifiable, and tamper-free — essential for ensuring the integrity and reliability of deployed software.

Who uses a hardened image repository?

The main users of a hardened container image repository include internal developers responsible for creating applications, developers working on utility images, and those responsible for building base images for other containerized applications. Note that the titles for these roles can vary by organization.

  • Application developers use the repository to ensure that the images their applications are built upon meet the required security and compliance standards.
  • DevOps engineers are responsible for building and maintaining the utility images that support various internal operations within the organization.
  • Platform developers create and maintain secure base images that other teams can use as a foundation for their containerized applications.

Daily builds

One challenge with using a hardened image repository is the time needed to approve images. Daily rebuilds are conducted to assess each image for vulnerabilities and policy violations, but issues can emerge, requiring developers to make repeated passes through the pipeline. Because rebuilds are typically done at night, this process can result in delays for development teams, as they must wait for the next rebuild cycle to resolve issues.

Enter Docker Scout

Integrating Docker Scout into the pre-submission phase can reduce the number of issues that enter the pipeline. This proactive approach helps speed up the submission and acceptance process, allowing development teams to catch issues before the nightly scans. 

Vulnerability detection and management

  • Requirement: Images must be free of known vulnerabilities at the time of submission to avoid delays in acceptance.
  • Docker Scout contribution:
    • Early detection: Docker Scout can scan Docker images during development to detect vulnerabilities early, allowing developers to resolve issues before submission (see the sketch after this list).
    • Continuous analysis: Docker Scout continually reviews uploaded SBOMs, providing early warnings for new critical CVEs and ensuring issues are addressed outside of the nightly rebuild process.
    • Justification handling: Docker Scout supports VEX for handling exceptions. This can streamline the justification process, enabling developers to submit justifications for potential vulnerabilities more easily.
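
One way to express that requirement as a gate, sketched here with an illustrative image name:

    # Exit non-zero if any critical or high CVEs remain, which makes
    # the scan usable as a pre-submission gate.
    docker scout cves --exit-code --only-severity critical,high \
      myorg/api:candidate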

Security best practices and configuration management

  • Requirement: Images must follow security best practices and configuration guidelines, such as using secure base images and minimizing the attack surface.
  • Docker Scout contribution:
    • Security posture enhancement: Docker Scout allows teams to set policies that align with repository guidelines, checking for policy violations such as disallowed software or unapproved base images.

Compliance with dependency management

  • Requirement: All dependencies must be declared, and internet access during the build process is usually prohibited.
  • Docker Scout contribution:
    • Dependency scanning: Docker Scout identifies outdated or vulnerable libraries included in the image.
    • Automated reports: Docker Scout generates security reports for each dependency, which can be used to cross-check the repository’s own scanning results.

Documentation and provenance

  • Requirement: Images must include detailed documentation on their build process, dependencies, and configurations for auditing purposes.
  • Docker Scout contribution:
    • Documentation support: Docker Scout contributes to security documentation by providing data on the scanned image, which can be used as part of the official documentation submitted with the image.

Continuous compliance

  • Requirement: Even after an image is accepted into the repository, it must remain compliant with new security standards and vulnerability disclosures.
  • Docker Scout contribution:
    • Ongoing monitoring: Docker Scout continuously monitors images, identifying new vulnerabilities as they emerge, ensuring that images in the repository remain compliant with security policies.

By utilizing Docker Scout in these areas, developers can ensure their images meet the repository’s rigorous standards, thereby reducing the time and effort required for submission and review. This approach helps align development practices with organizational security objectives, enabling faster deployment of secure, compliant containers.

Integrating Docker Scout into the CI/CD pipeline

Integrating Docker Scout into an organization’s CI/CD pipeline can enhance image security from the development phase through to deployment. By incorporating Docker Scout into the CI/CD process, the organization can automate vulnerability scanning and policy checks before images are pushed into production, significantly reducing the risk of deploying insecure or non-compliant images.

  • Integration with build pipelines: During the build stage of the CI/CD pipeline, Docker Scout can be configured to automatically scan Docker images for vulnerabilities and adherence to security policies. The integration would typically involve adding a Docker Scout scan as a step in the build job, for example, through a GitHub Action. If Docker Scout detects issues such as outdated dependencies, vulnerabilities, or policy violations, the build can be halted and feedback provided to developers immediately. This early detection helps resolve issues long before images are pushed to the hardened image repository (a shell sketch of such a gate follows this list).
  • Validation in the deployment pipeline: As images move from development to production, Docker Scout can be used to perform final validation checks. This step ensures that any security issues that might have arisen since the initial build have been addressed and that the image is compliant with the latest security policies. The deployment process can be gated based on Docker Scout’s reports, preventing insecure images from being deployed. Additionally, Docker Scout’s continuous analysis of SBOMs means that even after deployment, images can be monitored for new vulnerabilities or compliance issues, providing ongoing protection throughout the image lifecycle.
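
A rough shell sketch of such a build-stage gate (the registry, image name, and GIT_SHA variable are placeholders):

    #!/usr/bin/env sh
    set -e

    # Build the candidate image.
    docker build -t registry.example.com/team/app:"$GIT_SHA" .

    # Halt the pipeline if critical or high CVEs are present.
    docker scout cves --exit-code --only-severity critical,high \
      registry.example.com/team/app:"$GIT_SHA"

    # Only images that pass the gate are pushed.
    docker push registry.example.com/team/app:"$GIT_SHA"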

By embedding Docker Scout directly into the CI/CD pipeline (Figure 1), the organization can maintain a proactive approach to security, shifting left in the development process while ensuring that each image deployed is safe, compliant, and up-to-date.

Defense in depth and Docker Scout’s role

In any organization that values security, adopting a defense-in-depth strategy is essential. Defense in depth is a multi-layered approach to security, ensuring that if one layer of defense is compromised, additional safeguards are in place to prevent or mitigate the impact. This strategy is especially important in environments that handle sensitive data or mission-critical operations, where even a single vulnerability can have significant consequences.

Docker Scout plays a vital role in this defense-in-depth strategy by providing a proactive layer of security during the development process. Rather than relying solely on post-submission scans or production monitoring, Docker Scout integrates directly into the development and CI/CD workflows (Figure 2), allowing teams to catch and resolve security issues early. This early detection prevents issues from escalating into more significant risks later in the pipeline, reducing the burden on the SecOps team and speeding up the deployment process.

Furthermore, Docker Scout’s continuous monitoring capabilities mean that images are not only secure at the time of deployment but remain compliant with evolving security standards and new vulnerabilities that may arise after deployment. This ongoing vigilance forms a crucial layer in a defense-in-depth approach, ensuring that security is maintained throughout the entire lifecycle of the container image.

By integrating Docker Scout into the organization’s security processes, teams can build a more resilient, secure, and compliant software environment, ensuring that security is deeply embedded at every stage from development to deployment and beyond.

Learn more

Docker Scout Health Scores: Security Grading for Container Images in Your Docker Hub Repo

By Tazin Progga
July 30, 2024 at 13:29

We are thrilled to introduce Docker Scout health scores, our latest feature designed to make software security simpler and more effective for developers. 

Developer-friendly software security

Docker Scout health scores rate the security and compliance status of container images within Docker Hub, providing a single, quantifiable metric to represent the “health” of an image. This feature addresses one of the key friction points in developer-led software security — the lack of security expertise — and makes it easier for developers to turn critical insights from tools into actionable steps.

How Docker Scout health scores work

Docker Scout health scores utilize an alphabetical grading system to rate images stored in Hub repositories. The scores range from A to F, with A representing the highest overall standing and F the lowest. These health scores are calculated by evaluating images against a set of security and compliance checks based on widely accepted secure supply chain best practices. Factors considered include known vulnerabilities, risky licenses, Software Bill of Materials (SBOM) availability, provenance attestations, freshness of base image, and more. To learn more about these checks and the scoring process, visit our documentation.

Note: To maintain the privacy of these assessments, health scores can only be viewed by users who are members of the Docker Hub organization that owns an image repository and have at least “read” access to the repository.

The power of Docker Scout within Docker Hub

Health scores are powered by Docker Scout, our secure software supply chain tool that empowers organizations to strengthen their containerized application security posture via detailed analysis and insights across the software supply chain. Additionally, Docker Scout evaluates container images against detailed policies to ensure compliance with security and licensing standards.

By embedding Docker Scout’s powerful analysis capabilities into Docker Hub, health scores seamlessly fit into developers’ image lifecycle management workflows. Developers visiting hub.docker.com can leverage up-to-date and dependable assessments of their latest and historical images and take proactive measures to prioritize and improve images with lower scores. This capability is crucial for protecting containerized applications from potential security threats.

Figure 1 shows an example of an image with a low health score. The image was awarded a D score because it contains at least one known, high-profile CVE (think Log4Shell), is missing supply chain attestations (like SBOM and provenance), is using an out-of-date base image, and has specified a default root user.

Figure 1: Sample image with a low health score.
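
Each of those findings maps to a concrete build choice. A hedged sketch of the kind of fix that raises a score, assuming a BuildKit builder recent enough to attach SBOM and provenance attestations (the image names and user are illustrative):

    # Rebuild on a current base image, as a non-root user, with SBOM
    # and provenance attestations attached to the result.
    docker buildx build --sbom=true --provenance=true -t myorg/app:1.1 - <<'EOF'
    FROM python:3.12-slim
    RUN useradd --create-home appuser
    USER appuser
    CMD ["python", "-c", "print('ok')"]
    EOF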

Health scores in Docker Hub 

We’ve made it straightforward for developers to leverage health scores. Users can view them directly within the Docker Hub interface by navigating to their organization’s Repositories tab (Figure 2) or from the detailed view for any given repository (Figure 3). 

Figure 2: Repositories tab — health scores per repository.
Figure 3: Repositories details — health scores per tag.

For those seeking more in-depth analysis, enabling Docker Scout for a specific image repository offers easy access to detailed secure software supply chain insights and recommendations for how to address identified issues (Figure 4).

Figure 4: Image details from Docker Scout.

Proactive security through gamification

In addition to making convoluted secure supply chain insights easier to digest, health scores also introduce an element of gamification. Within our own teams at Docker, we are seeing them motivate developers to improve the container images for which they’re responsible. With the clear, quantifiable A to F metric, developers are taking the initiative to pursue higher scores through proactive steps. This process has fostered a culture of continuous improvement, where our developers are self-motivated to prioritize corrective actions and updates to achieve better scores, thus bolstering the security and compliance of our own portfolio.

Conclusion

By leveraging Docker Scout health scores, we aim to encourage organizations to take proactive steps towards better security and compliance management in their containerized environments and increase the overall resilience of their software supply chain. 

The feature is currently in beta and has been rolled out to a limited number of organizations selected to participate in the early access program. To try out health scores or to give feedback, reach out to our product team on social channels, such as X and Slack.

Learn more

From Misconceptions to Mastery: Enhancing Security and Transparency with Docker Official Images

April 4, 2024 at 14:01

Docker Official Images are a curated set of Docker repositories hosted on Docker Hub that provide a wide range of pre-configured images for popular language runtimes and frameworks, cloud-first utilities, data stores, and Linux distributions. These images are maintained and vetted, ensuring they meet best practices for security, usability, and versioning, making it easier for developers to deploy and run applications consistently across different environments.

Docker Official Images are an important component of Docker’s commitment to the security of both the software supply chain and open source software. Docker Official Images provide thousands of images you can use directly or as a base image when building your own images. For example, there are Docker Official Images for Alpine Linux, NGINX, Ubuntu, PostgreSQL, Python, and Node.js. Visit Docker Hub to search through the currently available Docker Official Images.

In this blog post, we address three common misconceptions about Docker Official Images and outline seven ways they help secure the software supply chain.

3 common misconceptions about Docker Official Images

Even though Docker Official Images have been around for more than a decade and have been used billions of times, they are somewhat misunderstood. Who “owns” Docker Official Images? What is with all those tags? How should you use Docker Official Images? Let’s address some of the more common misconceptions.

Misconception 1: Docker Official Images are controlled by Docker

Docker Official Images are maintained through a partnership between upstream maintainers, community volunteers, and Docker engineers. External developers maintain the majority of Docker Official Images Dockerfiles, with Docker engineers providing insight and review to ensure best practices and uniformity across the Docker Official Images catalog. Additionally, Docker provides and maintains the Docker Official Images build infrastructure and logic, ensuring consistent and secure build environments that allow Docker Official Images to support more than 10 architecture/operating system combinations.

Misconception 2: Docker Official Images are designed for a single use case

Most Docker Official Images repositories offer several image variants and maintain multiple supported versions. In other words, the latest tag of a Docker Official Image might not be the right choice for your use case. 

Docker Official Images tags

The documentation for each Docker Official Images repository contains a “Supported tags and respective Dockerfile links” section that lists all the current tags with links to the Dockerfiles that created the image with those tags (Figure 1). This section can be a little intimidating for first-time users, but keeping in mind a few conventions will allow even novices to understand what image variants are available and, more importantly, which variant best fits their use case.

Figure 1: Documentation showing the current tags with links to the Dockerfiles that created the image with those tags.
  • Tags listed on the same line all refer to the same underlying image. (Multiple tags can point to the same image.) For example, Figure 1 shows the ubuntu Docker Official Images repository, where the 20.04, focal-20240216, and focal tags all refer to the same image.
  • Often the latest tag for a Docker Official Images repository is optimized for ease of use and includes a wide variety of software that is helpful, but not strictly necessary, when using the main software packaged in the Docker Official Image. For example, latest images often include tools like Git and build tools. Because of their ease of use and wide applicability, latest images are often used in getting-started guides.
  • Some operating system and language runtime repositories offer “slim” variants that have fewer packages installed and are therefore smaller. For example, the python:3.12.2-bookworm image contains not only the Python runtime, but also any tool you might need to build and package your Python application — more than 570 packages! Compare this to the python:3.12.2-slim-bookworm image, which has about 150 packages.
  • Many Docker Official Images repositories offer “alpine” variants built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than “slim” variants. For example, the linux/amd64 node:latest image is 382 MB, the node:slim image is 70 MB, and the node:alpine image is 47 MB (the example after this list shows how to compare variant sizes locally).
  • If you see tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as jammy, focal, and bionic), those indicate the codename of the Linux distribution they use as a base image. Debian-release codenames are based on Toy Story characters, and Ubuntu releases use alliterative adjective-animal appellations. Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye).
  • Tags may contain other hints to the purpose of their image variant. Often these are explained later in the Docker Official Images repository documentation. Check the “How to use this image” and/or “Image Variants” sections.
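
Exact sizes vary by version and platform, but comparing variants locally takes only a couple of commands:

    # Pull a few variants and compare their sizes side by side.
    docker pull node:latest
    docker pull node:slim
    docker pull node:alpine
    docker image ls node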

Misconception 3: Docker Official Images do not follow software development best practices

Some critics argue that Docker Official Images go against the grain of best practices, such as not running container processes as root. While it’s true that we encourage users to embrace a few opinionated standards, we also recognize that different use cases require different approaches. For example, some use cases may require elevated privileges for their workloads, and we provide options for them to do so securely.
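
For example, many Docker Official Images run fine as an arbitrary non-root user when the workload allows it; the uid:gid below is arbitrary and purely illustrative:

    # Run a Docker Official Image as a non-root user.
    docker run --rm --user 1000:1000 python:3.12-slim \
      python -c "import os; print(os.getuid())"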

7 ways Docker Official Images help secure the software supply chain

We recognize that security is a continuous process, and we’re committed to providing the best possible experience for our users. Since the company’s inception in 2013, Docker has been a leader in the software supply chain, and our commitment to security — including open source security — has helped to protect developers from emerging threats all along the way.

With the availability of open source software, efficiently building powerful applications and services is easier than ever. The transparency of open source allows unprecedented insight into the security posture of the software you create. But to take advantage of the power and transparency of open source software, fully embracing software supply chain security is imperative. A few ways Docker Official Images help developers build a more secure software supply chain include:

  1. Open build process 

Because visibility is an important aspect of the software supply chain, Docker Official Images are created from a transparent and open build process. The Dockerfile inputs and build scripts are all open source, all Docker Official Images updates go through a public pull request process, and the logs from all Docker Official Images builds are available to inspect (Jenkins / GitHub Actions).

  2. Principle of least privilege

The Docker Official Images build system adheres strictly to the principle of least privilege (POLP), for example, by restricting writes for each architecture to architecture-specific build agents. 

  3. Updated build system 

Ensuring the security of Docker Official Images builds and images is paramount. The Docker Official Images build system is kept up to date through automated builds, regular security audits, collaboration with upstream projects, ongoing testing, and security patches. 

  4. Vulnerability reports and continuous monitoring

Courtesy of Docker Scout, vulnerability insights are available for all Docker Official Images and are continuously updated as new vulnerabilities are discovered. We are committed to continuously monitoring our images for security issues and addressing them promptly. For example, we were among the first to provide reasoned guidance and remediation for the recent xz supply chain attack. We also use insights and remediation guidance from Docker Scout, which surfaces actionable insights in near-real-time by updating CVE results from 20+ CVE databases every 20-60 minutes.

  5. Software Bill of Materials (SBOM) and provenance attestations 

We are committed to providing a complete and accurate SBOM and detailed build provenance as signed attestations for all Docker Official Images. This allows our users to have confidence in the origin of Docker Official Images and easily identify and mitigate any potential vulnerabilities.

  6. Signature validation 

We are working on integrating signature validation into our image pull and build processes. This will ensure that all Docker Official Images are verified before use, providing an additional layer of security for our users.

  7. Increased update frequency 

Docker Official Images provide the best of both worlds: the latest version of the software you want, built upon stable versions of Linux distributions. This allows you to use the latest features and fixes of the software you are running without having to wait for a new package from your Linux distribution or being forced to use an unstable version of your Linux distribution. Further, we are working to increase the throughput of the Docker Official Images build infrastructure to allow us to support more frequent updates for larger swaths of Docker Official Images. As part of this effort, we are piloting builds on GitHub Actions and Docker Build Cloud.

Conclusion

Docker’s leadership in security and protecting open source software has been established through Docker Official Images and other trusted content we provide our customers. We take a comprehensive approach to security, focusing on best practices, tooling, and community engagement, and we work closely with upstream projects and SIGs to address security issues promptly and proactively.

Docker Official Images provide a flexible and secure way for developers to build, ship, test, and run their applications. Docker Official Images are maintained through a partnership between the Docker Official Images community, upstream maintainers/volunteers, and Docker engineers, ensuring best practices and uniformity across the Docker Official Images catalog. Each Docker Official Image offers numerous image variants that cater to different use cases, with tags indicating the purpose of each variant. 

Developers can build using Docker tools and products with confidence, knowing that their applications are built on a secure, transparent foundation. 

Looking to dive in? Get started building with Docker Official Images today.

Learn more

OpenSSH and XZ/liblzma: A Nation-State Attack Was Thwarted, What Did We Learn?

April 1, 2024 at 19:05

I have been recently watching The Americans, a decade-old TV series about undercover KGB agents living disguised as a normal American family in Reagan’s America in a paranoid period of the Cold War. I was not expecting this weekend to be reading mailing list posts of the same type of operation being performed on open source maintainers by agents with equally shadowy identities (CVE-2024-3094).

As The Grugq explains, “The JK-persona hounds Lasse (the maintainer) over multiple threads for many months. Fortunately for Lasse, his new friend and star developer is there, and even more fortunately, Jia Tan has the time available to help out with maintenance tasks. What luck! This is exactly the style of operation a HUMINT organization will run to get an agent in place. They will position someone and then create a crisis for the target, one which the agent is able to solve.”

The operation played out over two years, getting the agent in place, setting up the infrastructure for the attack, hiding it from various tools, and then rushing to get it into Linux distributions before some recent changes in systemd were shipped that would have stopped this attack from working.

In an equally unlikely accident, Andres Freund, a Postgres maintainer, discovered the attack before it had reached the vast majority of systems, thanks to a performance slowdown he noticed almost by chance. Andres says, “I didn’t even notice it while logging in with SSH or such. I was doing some micro-benchmarking at the time and was looking to quiesce the system to reduce noise. Saw sshd processes were using a surprising amount of CPU, despite immediately failing because of wrong usernames etc. Profiled sshd. Which showed lots of cpu time in code with perf unable to attribute it to a symbol, with the dso showing as liblzma. Got suspicious. Then I recalled that I had seen an odd valgrind complaint in my automated testing of Postgres, a few weeks earlier, after some package updates were installed. Really required a lot of coincidences.” 

It is hard to overstate how lucky we were here, as there are no tools that will detect this vulnerability. Even ex-post it is not possible to detect externally as we do not have the private key needed to trigger the vulnerability, and the code is very well hidden. While Linus’s law has been stated as “given enough eyeballs all bugs are shallow,” we have seen in the past this is not always true, or there are just not enough eyeballs looking at all the code we consume, even if this time it worked.

In terms of immediate actions, the attack appears to have been targeted at a subset of OpenSSH servers patched to integrate with systemd. Running SSH servers in containers is rare, and the initial priority should be container hosts, although as the issue was caught early it is likely that few people updated. There is a stream of fixes to liblzma, the xz compression library where the exploit was placed, as the commits from the last two years are examined, although at present there is no evidence that there are exploits for any software other than OpenSSH included. In the Docker Scout web interface, you can search for “lzma” in package names, and issues will be flagged in the “high profile vulnerabilities” policy.
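
From the CLI, a quick way to check whether an image ships liblzma is to inspect its SBOM. A sketch, assuming a recent Scout CLI (the image name is illustrative):

    # List the packages in an image's SBOM and look for xz/liblzma.
    docker scout sbom --format list ubuntu:22.04 | grep -i lzma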

So many commentators have simple technical solutions, and so many vendors are using this to push their tools. As a technical community, we want there to be technical solutions to problems like this. Vendors want to sell their products after events like this, even though none even detected it. Rewrite it in Rust, shoot autotools, stop using GitHub tarballs and checked-in artifacts — the list goes on. These are not bad things to do, and there is no doubt that understandability and clarity are valuable for security, although we often trade them off for performance. It is the case that m4 and autotools are pretty hard to read and understand, while tools like ifunc allow dynamic dispatch even in a mostly static ecosystem. Large investments in the ecosystem to fix these issues would be worthwhile, but we know that attackers would simply find new vectors and weird machines. Equally, there are many naive suggestions about the people side, as if having an identity for open source developers would solve the problem, when there are very genuine people who wish to stay private while state actors can easily create fake identities, or as if maintainers could “just say no” to untrusted people. Beware of people bringing easy solutions; there are so many in this hot-take world.

Where can we go from here? Awareness and observability first. Hyper-awareness, even, as this case shows that small clues matter. Don’t focus on the exact details of this attack, which will be different next time; think more generally. Start by understanding your organization’s software consumption, supply chain, and critical points, and ask what you should be funding to make it different. Then build in resilience: defense in depth, and diversity, not a monoculture. OpenSSH will always be a target because it is so widespread, and precisely because the OpenBSD developers are doing great work, the attack was placed upstream of them. We need a diverse ecosystem with multiple strong solutions, and as an organization you need second suppliers for critical software. The third critical piece of security in this era is recoverability. Planning for the scenario in which the worst case has happened, and understanding the outcomes and the recovery process, is everyone’s homework now, along with making sure you are prepared with tabletop exercises around zero days. 

This is an opportunity for all of us to continue working together to strengthen the open source supply chain, and to work on resilience for when this happens next. We encourage dialogue and discussion on this within Docker communities.

Learn more

Is Your Container Image Really Distroless?

27 mars 2024 à 13:25

Containerization helped drastically improve the security of applications by providing engineers with greater control over the runtime environment of their applications. However, a significant time investment is required to maintain the security posture of those applications, given the daily discovery of new vulnerabilities as well as regular releases of languages and frameworks. 

The concept of “distroless” images offers the promise of greatly reducing the time needed to keep applications secure by eliminating most of the software contained in typical container images. This approach also reduces the amount of time teams spend remediating vulnerabilities, allowing them to focus only on the software they are using. 

In this article, we explain what makes an image distroless, describe tools that make the creation of distroless images practical, and discuss whether distroless images live up to their potential.

2400x1260 is your image really distroless

What’s a distro?

A Linux distribution is a complete operating system built around the Linux kernel, comprising a package management system, GNU tools and libraries, additional software, and often a graphical user interface.

Common Linux distributions include Debian, Ubuntu, Arch Linux, Fedora, Red Hat Enterprise Linux, CentOS, and Alpine Linux (which is more common in the world of containers). These distributions, like most Linux distros, treat security seriously, with teams working diligently to release frequent patches and updates for known vulnerabilities. A key challenge that all Linux distributions face is the usability/security dilemma. 

On its own, the Linux kernel is not very usable, so many utility commands are included in distributions to cover a large array of use cases. Having the right utilities included in the distribution without having to install additional packages greatly improves a distro’s usability. The downside of this increase in usability, however, is an increased attack surface area to keep up to date. 

A Linux distro must strike a balance between these two elements, and different distros take different approaches to doing so. A key point to keep in mind is that a distro that emphasizes usability is not “less secure” than one that does not; rather, a distro with more utility packages requires more effort from its users to keep it secure.

Multi-stage builds

Multi-stage builds allow developers to separate build-time dependencies from runtime ones. Developers can now start from a full-featured build image with all the necessary components installed, perform the necessary build step, and then copy only the result of those steps to a more minimal or even an empty image, called “scratch”. With this approach, there’s no need to clean up dependencies and, as an added bonus, the build stages are also cacheable, which can considerably reduce build time. 

The following example shows a Go program taking advantage of multi-stage builds. Because the Golang runtime is compiled into the binary, only the binary and root certificates need to be copied to the blank slate image.

# Build stage: full Go toolchain on Alpine
FROM golang:1.21.5-alpine AS build
WORKDIR /
COPY go.* .
RUN go mod download
COPY . .
RUN go build -o my-app

# Final stage: an empty image holding only the binary and CA certificates
FROM scratch
COPY --from=build \
  /etc/ssl/certs/ca-certificates.crt \
  /etc/ssl/certs/ca-certificates.crt
COPY --from=build /my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]
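Building this Dockerfile yields an image containing little more than the static Go binary, typically just a few megabytes. As a quick illustrative check (the tag is a placeholder):

> docker build -t my-app .
> docker images my-app --format "{{.Repository}}: {{.Size}}"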

BuildKit

BuildKit, the current engine used by docker build, helps developers create minimal images thanks to its extensible, pluggable architecture. It provides the ability to specify alternative frontends (with the default being the familiar Dockerfile) to abstract and hide the complexity of creating distroless images. These frontends can accept more streamlined and declarative inputs for builds and can produce images that contain only the software needed for the application to run. 

The following example shows the input for a frontend for creating Python applications called mopy by Julian Goede.

#syntax=cmdjulian/mopy
apiVersion: v1
python: 3.9.2
build-deps:
  - libopenblas-dev
  - gfortran
  - build-essential
envs:
  MYENV: envVar1
pip:
  - numpy==1.22
  - slycot
  - ./my_local_pip/
  - ./requirements.txt
labels:
  foo: bar
  fizz: ${mopy.sbom}
project: my-python-app/

So, is your image really distroless?

Thanks to new tools for creating container images like multi-stage builds and BuildKit, it is now a lot more practical to create images that only contain the required software and its runtime dependencies. 

However, many images claiming to be distroless still include a shell (usually Bash) and/or BusyBox, which provides many of the commands a Linux distribution does — including wget — that can leave containers vulnerable to Living off the land (LOTL) attacks. This raises the question, “Why would an image trying to be distroless still include key parts of a Linux distribution?” The answer typically involves container initialization. 

Developers often have to make their applications configurable to meet the needs of their users. Most of the time, those configurations are not known at build time so they need to be configured at run time. Often, these configurations are applied using shell initialization scripts, which in turn depend on common Linux utilities such as sed, grep, cp, etc. When this is the case, the shell and utilities are only needed for the first few seconds of the container’s lifetime. Luckily, there is a way to create true distroless images while still allowing initialization using tools available from most container orchestrators: init containers.

Init containers

In Kubernetes, an init container is a container that starts and must complete successfully before the primary container can start. By using a non-distroless container as an init container that shares a volume with the primary container, the runtime environment and application can be configured before the application starts. 

The lifetime of that init container is short (often just a couple seconds), and it typically doesn’t need to be exposed to the internet. Much like multi-stage builds allow developers to separate the build-time dependencies from the runtime dependencies, init containers allow developers to separate initialization dependencies from the execution dependencies. 

The concept of init containers may be familiar if you are using relational databases, where an init container is often used to perform schema migration before a new version of an application is started.

Kubernetes example

Here are two examples of using init containers. First, using Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: kubecon-postgress-pod
  labels:
    app.kubernetes.io/name: KubeConPostgress
spec:
  containers:
  - name: postgress
    image: laurentgoderre689/postgres-distroless
    securityContext:
      runAsUser: 70
      runAsGroup: 70
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  initContainers:
  - name: init-postgress
    image: postgres:alpine3.18
    env:
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            name: kubecon-postgress-admin-pwd
            key: password
    command: ['docker-ensure-initdb.sh']
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  volumes:
  - name: db
    emptyDir: {}

- - - 

> kubectl apply -f pod.yml && kubectl get pods
pod/kubecon-postgress-pod created
NAME                    READY   STATUS     RESTARTS   AGE
kubecon-postgress-pod   0/1     Init:0/1   0          0s
> kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
kubecon-postgress-pod   1/1     Running   0          10s

Docker Compose example

The init container concept can also be emulated in Docker Compose for local development using service dependencies and conditions.

services:
  db:
    image: laurentgoderre689/postgres-distroless
    user: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
    depends_on:
      db-init:
        condition: service_completed_successfully

  db-init:
    image: postgres:alpine3.18
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data/
    user: postgres
    command: docker-ensure-initdb.sh

volumes:
  pgdata:

- - - 
> docker-compose up 
[+] Running 4/0
 ✔ Network compose_default      Created                                                                                                                      
 ✔ Volume "compose_pgdata"      Created                                                                                                                     
 ✔ Container compose-db-init-1  Created                                                                                                                      
 ✔ Container compose-db-1       Created                                                                                                                      
Attaching to db-1, db-init-1
db-init-1  | The files belonging to this database system will be owned by user "postgres".
db-init-1  | This user must also own the server process.
db-init-1  | 
db-init-1  | The database cluster will be initialized with locale "en_US.utf8".
db-init-1  | The default database encoding has accordingly been set to "UTF8".
db-init-1  | The default text search configuration will be set to "english".
db-init-1  | [...]
db-init-1 exited with code 0
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  starting PostgreSQL 16.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db-1       | 2024-02-23 14:59:33.194 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1       | 2024-02-23 14:59:33.196 UTC [9] LOG:  database system was shut down at 2024-02-23 14:59:32 UTC
db-1       | 2024-02-23 14:59:33.198 UTC [1] LOG:  database system is ready to accept connections

As demonstrated by the previous example, an init container can be used alongside a container to remove the need for general-purpose software and allow the creation of true distroless images. 

Conclusion

This article explained how Docker build tools allow for the separation of build-time dependencies from run-time dependencies to create “distroless” images. For example, using init containers allows developers to separate the logic needed to configure a runtime environment from the environment itself and provide a more secure container. This approach also helps teams focus their efforts on the software they use and find a better balance between security and usability.

Learn more


Announcing Docker Scout Software Supply Chain Solution for Open Source Projects

Par : Ben Cotton
25 janvier 2024 à 15:31

As we announced at DockerCon, we’re now providing a free Docker Scout Team subscription to all Docker-Sponsored Open Source (DSOS) program participants. 

If your open source project participates in the DSOS program, you can start using Docker Scout today. If your open source project is not in the Docker-Sponsored Open Source program, you can check the requirements and apply.

For other customers, Docker Scout is already generally available. Refer to the Docker Scout product page to learn more.

2400x1260 docker scout dsos projects announcement

Why use Docker Scout?

Docker Scout is a software supply chain solution designed to make it easier for developers to identify and fix supply chain issues before they hit production. 

To do this, Docker Scout:

  • Gives developers a centralized view of the tools they already use to see all the critical information they need across the software supply chain 
  • Makes clear recommendations on how to address those issues, including security fixes and opportunities to improve reliability
  • Provides automation that highlights new defects, failures, or issues

Docker Scout allows you to prevent and address flaws where they start. By identifying issues earlier in the software development lifecycle and displaying information in Docker Desktop and the command line, Docker Scout reduces interruptions and rework.

Supply chain security is a big focus in software development, with attention from enterprises and governments. Software is complex, and when security, reliability, and stability issues arise, they’re often the result of an upstream library. So developers don’t just need to address issues in the software they write but also in the software their software uses.

These concerns apply just as much to open source projects as proprietary software. But the focus on improving the software supply chain results in an unfunded mandate for open source developers. A research study by the Linux Foundation found that almost 25% of respondents said the cost of security gaps was “high” or “very high.” Most open source projects don’t have the budget to address these gaps. With Docker Scout, we can reduce the burden on open source projects.

Conclusion

At Docker, we understand the importance of helping open source communities improve their software supply chain. We see this as a mutually beneficial relationship with the open source community. A well-managed supply chain doesn’t just help the projects that produce open source software; it helps downstream consumers through to the end user.

For more information, refer to the Docker Scout documentation.  

Learn more


How to Use OpenPubkey with GitHub Actions Workloads

21 décembre 2023 à 15:33

This post was contributed by Ethan Heilman, CTO at BastionZero.

OpenPubkey is the web’s new technology for adding public keys to standard single sign-on (SSO) interactions with identity providers that speak OpenID Connect (OIDC). OpenPubkey works by essentially turning an identity provider into a certificate authority (CA), which is a trusted entity that issues certificates that cryptographically bind an identity with a cryptographic public key. With OpenPubkey, any OIDC-speaking identity provider can bind public keys to identities today.

OpenPubkey is newly open-sourced through a collaboration of BastionZero, Docker, and the Linux Foundation. We’d love for you to try it out, contribute, and build your own use cases on it. You can check out the OpenPubkey repository on GitHub.

In this article, our goal is to show you how to use OpenPubkey to bind public keys to workload identities. We’ll concentrate on GitHub Actions workloads, because this is what is currently supported by the OpenPubkey open source project. We’ll also briefly cover how Docker is using OpenPubkey with GitHub Actions to sign in-toto attestations on Docker Official Images and improve supply chain security. 

Dark blue text on light blue background reading OpenPubkey with Docker logo

What’s an ID token?

Before we start, let’s review the OpenID Connect protocol. Identity providers that speak OIDC are usually called OpenID Providers, but we will just call them OPs in this article. 

OIDC has an important artifact called an ID token. A user obtains an ID token after they complete their single sign-on to their OP. They can then present the ID token to a third-party service to prove that they have properly been authenticated by their OP.  

The ID token includes the user’s identity (such as their email address) and is cryptographically signed by the OP. The third-party service can validate the ID token by querying the JSON Web Key Set (JWKS) endpoint hosted by the OP, obtaining the OP’s public key, and then using that public key to validate the signature on the ID token. 
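To make that verification step concrete, here is a minimal sketch in Go using the github.com/coreos/go-oidc library; the issuer and audience values are supplied by the caller, and error handling is kept short:

package oidcexample

import (
    "context"
    "fmt"

    "github.com/coreos/go-oidc/v3/oidc"
)

// verifyIDToken checks an ID token's signature against the OP's JWKS
// and validates the audience claim.
func verifyIDToken(ctx context.Context, issuer, audience, rawIDToken string) (*oidc.IDToken, error) {
    // Discovery fetches the OP's configuration, including the JWKS endpoint.
    provider, err := oidc.NewProvider(ctx, issuer)
    if err != nil {
        return nil, fmt.Errorf("discovering OP: %w", err)
    }
    // Verify the signature using the OP's public key and check the audience.
    verifier := provider.Verifier(&oidc.Config{ClientID: audience})
    return verifier.Verify(ctx, rawIDToken)
}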

How do GitHub Actions obtain ID tokens? 

So far, we’ve been talking about human identities (such as email addresses) and how they are used with ID tokens. But our focus in this article is on workload identities. It turns out that GitHub has a nice way to assign ID tokens to GitHub Actions workloads.   

Here’s how it works. GitHub runs an OpenID Provider. When a new GitHub Action is spun up, GitHub first assigns it a fresh API key and secret. The GitHub Action can then use its API key and secret to authenticate to GitHub’s OP. GitHub’s OP can validate this API key and secret (because it knows that it was assigned to the new GitHub Action) and then provide the GitHub Action with an OIDC ID token. This GitHub Action can now use this ID token to identify itself to third-party services.
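Concretely, a workflow step that has been granted the id-token: write permission can fetch its ID token from the runner environment with a plain HTTPS call. This is a minimal sketch of the documented flow; the audience value is a placeholder, and the response is abbreviated:

> curl -s -H "Authorization: bearer $ACTIONS_ID_TOKEN_REQUEST_TOKEN" \
    "$ACTIONS_ID_TOKEN_REQUEST_URL&audience=my-audience"
{"count":1,"value":"eyJhbGciOi..."}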

When interacting with GitHub’s OP, Docker uses the job_workflow_ref claim in the ID token as the workflow’s “identity.” This claim identifies the location of the file the GitHub Action is built from, so it allows the verifier to identify the file that generated the workflow and thus to understand and check the validity of the workflow itself. Here’s an example of how the claim could be set:

job_workflow_ref = octo-org/octo-automation/.github/workflows/oidc.yml@refs/heads/main

Other claims in the ID tokens issued by GitHub’s OP can be useful in other use cases. For example, there is a field called Actor or ActorID, which is the identity of the person who kicked off the GitHub Action. This could be useful for checking that the workload was kicked off by a specific person. (It’s less useful when the workload was started by an automated process.)

GitHub’s OP supports many other useful fields in the ID token. You can learn more about them in the GitHub OIDC documentation.

Creating a PK token for workloads

Now that we’ve seen how to identify workloads using GitHub’s OP, we will see how to bind that workload identity to its public key with OpenPubkey. OpenPubKey does this with a cryptographic object called the PK token. 

To understand how this process works, let’s go back and look at how GitHub’s OP implements the OIDC protocol. The ID tokens generated by GitHub’s OP have a field called audience. Importantly, unlike the identity claims, the audience field is chosen by the OIDC client that requests the ID token. When GitHub’s OP creates the ID token, it includes the audience along with the other fields (like job_workflow_ref and actor) that it signs.

So, in OpenPubkey, the GitHub Action workload runs an OpenPubkey client that first generates a new public-private key pair. Then, when the workload authenticates to GitHub’s OP with OIDC, it sets the audience field equal to the cryptographic hash of the workload’s public key along with some random noise. 

Now, the ID token contains the GitHub OP’s signature on the workload’s identity (the job_workflow_ref field and other relevant fields) and on the hash of the workload’s public key. This is most of what we need to have GitHub’s OP bind the workload’s identity and public key.
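As an illustration of the idea, not the exact wire encoding OpenPubkey uses, the audience value can be thought of as a hash commitment to the workload’s public key plus random noise:

package main

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/sha256"
    "crypto/x509"
    "encoding/base64"
    "fmt"
    "log"
)

func main() {
    // The workload generates a fresh key pair.
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        log.Fatal(err)
    }
    pubDER, err := x509.MarshalPKIXPublicKey(&key.PublicKey)
    if err != nil {
        log.Fatal(err)
    }

    // Random noise prevents guessing the public key from the audience value.
    noise := make([]byte, 32)
    if _, err := rand.Read(noise); err != nil {
        log.Fatal(err)
    }

    // The audience claim commits to the public key and the noise.
    h := sha256.Sum256(append(pubDER, noise...))
    fmt.Println("audience =", base64.RawURLEncoding.EncodeToString(h[:]))
}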

In fact, the PK token is a JSON Web Signature (JWS) which roughly consists of:

  • The ID token, including the audience field, which contains a hash of the workload’s public key.
  • The workload’s public key.
  • The random noise used to compute the hash of the workload’s public key.
  • A signature, under the workload’s public key, of all the information in the PK token. (This signature acts as a cryptographic proof that the user has access to the user-held secret signing key that is certified in the PK token.)

The PK token can then be presented to any OpenPubkey verifier, which uses OIDC to obtain the GitHub OP’s public key from its JWKS endpoint. The verifier verifies the ID token using the GitHub OP’s public key and then verifies the rest of the fields in the PK token using the workload’s public key. Now the verifier knows the public key of the workload (as identified by its job_workflow_ref or other fields in the ID token) and can use this public key for whatever cryptography it needs to do.

Can you use ephemeral keys with OpenPubkey?

Yes! An ephemeral key is a key that is only used for a short period of time. Ephemeral keys are nice because there is no need for long-term management of the private key; it can be deleted when it is no longer needed, which improves security and reduces operational overhead.

Here’s how to do this with OpenPubkey. You choose a public-private key pair, authenticate to the OP to obtain a PK token for the public key, sign your object using the private key, and finally throw away the private key.   

One-time-use PK token 

We can take this a step further and ensure that a PK token may only be associated with a single signed object. Here’s how it works. To start, we take a hash of the object to be signed. Then, when the workload authenticates to GitHub’s OP, it sets the audience claim equal to the cryptographic hash of the following items:

  • The public key 
  • The hash of the object to be signed
  • Some random noise

Finally, the OpenPubkey verifier obtains the signed object and its one-time-use PK token, and then validates the PK token, additionally checking that the hash of the signed object is included in the audience claim. Now you have a one-time-use PK token. You can learn more about this feature of OpenPubkey in the repo.
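On the verification side, again as an illustrative sketch rather than OpenPubkey’s exact format, the verifier recomputes the commitment from the public key, the object hash, and the noise, and compares it to the token’s audience claim:

package pktoken

import (
    "crypto/sha256"
    "crypto/subtle"
    "encoding/base64"
)

// audMatches reports whether aud commits to the given public key,
// signed-object hash, and noise. The concatenation order is an
// assumption for illustration; a real implementation must use a
// fixed, unambiguous encoding.
func audMatches(aud string, pubDER, objHash, noise []byte) bool {
    msg := append(append(append([]byte{}, pubDER...), objHash...), noise...)
    h := sha256.Sum256(msg)
    want := base64.RawURLEncoding.EncodeToString(h[:])
    return subtle.ConstantTimeCompare([]byte(aud), []byte(want)) == 1
}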

How will Docker use OpenPubkey to sign Docker Official Images?

Docker will be using OpenPubkey with GitHub Actions workloads to sign in-toto attestations on Docker Official Images. Docker Official Images will be created using a GitHub Action workload. The workload creates a fresh ephemeral public-private key pair, obtains the PK token for the public key via OpenPubkey, and finally signs attestations on the image using the private key.  

The private key is then deleted, and the image, its signature, and the PK token will be made available on the Docker Hub container registry. This approach is nice because it doesn’t require the signer to maintain or store the private key.

Docker’s container signing use case also relies heavily on The Update Framework (TUF), another Linux Foundation open source project. Read “Signing Docker Official Images Using OpenPubkey” for more details on how it all works.

What else can you do with OpenPubkey and GitHub Actions workloads?

Check out the following ideas on how to put OpenPubkey and GitHub Actions to work for you.

Signing private artifacts with a one-time key 

Consider signing artifacts that will be stored in a private repository. You can use OpenPubkey if you want a GitHub Action to cryptographically sign an artifact using a one-time-use key. A nice thing about this approach is that it doesn’t require you to expose information in a public repository or transparency log. Instead, you only need to post the artifact, its signature, and its PK token in the private repository. This capability is useful for private code repositories or internal build systems where you don’t want to reveal to the world what is being built, by whom, when, or how frequently. 

If relevant, you could also consider using the actor and actor-ID claim to bind the human who builds a particular artifact to the signed artifact itself. 

Authenticating workload-to-workload communication

Suppose you want one workload (call it Bob) to process an artifact created by another workload (call it Alice). If the Alice workload is a GitHub Action, the artifact it creates can be signed using OpenPubkey and passed on to the Bob workload, which uses an OpenPubkey verifier to verify it using the GitHub OP’s public key (obtained from the GitHub OP’s JWKS URL). This approach might be useful in a multi-stage CI/CD process.

And other things, too! 

These are just strawman ideas. The whole point of this post is for you to try out OpenPubkey, contribute, and build your own use cases on it.

Other technical issues we need to think about

Before we wrap up, we need to discuss a few technical questions.

Aren’t ID tokens supposed to remain private?

You might worry about applications of OpenPubkey where the ID token is broadly exposed to the public inside the PK token. For example, in the Docker Official Image signing use case, the PK tokens are made available to the public on the Docker Hub container registry. If the ID token is broadly exposed to the public, there is a risk that it could be replayed and used for unauthorized access to other services.  

For this reason, we have a slightly different PK token for applications where the PK token is made broadly available to the public. 

For those applications, OpenPubkey strips the OP’s signature from the ID token before including the ID token in the PK token. The OP’s signature is replaced with a Guillou-Quisquater (GQ) non-interactive proof of knowledge of an RSA signature (also known as a “GQ signature”). Now the ID token cannot be replayed against other services, because the OP’s signature has been removed, but the security of the OP’s RSA signature is maintained by the GQ signature.   

So, in applications where the PK token must be broadly exposed to the public, the PK token is a JSON Web Signature, which consists of:

  • The ID token excluding the OP’s signature
  • A GQ signature on the ID token
  • The user’s public key
  • The random noise used to compute the hash of the user’s public key
  • A signature, under the user’s public key, of all the information in the PK token 

The GQ signature allows the client to prove that the ID token was validly signed by the OP, without revealing the OP’s signature. The OpenPubkey client generates the GQ signature to cryptographically prove that the client knows the OP’s signature on the ID token, while still keeping the OP’s signature secret. GQ signatures only work with RSA, but this is fine because every OpenID Connect provider is required to support RSA.

Because GQ signatures are larger and slower than regular signatures, we recommend using them only for use cases where the PK token must be made broadly available to the public. BastionZero’s infrastructure access use case does not use GQ signatures because it does not require the PK token to be made public. Instead, the user only exposes their PK token to the target (e.g., server, container, cluster, database) that they want to access; this is the same way an ID token is usually exposed with OpenID Connect.   

GQ signatures might not be necessary when authenticating workload-to-workload communications; if the Alice workload passes the signed artifact and its PK token only to the Bob workload, there is less of a concern, as the PK token is not broadly exposed to the public.

What happens when the OP rotates its OpenID Connect key? 

OPs have OpenID Connect signing keys that change over time (e.g., every two weeks). What happens if we need to use a PK token after the OP rotates the OpenID Connect key that signed it? 

For some use cases, the lifetime of a PK token is short. With BastionZero’s infrastructure access use case, for instance, a PK token will not be used for longer than 24 hours. In this use case, these timing problems are solved by (1) having the user re-authenticate to the IdP and create a new PK token whenever the IdP rotates its key, and (2) having the OpenPubkey verifier check that the client also holds a valid OIDC refresh token along with the PK token whenever the ID token expires.

For other use cases, the PK token has a long life, so we do need to worry about the OP rotating its OpenID Connect keys. With Docker’s attestation signing use case, this problem is solved by having TUF additionally store a historical log of the OP’s signing keys. Anyone can keep a historical log of the OP’s public keys for use after they expire. In fact, we envision a future where OPs might keep this historical log themselves. 

That’s it for now! You can check out the OpenPubkey repo on GitHub. We’d love for you to join the project, contribute, and identify other use cases where OpenPubkey might be useful.

Learn more

Achieve Security and Compliance Goals with Policy Guardrails in Docker Scout

Par : Tazin Progga
9 novembre 2023 à 17:00

At DockerCon 2023, we announced the General Availability (GA) of Docker Scout. We built Docker Scout for modern application teams, to help developers navigate the complexities and challenges of the software supply chain through actionable insights. 

The Scout GA release introduced several new capabilities, including a policy-driven evaluation mechanism, aka guardrails, that helps developers prioritize their insights to better align their work with organizational standards and industry best practices. 

In this article, we will walk through how Docker Scout policies enable teams to identify, prioritize, and fix their software quality issues at the point of creation — the developer inner loop (i.e., local development, building, and testing) — so that they can meet their organization’s security and reliability standards without compromising their speed of execution and innovation. 

Docker Scout logo with shield and checkmark on black background

Prioritizing problems

When implementing software supply chain tools and processes, organizations often encounter a daunting wall of issues in their software. The sheer volume of these issues (ranging from vulnerabilities in code to malicious third-party dependencies, compromised build systems, and more) makes it difficult for development teams to balance shipping new features and improving their product. In such situations, policies play a crucial role in helping developers prioritize which problems to fix first by providing clear guidelines and criteria for resolution. 

Docker Scout’s out-of-the-box policies align with software supply chain best practices to maintain up-to-date base images, remove high-risk vulnerabilities, check for undesirable licenses, and look for other issues to help organizations maintain the quality of the artifacts they’re building or consuming (Figure 1). 

Screenshot of Docker policy results, showing boxes for separate categories including All critical vulnerabilities, Base images not updated, Critical and high vulnerabilities with fixes, and High profile CVEs.
Figure 1: A summary of available policies in Docker Scout.

These policies bring developers critical insights about their container images and enable them to focus on prioritizing new issues as they come in and to identify which pre-existing issues require their attention. In fact, developers can get these insights right from their local machine, where it is much faster and less expensive to iterate than later in the supply chain, such as in CI, or even later in production (Figure 2).

Screenshot showing details of analyzed image, with "policy failed" result.
Figure 2: Policy evaluation results in CLI.
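For instance, developers can evaluate policies locally with the Docker Scout CLI before a build ever reaches CI. A minimal sketch, with placeholder image and organization names (output and flags vary by CLI version):

> docker scout policy myorg/my-image:latest --org myorg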

Make things better

Docker Scout also adopts a more pragmatic and flexible approach when it comes to policy. Traditional policy solutions typically follow a binary pass/fail evaluation model that imposes rigid, one-size-fits-all targets, like mandating “fewer than 50 vulnerabilities” where failure is absolute. Such an approach overlooks nuanced situations or intermediate states, which can cause friction with developer workflows and become a main impediment to successful adoption of policies. 

In contrast, Docker Scout’s philosophy revolves around a simple premise: “Make things better.” This premise means the first step in every release is not to get developers to zero issues but to prevent regression. Our approach acknowledges that although projects with complex, extensive codebases have existing quality gaps, it is counterproductive to place undue pressure on developers to fix everything, everywhere, all at once.

By using Docker Scout, developers can easily track what has worsened in their latest builds (from the website, the CLI and CI pipelines) and only improve the issues relevant to their policies (Figures 3 and 4).

Screenshot of Docker policies showing improved items in table with Analyzed, Comparison, and Change columns.
Figure 3: Outcomes driven by Docker Scout Policy.
Screenshot of Docker policies results showing changes with 2 improved, 0 worsened, and 1 missing data.
Figure 4: Pull Request diff from the Scout GitHub Action.
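The same diff-driven workflow is available from the CLI, which compares two images and reports what got better or worse. A minimal sketch with placeholder tags (flags may vary by CLI version):

> docker scout compare myorg/my-image:pr-42 --to myorg/my-image:latest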

But, finding and prioritizing the right problems is only half of the effort. For devs to truly “make things better,” the second step they must take is toward fixing these issues. According to a recent survey of 500 developers conducted by GitHub, the primary areas where development teams spend most of their time include writing code (32%) and identifying and addressing security vulnerabilities (31%). This is far from ideal, as it means that developers are spending less time driving innovation and user value. 

With Docker Scout, we aim to address this challenge head-on by providing developers access to automated, in-context remediation guidance (Figure 5). By actively suggesting upgrade and remediation paths, Docker Scout helps to bring teams’ container images back in line with policies, reducing their mean time to repair (MTTR) and freeing up more of their time to create value.

 Screenshot of Docker Scout policies showing recommendations such as Major OS version update.
Figure 5: Example scenario for the ‘Base images not up to date’ policy.

While Docker Scout initially helps teams prioritize the direction of improvement, once all the existing critical software issues have been effectively addressed, developers can transition to employing the policies to achieve full compliance. This process ensures that going forward, all container images are void of the specific issues deemed vital to their organization’s code quality, compliance, and security goals. 

The Docker Scout team is excited to help our customers build software that meets the highest standards of safety, efficiency, and quality in a rapidly evolving ecosystem within the software supply chain. To get started with Docker Scout, visit our product page today.

Learn more

Announcing Docker Scout GA: Actionable Insights for the Software Supply Chain

Par : Docker Team
4 octobre 2023 à 15:50

We are excited to announce that Docker Scout General Availability (GA) now allows developers to continuously evaluate container images against a set of out-of-the-box policies, aligned with software supply chain best practices. These new capabilities also include a full suite of integrations enabling you to attain visibility from development into production. These updates strengthen Docker Scout’s position as integral to the software supply chain. 

banner docker scout general availability

For developers building modern cloud-native applications, Docker Scout provides actionable insights in real-time to make it simple to secure and manage their software supply chain end-to-end. Docker Scout meets developers in the Docker tools they use daily, and provides the richest library of open source trusted content, policy evaluation, and automated remediation in real-time across the entire software supply chain. From the first base image pulled all the way to git commit, CI pipeline, and deployed workloads in production, Docker Scout leverages its large open ecosystem and the container runtime as vehicles for insights developers can easily act upon.

With Docker Scout operating as the system of record for the software supply chain, developers have access to real-time insights, identification of anomalies in their applications, and automated recommendations to improve application integrity, in tandem with Docker Desktop, Docker CLI, Docker Hub, and the full suite of Docker products. Docker Scout is designed to see the bigger picture and address challenges that are layered into the software supply chain.

Software supply chain

Through many in-depth conversations with our customers, we uncovered a clear trend: Developers are increasingly aware about not consuming content they don’t trust or haven’t permitted within their accepted range of organizational policies. To solve for this, Docker provides Docker Official Images, a curated set of Docker repositories hosted on Docker Hub. These images provide essential base repositories that serve as the starting point for most users. Docker and Docker Scout will continue to provide additional forms of software supply chain metadata for the Docker Official Image catalog in the coming months.

Trusted content is the foundation of secure software applications. A key aspect of this foundation is Docker Hub, the largest and most-used source of secure software artifacts, which includes Docker Official Images, Docker Verified Publishers, and Docker-Sponsored Open Source trusted content. Docker Scout policies leverage this metadata to track the life cycle of images, generate unique insights for developers, and help customers automate the enhancement of their software supply chain objectives — from inner loop to production.

Ensuring the reliability of applications requires constant vigilance and in-depth software design considerations, from selecting dependencies in a JSON file, to running compilations and associated tests, to ensuring the safety of every image built. While some monitoring platforms focus solely on policies for container images currently running in production, the new Docker Scout GA release recognizes that this is not sufficient to oversee the entire software supply chain, since that approach occurs too late in the development process.

Docker Scout offers a seamless set of actionable insights and suggested workflows that meet developers where they build and monitor today. These insights and workflows are particularly helpful for developers building Docker container images based on Docker Trusted Content (Docker Official Images, Docker-Sponsored Open Source, Docker Verified Publishers), and for many data sources beyond that, including data sourced from integrations with JFrog Artifactory, Amazon ECR, Sysdig Runtime Monitoring, GitHub Actions, GitLab, and CircleCI (Figure 1).

Docker provides Docker Official Images, a curated set of Docker repositories hosted on Docker Hub. These images provide essential base repositories that serve as the starting point for most users.
Figure 1: Integrate Docker Scout with tools you already use.

Policy evaluation

Current policy solutions support only a binary outcome, where artifacts are either allowed or denied based on the oversimplified results from policy analysis. This pass/fail approach and the corresponding gatekeeping is often too rigid and does not consider nuanced situations or intermediate states, leading to day-to-day friction around implementing traditional policy enforcement. Docker Scout now goes deeper, not only by indicating more subtle deviations from policy, but also by actively suggesting upgrade and remediation paths to bring you back within policy guidelines, reducing MTTR in the process.

Additionally, Docker Scout Policy provides a productivity boost, because developers no longer have to wait for CI/CD to finish to know whether they need to upgrade dependencies. Evaluating policies with Docker Scout prevents last-minute security blockers in the CI/CD pipeline that can impact release dates, which is one of the primary reasons teams tend not to adopt policy evaluation solutions.

Policy can take many forms: specifying the repos or sources from which it is safe to pull components or libraries, requiring secure and verified build processes, and continuously monitoring for issues after deployment. Docker Scout is now equipped to expand the types of policies that can be implemented within these wider sets of definitions (Figures 2 and 3).

docker scout ga f2
Figure 2: Docker Scout dashboard showing policy violations with fixes.
docker scout ga f3
Figure 3: Overview of the security status of images across all Docker-enabled repos over time.

Policy for maximal coverage

Our vision for policy is to allow developers to define what’s most developer-friendly and secure for their needs and their environments. While there’s a common thread across some industries, every enterprise has nuances. Whether setting security policies or aligning to software development life cycle tooling best practices, Docker Scout’s goal is to continuously evaluate container images, assisting teams to incrementally improve security posture within their cloud infrastructure.

These new capabilities include built-in out-of-the-box policies to maintain up-to-date base images, track associated vulnerabilities, and monitor for relevant in-scope licenses. Through evaluation results, users can see the status of policies for each image. Users will have an aggregated view of their policy results so they know what to focus on, as well as the ability to evaluate those results in more detail to understand what has changed within a given set of policies in a repo list view (Figure 4).

Policy status: failed (3/4 policies violated)
Figure 4: View from the CLI.

What’s next?

Docker Scout is designed to be with you in every step of improving developer workflows — from helping developers understand which actions to take to improve code reliability and bring it back in line with policy, to ensuring optimal code performance.

With the next phase on the horizon, we’re gearing up to deliver more value to our customers and always welcome feedback through our Docker Scout Design Partner Program. For now, the Docker Scout team is excited to get the latest solutions into our customers’ hands, ensuring safety, efficiency, and quality in a rapidly evolving ecosystem within the software supply chain.

Developers in our Docker-Sponsored Open Source (DSOS) Program will soon be able to access our Docker Scout Team plan, which includes unlimited local image analysis, as well as up to 100 repos for remote images, SDLC integrations, security posture reporting, and policy evaluation. Once Docker Scout is enabled for the DSOS program in late 2023, DSOS members can enable it on up to 100 repositories within their DSOS-approved namespace.

To learn more about Docker Scout, visit the Docker Scout product page.

Learn more

5 Benefits of a Container-First Approach to Software Development

Par : Ben Cotton
14 août 2023 à 14:00

Cargo containers completely transformed the shipping industry and enabled the global commerce experience of today. Similarly, software containers simplify application development and deployment, which helps enable the cloud-native software architecture that powers the modern technology we all rely on. Although you can get benefits from containerizing your applications after the fact, you get the most value when you take a container-first approach. 

In this blog post, we discuss insights from Cracking the Code: Effectively Managing All of Those Applications, highlighting five benefits of embracing a container-first approach to software development.

banner cracking the code effectively managing all of those applications
  1. Consistent and reliable software performance

Inconsistency can be a major roadblock to progress. The all-too-familiar frustration of “it works on my machine” can cause software delivery delays and hinder collaboration. But with containers comes standardization, which ensures that software performs consistently across the entire development process, regardless of the underlying environment.

Developers and infrastructure engineers also save a lot of time and cognitive energy on configuring and maintaining their environments and workstations. Containers have a small resource footprint, which means your infrastructure can do more with less. And, because each container includes the exact versions of software it needs, you don’t have to worry about conflicting dependencies.

  2. Fewer bugs

Bugs are the bane of every software developer’s existence. However, a container-first approach provides environmental parity. This means that the development, staging, and production environments remain consistent, reducing the likelihood of encountering bugs caused by disparities in underlying conditions. With containers, businesses can significantly reduce debugging time and enhance the overall quality of their software, leading to higher user satisfaction and a stronger competitive edge.

  3. Faster developer onboarding

The learning curve for new developers can often be steep, especially when dealing with complex software environments. Containers revolutionize developer onboarding by providing a replica of the exact environment in which an application will be tested and executed. This is irrespective of the developer’s local operating system or installed libraries. With containers, developers can hit the ground running, accelerating their productivity and contributing to the project’s success from day one.

  4. A more secure supply chain

The Consortium for Information & Software Quality estimates that poor software quality has cost the United States economy $2.41 trillion. Two of the top causes are criminals exploiting vulnerabilities and supply chain problems with third-party software. Containers can help.

Because the Dockerfile is a recipe for creating the container, you can use it to produce a software bill of materials (SBOM). This makes clear what dependencies — including the specific version — go into building the container. Cryptographically signed SBOMs let you verify the provenance of your dependencies, so you can be sure that the upstream library you’re using is the actual one produced by the project.

Using the SBOM, you can also monitor your fleet of containers for known vulnerabilities. When a new vulnerability is discovered, you can quickly tell which of your containers are affected, which makes the response quicker. Containers also provide isolation, micro-segmentation, and other zero-trust techniques, which reduce your attack surface and limit the impact of exploited vulnerabilities.
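For example, BuildKit can attach an SBOM attestation at build time, and Docker Scout can read it back and check it for known CVEs. A minimal sketch with a placeholder image name (exact flags depend on your Buildx and Scout versions):

> docker buildx build --sbom=true -t myorg/my-app:latest --push .
> docker scout sbom myorg/my-app:latest
> docker scout cves myorg/my-app:latest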

  5. Improved productivity for faster time-to-market

The standardization, consistency, and security containers bring directly impact software delivery time. With fewer issues to deal with (bugs, compatibility issues, maintenance, etc.), developers can focus on more meaningful tasks and ultimately deliver solutions to customers faster. All of this helps development teams work more efficiently, collaborate effectively, and deliver higher-quality software.

Learn more

Dive deeper into the world of containers and the benefits of adopting a container-first model in your software development by downloading the full white paper, Cracking the Code: Effectively Managing All of Those Applications.
