
From Misconceptions to Mastery: Enhancing Security and Transparency with Docker Official Images

April 4, 2024, 2:01 PM

Docker Official Images are a curated set of Docker repositories hosted on Docker Hub that provide a wide range of pre-configured images for popular language runtimes and frameworks, cloud-first utilities, data stores, and Linux distributions. These images are maintained and vetted, ensuring they meet best practices for security, usability, and versioning, making it easier for developers to deploy and run applications consistently across different environments.

Docker Official Images are an important component of Docker’s commitment to the security of both the software supply chain and open source software. Docker Official Images provide thousands of images you can use directly or as a base image when building your own images. For example, there are Docker Official Images for Alpine Linux, NGINX, Ubuntu, PostgreSQL, Python, and Node.js. Visit Docker Hub to search through the currently available Docker Official Images.
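
If you prefer the command line, you can also query Docker Hub for official images with docker search; a minimal sketch (the postgres query is just an example):

> docker search --filter is-official=true postgres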

In this blog post, we address three common misconceptions about Docker Official Images and outline seven ways they help secure the software supply chain.


3 common misconceptions about Docker Official Images

Even though Docker Official Images have been around for more than a decade and have been used billions of times, they are somewhat misunderstood. Who “owns” Docker Official Images? What is with all those tags? How should you use Docker Official Images? Let’s address some of the more common misconceptions.

Misconception 1: Docker Official Images are controlled by Docker

Docker Official Images are maintained through a partnership between upstream maintainers, community volunteers, and Docker engineers. External developers maintain the majority of Docker Official Images Dockerfiles, with Docker engineers providing insight and review to ensure best practices and uniformity across the Docker Official Images catalog. Additionally, Docker provides and maintains the Docker Official Images build infrastructure and logic, ensuring consistent and secure build environments that allow Docker Official Images to support more than 10 architecture/operating system combinations.

Misconception 2: Docker Official Images are designed for a single use case

Most Docker Official Images repositories offer several image variants and maintain multiple supported versions. In other words, the latest tag of a Docker Official Image might not be the right choice for your use case. 

Docker Official Images tags

The documentation for each Docker Official Images repository contains a “Supported tags and respective Dockerfile links” section that lists all the current tags with links to the Dockerfiles that created the image with those tags (Figure 1). This section can be a little intimidating for first-time users, but keeping in mind a few conventions will allow even novices to understand what image variants are available and, more importantly, which variant best fits their use case.

Figure 1: Documentation showing the current tags with links to the Dockerfiles that created the image with those tags.
  • Tags listed on the same line all refer to the same underlying image. (Multiple tags can point to the same image.) For example, Figure 1 shows the ubuntu Docker Official Images repository, where the 20.04, focal-20240216, and focal tags all refer to the same image.
  • Often the latest tag for a Docker Official Images repository is optimized for ease of use and includes a wide variety of software that is helpful, but not strictly necessary, when using the main software packaged in the Docker Official Image. For example, latest images often include tools like Git and build tools. Because of their ease of use and wide applicability, latest images are often used in getting-started guides.
  • Some operating system and language runtime repositories offer “slim” variants that have fewer packages installed and are therefore smaller. For example, the python:3.12.2-bookworm image contains not only the Python runtime, but also any tool you might need to build and package your Python application — more than 570 packages! Compare this to the python:3.12.2-slim-bookworm image, which has about 150 packages.
  • Many Docker Official Images repositories offer “alpine” variants built on top of the Alpine Linux distribution rather than Debian or Ubuntu. Alpine Linux is focused on providing a small, simple, and secure base for container images, and Docker Official Images alpine variants typically aim to install only necessary packages. As a result, Docker Official Images alpine variants are typically even smaller than “slim” variants. For example, the linux/amd64 node:latest image is 382 MB, the node:slim image is 70 MB, and the node:alpine image is 47 MB. (A quick way to compare these sizes locally is shown after this list.)
  • If you see tags with words that look like Toy Story characters (for example, bookworm, bullseye, and trixie) or adjectives (such as jammy, focal, and bionic), those indicate the codename of the Linux distribution they use as a base image. Debian-release codenames are based on Toy Story characters, and Ubuntu releases use alliterative adjective-animal appellations. Linux distribution indicators are helpful because many Docker Official Images provide variants built upon multiple underlying distribution versions (for example, postgres:bookworm and postgres:bullseye).
  • Tags may contain other hints to the purpose of their image variant. Often these are explained later in the Docker Official Images repository documentation. Check the “How to use this image” and/or “Image Variants” sections.
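
One quick way to compare variants is to pull a few tags and list the repository locally; a minimal sketch using the node examples above (exact sizes will differ by platform and release):

> docker pull node:slim
> docker pull node:alpine
> docker images node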

Misconception 3: Docker Official Images do not follow software development best practices

Some critics argue that Docker Official Images go against the grain of best practices, such as not running container processes as root. While it’s true that we encourage users to embrace a few opinionated standards, we also recognize that different use cases require different approaches. For example, some use cases may require elevated privileges for their workloads, and we provide options for them to do so securely.
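
For instance, while many Docker Official Images start as root by default, you can drop privileges at run time; a minimal sketch using an arbitrary unprivileged UID (the image and UID are just examples, and images that need root-owned setup steps may require more than this):

> docker run --rm --user 1000:1000 python:3.12-slim python -c "import os; print(os.getuid())"
1000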

7 ways Docker Official Images help secure the software supply chain

We recognize that security is a continuous process, and we’re committed to providing the best possible experience for our users. Since the company’s inception in 2013, Docker has been a leader in the software supply chain, and our commitment to security — including open source security — has helped to protect developers from emerging threats all along the way.

With the availability of open source software, efficiently building powerful applications and services is easier than ever. The transparency of open source allows unprecedented insight into the security posture of the software you create. But to take advantage of the power and transparency of open source software, fully embracing software supply chain security is imperative. A few ways Docker Official Images help developers build a more secure software supply chain include:

  1. Open build process 

Because visibility is an important aspect of the software supply chain, Docker Official Images are created from a transparent and open build process. The Dockerfile inputs and build scripts are all open source, all Docker Official Images updates go through a public pull request process, and the logs from all Docker Official Images builds are available to inspect (Jenkins / GitHub Actions).

  2. Principle of least privilege

The Docker Official Images build system adheres strictly to the principle of least privilege (POLP), for example, by restricting writes for each architecture to architecture-specific build agents. 

  3. Updated build system

Ensuring the security of Docker Official Images builds and images is paramount. The Docker Official Images build system is kept up to date through automated builds, regular security audits, collaboration with upstream projects, ongoing testing, and security patches. 

  4. Vulnerability reports and continuous monitoring

Courtesy of Docker Scout, vulnerability insights are available for all Docker Official Images and are continuously updated as new vulnerabilities are discovered. We are committed to continuously monitoring our images for security issues and addressing them promptly. For example, we were among the first to provide reasoned guidance and remediation for the recent xz supply chain attack. We also use insights and remediation guidance from Docker Scout, which surfaces actionable insights in near-real-time by updating CVE results from 20+ CVE databases every 20-60 minutes.
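
As a rough sketch of what this continuous monitoring surfaces, the Docker Scout CLI plugin can list known CVEs for an image (the image name is just an example):

> docker scout cves postgres:16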

  5. Software Bill of Materials (SBOM) and provenance attestations

We are committed to providing a complete and accurate SBOM and detailed build provenance as signed attestations for all Docker Official Images. This allows our users to have confidence in the origin of Docker Official Images and easily identify and mitigate any potential vulnerabilities.
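
A minimal sketch of producing and inspecting such attestations with BuildKit, assuming a hypothetical image name and a registry you can push to:

> docker buildx build --sbom=true --provenance=true -t myorg/my-app:latest --push .
> docker buildx imagetools inspect myorg/my-app:latest --format "{{ json .Provenance }}"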

  6. Signature validation

We are working on integrating signature validation into our image pull and build processes. This will ensure that all Docker Official Images are verified before use, providing an additional layer of security for our users.

  7. Increased update frequency

Docker Official Images provide the best of both worlds: the latest version of the software you want, built upon stable versions of Linux distributions. This allows you to use the latest features and fixes of the software you are running without having to wait for a new package from your Linux distribution or being forced to use an unstable version of your Linux distribution. Further, we are working to increase the throughput of the Docker Official Images build infrastructure to allow us to support more frequent updates for larger swaths of Docker Official Images. As part of this effort, we are piloting builds on GitHub Actions and Docker Build Cloud.

Conclusion

Docker’s leadership in security and protecting open source software has been established through Docker Official Images and other trusted content we provide our customers. We take a comprehensive approach to security, focusing on best practices, tooling, and community engagement, and we work closely with upstream projects and SIGs to address security issues promptly and proactively.

Docker Official Images provide a flexible and secure way for developers to build, ship, test, and run their applications. Docker Official Images are maintained through a partnership between the Docker Official Images community, upstream maintainers/volunteers, and Docker engineers, ensuring best practices and uniformity across the Docker Official Images catalog. Each Docker Official Image offers numerous image variants that cater to different use cases, with tags indicating the purpose of each variant. 

Developers can build using Docker tools and products with confidence, knowing that their applications are built on a secure, transparent foundation. 

Looking to dive in? Get started building with Docker Official Images today.

Learn more

Is Your Container Image Really Distroless?

March 27, 2024, 1:25 PM

Containerization helped drastically improve the security of applications by providing engineers with greater control over the runtime environment of their applications. However, a significant time investment is required to maintain the security posture of those applications, given the daily discovery of new vulnerabilities as well as regular releases of languages and frameworks. 

The concept of “distroless” images offers the promise of greatly reducing the time needed to keep applications secure by eliminating most of the software contained in typical container images. This approach also reduces the amount of time teams spend remediating vulnerabilities, allowing them to focus only on the software they are using. 

In this article, we explain what makes an image distroless, describe tools that make the creation of distroless images practical, and discuss whether distroless images live up to their potential.


What’s a distro?

A Linux distribution is a complete operating system built around the Linux kernel, comprising a package management system, GNU tools and libraries, additional software, and often a graphical user interface.

Common Linux distributions include Debian, Ubuntu, Arch Linux, Fedora, Red Hat Enterprise Linux, CentOS, and Alpine Linux (which is more common in the world of containers). These distributions, like most Linux distros, take security seriously, with teams working diligently to release frequent patches and updates for known vulnerabilities. A key challenge that all Linux distributions must face involves the usability/security dilemma.

On its own, the Linux kernel is not very usable, so many utility commands are included in distributions to cover a large array of use cases. Having the right utilities included in the distribution without having to install additional packages greatly improves a distro’s usability. The downside of this increase in usability, however, is an increased attack surface area to keep up to date. 

A Linux distro must strike a balance between these two elements, and different distros have different approaches to doing so. A key aspect to keep in mind is that a distro that emphasizes usability is not “less secure” than one that does not. Rather, the distro with more utility packages requires more effort from its users to keep it secure.

Multi-stage builds

Multi-stage builds allow developers to separate build-time dependencies from runtime ones. Developers can now start from a full-featured build image with all the necessary components installed, perform the necessary build step, and then copy only the result of those steps to a more minimal or even an empty image, called “scratch”. With this approach, there’s no need to clean up dependencies and, as an added bonus, the build stages are also cacheable, which can considerably reduce build time. 

The following example shows a Go program taking advantage of multi-stage builds. Because the Golang runtime is compiled into the binary, only the binary and root certificates need to be copied to the blank slate image.

# Build stage: use the full Go toolchain to compile the application
FROM golang:1.21.5-alpine AS build
WORKDIR /
COPY go.* .
RUN go mod download
COPY . .
RUN go build -o my-app

# Final stage: copy only the binary and the root certificates onto an empty image
FROM scratch
COPY --from=build \
  /etc/ssl/certs/ca-certificates.crt \
  /etc/ssl/certs/ca-certificates.crt
COPY --from=build /my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/local/bin/my-app"]
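
Building and running the result is then a standard two-step process (the my-app image name is just an example):

> docker build -t my-app .
> docker run --rm my-app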

BuildKit

BuildKit, the current engine used by docker build, helps developers create minimal images thanks to its extensible, pluggable architecture. It provides the ability to specify alternative frontends (with the default being the familiar Dockerfile) to abstract and hide the complexity of creating distroless images. These frontends can accept more streamlined and declarative inputs for builds and can produce images that contain only the software needed for the application to run. 

The following example shows the input for mopy, a BuildKit frontend by Julian Goede for creating Python application images.

#syntax=cmdjulian/mopy
apiVersion: v1
python: 3.9.2
build-deps:
  - libopenblas-dev
  - gfortran
  - build-essential
envs:
  MYENV: envVar1
pip:
  - numpy==1.22
  - slycot
  - ./my_local_pip/
  - ./requirements.txt
labels:
  foo: bar
  fizz: ${mopy.sbom}
project: my-python-app/
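
Because the #syntax directive tells BuildKit which frontend image to use, a plain docker build can consume this file directly, with no hand-written Dockerfile; a sketch assuming the file above is saved as mopy.yaml:

> docker build -f mopy.yaml -t my-python-app .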

So, is your image really distroless?

Thanks to new tools for creating container images like multi-stage builds and BuildKit, it is now a lot more practical to create images that only contain the required software and its runtime dependencies. 

However, many images claiming to be distroless still include a shell (usually Bash) and/or BusyBox, which provides many of the commands a Linux distribution does — including wget — that can leave containers vulnerable to Living off the land (LOTL) attacks. This raises the question, “Why would an image trying to be distroless still include key parts of a Linux distribution?” The answer typically involves container initialization. 

Developers often have to make their applications configurable to meet the needs of their users. Most of the time, those configurations are not known at build time so they need to be configured at run time. Often, these configurations are applied using shell initialization scripts, which in turn depend on common Linux utilities such as sed, grep, cp, etc. When this is the case, the shell and utilities are only needed for the first few seconds of the container’s lifetime. Luckily, there is a way to create true distroless images while still allowing initialization using tools available from most container orchestrators: init containers.

Init containers

In Kubernetes, an init container is a container that starts and must complete successfully before the primary container can start. By using a non-distroless container as an init container that shares a volume with the primary container, the runtime environment and application can be configured before the application starts. 

The lifetime of that init container is short (often just a couple seconds), and it typically doesn’t need to be exposed to the internet. Much like multi-stage builds allow developers to separate the build-time dependencies from the runtime dependencies, init containers allow developers to separate initialization dependencies from the execution dependencies. 

The concept of an init container may be familiar if you are using relational databases, where an init container is often used to perform schema migration before a new version of an application is started.

Kubernetes example

Here are two examples of using init containers. First, using Kubernetes:

apiVersion: v1
kind: Pod
metadata:
  name: kubecon-postgress-pod
  labels:
    app.kubernetes.io/name: KubeConPostgress
spec:
  containers:
  - name: postgress
    image: laurentgoderre689/postgres-distroless
    securityContext:
      runAsUser: 70
      runAsGroup: 70
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  initContainers:
  - name: init-postgress
    image: postgres:alpine3.18
    env:
      - name: POSTGRES_PASSWORD
        valueFrom:
          secretKeyRef:
            name: kubecon-postgress-admin-pwd
            key: password
    command: ['docker-ensure-initdb.sh']
    volumeMounts:
    - name: db
      mountPath: /var/lib/postgresql/data/
  volumes:
  - name: db
    emptyDir: {}

- - - 

> kubectl apply -f pod.yml && kubectl get pods
pod/kubecon-postgress-pod created
NAME                    READY   STATUS     RESTARTS   AGE
kubecon-postgress-pod   0/1     Init:0/1   0          0s
> kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
kubecon-postgress-pod   1/1     Running   0          10s

Docker Compose example

The init container concept can also be emulated in Docker Compose for local development using service dependencies and conditions.

services:
  db:
    image: laurentgoderre689/postgres-distroless
    user: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data/
    depends_on:
      db-init:
        condition: service_completed_successfully

  db-init:
    image: postgres:alpine3.18
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data/
    user: postgres
    command: docker-ensure-initdb.sh

volumes:
  pgdata:

- - - 
> docker-compose up 
[+] Running 4/0
 ✔ Network compose_default      Created                                                                                                                      
 ✔ Volume "compose_pgdata"      Created                                                                                                                     
 ✔ Container compose-db-init-1  Created                                                                                                                      
 ✔ Container compose-db-1       Created                                                                                                                      
Attaching to db-1, db-init-1
db-init-1  | The files belonging to this database system will be owned by user "postgres".
db-init-1  | This user must also own the server process.
db-init-1  | 
db-init-1  | The database cluster will be initialized with locale "en_US.utf8".
db-init-1  | The default database encoding has accordingly been set to "UTF8".
db-init-1  | The default text search configuration will be set to "english".
db-init-1  | [...]
db-init-1 exited with code 0
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  starting PostgreSQL 16.1 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r10) 12.2.1 20220924, 64-bit
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db-1       | 2024-02-23 14:59:33.191 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db-1       | 2024-02-23 14:59:33.194 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1       | 2024-02-23 14:59:33.196 UTC [9] LOG:  database system was shut down at 2024-02-23 14:59:32 UTC
db-1       | 2024-02-23 14:59:33.198 UTC [1] LOG:  database system is ready to accept connections

As demonstrated by the previous example, an init container can be used alongside a container to remove the need for general-purpose software and allow the creation of true distroless images. 

Conclusion

This article explained how Docker build tools allow for the separation of build-time dependencies from run-time dependencies to create “distroless” images. For example, using init containers allows developers to separate the logic needed to configure a runtime environment from the environment itself and provide a more secure container. This approach also helps teams focus their efforts on the software they use and find a better balance between security and usability.

Learn more


Unseen Dangers Unveiled: Detecting Security Threats with Falco

October 16, 2023, 3:52 PM

Today we delve into the world of cybersecurity with Falco, a powerful open-source tool that detects and unveils unseen dangers. Learn how Falco can help businesses identify security threats in real time, providing invaluable protection against potential breaches and attacks.

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: manuscript/security/falco.sh
🔗 Falco: https://falco.org

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits you, and we’ll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below).

▬▬▬▬▬▬ 🚀 Livestreams & podcasts 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Follow me on Twitter: https://twitter.com/vfarcic
➡ Follow me on LinkedIn: https://www.linkedin.com/in/viktorfarcic/

Container Security and Why It Matters

By: Kat Yi
August 9, 2023, 2:08 PM

Are you thinking about container security? Maybe you are on a security team trying to manage rogue cloud resources. Perhaps you work in the DevOps space and know what container security is, but you want to figure out how to decrease the pain around security triaging your containers for everyone involved. 

In this post, we’ll look at security for containers in a scalable environment, how deployment to that environment can affect your rollout of container security, and how Docker can help.


What is container security?

Container security is knowing that a container image you run in your environment includes only the libraries, base image, and any custom bits you declare in your Dockerfile, and not malware or known vulnerabilities. (We’d also love to say no zero days, but such is the nature of the beast.)

You want to know that those libraries used to build your image and any base image behind it come from sources you expect — open source or otherwise — and are free from critical vulnerabilities, malware, and other surprises. 

The base image is usually a common image (for example, Alpine Linux, Ubuntu, or BusyBox) that is a building block upon which other companies add their own image layers. Think of an image layer as a step in the install process. Whenever you take a base image and add new libraries or steps to it, you are essentially creating a new image.
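
You can inspect those layers, and the step that created each one, with docker image history; a minimal sketch (the image name is just an example):

> docker image history python:3.12-slim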

We’ve talked about the most immediate piece of container security, the image layers, but how is the image built and what is the source of those image layers?

Container image provenance

Here’s where container security gets tricky: the image build and source tracking process. You want assurances that your images, libraries, and any base images you depend on contain what you expect them to and not anything nefarious. So you should care about image provenance: where an image gets built, who builds it, and where it gets stored. 

You should pay attention to any infrastructure or automation used to build your images, which typically means continuous integration (CI) tooling such as GitHub Actions, AWS CodeBuild, or CircleCI. You need to ensure any workloads running your image builds are on build environments with minimal access and potential attack surfaces. You need to consider who has access to your GitHub Actions runners, for example. Do you need to create a VPN connection from your runner to your cloud account? If so, what are the security protections on that VPN connection? Consider the confidentiality and integrity of your image pipeline carefully.

To put it more directly: Managing container provenance in cloud workloads can make deployments easier, but it can also make it easier to deploy malware at scale if you aren’t careful. The nature of the cloud is that it adds complexity, not necessarily security.

Software Bill of Materials (SBOM) attestations can also help ensure that only what you want is inside your images. With an SBOM, you can review a list of all the libraries and dependencies used to build your image and ensure the versioning and content match what you expect. Docker provides this via the docker sbom CLI command, and Docker BuildKit provides it in versions newer than 0.11.
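
A quick sketch of generating an SBOM for an image you already have (the image name is an example):

> docker sbom nginx:latest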

Other considerations with SBOM attestations include attestation provider trust and protection from man-in-the-middle attacks, such as replacing libraries in the image. Docker is working to create signed SBOM attestations for images to create strong assurances around SBOM to help strengthen this part of image security.

You also want to consider software composition analysis (SCA) against your images to ensure open source tooling and licenses are as expected. Docker Official Images, for example, have a certified seal of provenance behind them for your base image, which provides assurance around a base image you might be using.

Vulnerability and malware scanning

And what about potential CVEs and malware? How do you scan your images at scale for those issues? 

A number of static scanning tools are available for CVE scanning, and some provide dynamic malware scanning. When researching tools in this space, consider what you use for your image repository, such as Docker Hub, Amazon Elastic Container Registry (ECR), Artifact Registry, or an on-premises/in-colocation option like Nexus. Depending on the dynamics and security controls you have in place on your registry, one tooling option might make more sense than another. For example, AWS ECR offers some static vulnerability scanning out of the box. Some other options bundle software composition analysis (SCA) scanning of images as well. 

The trick is to find a tool with the right signal-to-noise mix for your team. For example, you might want static scanning but minimal false positives and the ability to create exclusions. 

As with any static vulnerability scanning tool, the Common Vulnerability Scoring System (CVSS) score of a vulnerability is just a starting point. Only you and your team can determine the exploitability, possible risks, and attack surface of a particular vulnerability and whether those factors outweigh the potential effects of upgrading or changing an image deployed at scale in your environment.

In other words, a scanning tool might find some high or critical (per CVSS scoring) vulnerabilities in some of your images. Still, those vulnerabilities might not be exploitable because the affected images are only used internally inside a virtual private cloud (VPC) in your environment with no external access. But you’d want to ensure that the image stays internal and isn’t used for production. So guardrails, monitoring, and gating around the use of that image and it staying in internal workloads only is a must. 

Finally, imagine an image that is pervasive and used across all your workloads. The effort to upgrade that image might take several sprint cycles for your engineering teams to safely deploy and require service downtime as you unravel the library dependencies. Regarding vulnerability rating for the two examples — an internal-only image and a pervasive image that is difficult to upgrade — you might want to lower the priority of the vulnerability in the former and slowly track progress toward remediating the latter. 

Docker’s Security Team is intimately familiar with two of the biggest blockers security teams face: time and resources. Your team might not be able to triage and remediate all vulnerabilities across production, development, and staging environments, especially if your team is just starting its journey with container security. So start with what you can and must do something about: production images.

Production vs. non-production

Only container images that have gone through appropriate approval and automation workflows should be deployed in production. Like any mature CI/CD workflow, this means thorough testing in non-production environments, scanning before release to production, and monitoring and guardrails around images that are already live in production with things like cloud resource tagging, version control, and appropriate role-based access control around who can approve an image’s deployment to production. 

At its root, this means that security teams that have not previously waded into the infrastructure or DevOps team’s ocean of work in your company’s cloud accounts should do so now. Just as DevOps culture has caused a shift for developers in handling infrastructure, scaling, and service decisions in the cloud, the same shift is happening in the security community with DevSecOps culture and Security Engineering. It is in the middle of this intersection where container security resides.

Not only does your tool choice matter in terms of best fit for your environment’s container security landscape, but your ability to collaborate with your infrastructure, engineering, and DevOps teams matters even more for this work. To reiterate: to get a good handle on gating production deployments, and to have good automation and monitoring tied to those production deployments and resources, security teams must familiarize themselves with this space and get comfortable in this intersection. Good tooling can help make all the difference in fostering that culture of collaboration, especially for a security team new to this space.

Container security tools: What to look for

Like any well-thought-out tool selection, sometimes what matters most is not the number of bells and whistles a tool offers but the tool’s fit to your organization’s needs and gaps.

Avoid container security tools that promise to be the silver bullet. Instead, think of tools that will help your team conquer small challenges today and work to build on goals for the larger challenges down the road. (Security folks know that any tool on the market promising to be a silver bullet is just selling something and isn’t a reality with the ever-changing threat landscape.)

In short, tools for container security should enable your workflow, build trust, and facilitate cross-team collaboration from Engineering to Security to DevOps, not become a landscape of noise and overwhelming visuals for your engineers. And here’s where Docker Scout can help.

Docker Scout

Docker engineers have been working on a new product to help increase container security: Docker Scout. Scout gives you the list of discovered vulnerabilities in your container images and offers guidance for remediation in an iterative small-improvements style. You can compare your scores from one deployment to the next and show improvement to create a sense of accomplishment for your teams, not an overwhelming bombardment of vulnerabilities and risk that seems insurmountable.

Screen showing image comparison with two images to see the differences in vulnerabilities and packages.

Docker Scout lets you set target goals for your images and markers for iterative improvement. You can define different goals for production images versus development or staging images so that each environment gets the level of security it needs.
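
As a hedged sketch, the same kind of comparison is available from the Scout CLI, assuming hypothetical tags for two successive builds of an image:

> docker scout compare myorg/my-app:v2 --to myorg/my-app:v1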

Conclusion

As with most security problems, there is no silver bullet with container security. The technical, operational, and organizational moving pieces that go into protecting your company’s container images often reside at the boundaries between teams, functions, and responsibilities. This adds complexity to an already complex problem. Rather than further adding to the burdens created by this complexity, you should look for tools that enable your teams to work together and reach a deeper understanding of where goals, risks, and priorities overlap and coexist.

Even more importantly, look for container security solutions that are clear about what they can offer you and extend help in areas where they do not have offerings. 

Whether you are a security team member new to the ocean of DevOps and container security or have been in these security waters for a while, Docker is here to help support you and get to more stable waters. We are beside you in this ocean and trying to make the space better for ourselves, our customers, and developers who use Docker all over the world.

Learn more
