
Enhancing Container Security with Docker Scout and Secure Repositories

By: Jay Schmidt
November 25, 2024 at 14:43

Docker Scout simplifies the integration with container image repositories, improving the efficiency of container image approval workflows without disrupting or replacing current processes. Positioned outside the repository’s stringent validation framework, Docker Scout serves as a proactive measure to significantly reduce the time needed for an image to gain approval. 

By shifting security checks left and integrating Docker Scout into the early stages of the development cycle, issues are identified and addressed directly on the developer’s machine.
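
For example, a developer can check an image locally with the Docker Scout CLI before it ever leaves their machine. A minimal sketch, assuming a hypothetical image name (myorg/myapp:latest) and an organization (myorg) with policies configured:

$ docker scout quickview myorg/myapp:latest           # at-a-glance vulnerability summary
$ docker scout cves myorg/myapp:latest                # detailed CVE report
$ docker scout policy --org myorg myorg/myapp:latest  # evaluate against the org's policies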

Minimizing vulnerabilities 

This leftward shift in security accelerates the development process by keeping developers in flow, providing immediate feedback on policy violations at the point of development. As a result, images are secured and reviewed for compliance before being pushed into the continuous integration/continuous deployment (CI/CD) pipeline, reducing reliance on resource-heavy, consumption-based scans (Figure 1). By resolving issues earlier, Docker Scout minimizes the number of vulnerabilities detected during the CI/CD process, freeing up the security team to focus on higher-priority tasks.

Figure 1: Sample secure repository pipeline.

Additionally, the Docker Scout console allows the security team to define custom security policies and manage VEX (Vulnerability Exploitability eXchange) statements. VEX is a standard that lets vendors and other parties communicate the exploitability status of vulnerabilities, enabling the creation of justifications for including software tied to known Common Vulnerabilities and Exposures (CVEs).
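
For example, a security team can record that a flagged CVE does not affect a given image. A minimal OpenVEX sketch (the CVE identifier, image reference, and URLs are illustrative placeholders):

{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://example.com/vex/2024-0001",
  "author": "Example Security Team",
  "timestamp": "2024-11-25T14:43:00Z",
  "version": 1,
  "statements": [
    {
      "vulnerability": { "name": "CVE-2024-12345" },
      "products": [
        { "@id": "pkg:docker/myorg/myapp@1.0.0" }
      ],
      "status": "not_affected",
      "justification": "vulnerable_code_not_in_execute_path"
    }
  ]
}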

This feature enables seamless collaboration between development and security teams, ensuring that developers are working with up-to-date compliance guidelines. The Docker Scout console can also feed critical data into existing security tooling, enriching the organization’s security posture with more comprehensive insights and enhancing overall protection (Figure 2).

Figure 2: Sample secure repository pipeline with Docker Scout.

How to secure image repositories

A secure container image repository provides digitally signed, OCI-compliant images that are rebuilt and rescanned nightly. These repositories are typically used in highly regulated or security-conscious environments, offering a wide range of container images, from open source software to commercial off-the-shelf (COTS) products. Each image in the repository undergoes rigorous security assessments to ensure compliance with strict security standards before being deployed in restricted or sensitive environments.

Key components of the repository include a hardened source code repository and an OCI-compliant registry (Figure 3). All images are continuously scanned for vulnerabilities, stored secrets, problematic code, and compliance with various standards. Each image is assigned a score upon rebuild, determining its compliance and suitability for use. Scanning reports and justifications for any potential issues are typically handled using the VEX format.

Figure 3: Key components of the repository include a hardened source code repository and an OCI-compliant registry.

Why use a hardened image repository?

A hardened image repository mitigates the security risks associated with deploying containers in sensitive or mission-critical environments. Traditional software deployment can expose organizations to vulnerabilities and misconfigurations that attackers can exploit. By enforcing a strict set of requirements for container images, the hardened image repository ensures that images meet the necessary security standards before deployment. Rebuilding and rescanning each image daily allows for continuous monitoring of new vulnerabilities and emerging attack vectors.

Using pre-vetted images from a hardened repository also streamlines the development process, reducing the load on development teams and enabling faster, safer deployment.

In addition to addressing security risks, the repository also ensures software supply chain security by incorporating software bills of materials (SBOMs) with each image. The SBOM of a container image provides an inventory of all the components used to build the image, including operating system packages, application-specific dependencies and their versions, and license information. By maintaining a robust vetting process, the repository guarantees that all software components are traceable, verifiable, and tamper-free — essential for ensuring the integrity and reliability of deployed software.
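
As a sketch of how this looks with Docker tooling (the image name is hypothetical), BuildKit can attach an SBOM attestation at build time, and Docker Scout can read the SBOM back:

$ docker buildx build --sbom=true -t myorg/myapp:1.0.0 .   # requires a builder that supports attestations
$ docker scout sbom myorg/myapp:1.0.0                      # prints the image's SBOM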

Who uses a hardened image repository?

The main users of a hardened container image repository include internal developers responsible for creating applications, developers working on utility images, and those responsible for building base images for other containerized applications. Note that the titles for these roles can vary by organization.

  • Application developers use the repository to ensure that the images their applications are built upon meet the required security and compliance standards.
  • DevOps engineers are responsible for building and maintaining the utility images that support various internal operations within the organization.
  • Platform developers create and maintain secure base images that other teams can use as a foundation for their containerized applications.

Daily builds

One challenge with using a hardened image repository is the time needed to approve images. Daily rebuilds are conducted to assess each image for vulnerabilities and policy violations, but issues can emerge, requiring developers to make repeated passes through the pipeline. Because rebuilds are typically done at night, this process can result in delays for development teams, as they must wait for the next rebuild cycle to resolve issues.

Enter Docker Scout

Integrating Docker Scout into the pre-submission phase can reduce the number of issues that enter the pipeline. This proactive approach helps speed up the submission and acceptance process, allowing development teams to catch issues before the nightly scans. 

Vulnerability detection and management

  • Requirement: Images must be free of known vulnerabilities at the time of submission to avoid delays in acceptance.
  • Docker Scout contribution:
    • Early detection: Docker Scout can scan Docker images during development to detect vulnerabilities early, allowing developers to resolve issues before submission.
    • Continuous analysis: Docker Scout continually reviews uploaded SBOMs, providing early warnings for new critical CVEs and ensuring issues are addressed outside of the nightly rebuild process.
    • Justification handling: Docker Scout supports VEX for handling exceptions. This can streamline the justification process, enabling developers to submit justifications for potential vulnerabilities more easily.

Security best practices and configuration management

  • Requirement: Images must follow security best practices and configuration guidelines, such as using secure base images and minimizing the attack surface.
  • Docker Scout contribution:
    • Security posture enhancement: Docker Scout allows teams to set policies that align with repository guidelines, checking for policy violations such as disallowed software or unapproved base images.

Compliance with dependency management

  • Requirement: All dependencies must be declared, and internet access during the build process is usually prohibited.
  • Docker Scout contribution:
    • Dependency scanning: Docker Scout identifies outdated or vulnerable libraries included in the image.
    • Automated reports: Docker Scout generates security reports for each dependency, which can be used to cross-check the repository’s own scanning results (see the example below).
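
As an example of the kind of cross-checking described above (the image name is hypothetical), Docker Scout can list fixable CVEs and suggest base image refreshes:

$ docker scout cves --only-fixed myorg/myapp:1.0.0    # only CVEs with an available fix
$ docker scout recommendations myorg/myapp:1.0.0      # suggested base image updates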

Documentation and provenance

  • Requirement: Images must include detailed documentation on their build process, dependencies, and configurations for auditing purposes.
  • Docker Scout contribution:
    • Documentation support: Docker Scout contributes to security documentation by providing data on the scanned image, which can be used as part of the official documentation submitted with the image.

Continuous compliance

  • Requirement: Even after an image is accepted into the repository, it must remain compliant with new security standards and vulnerability disclosures.
  • Docker Scout contribution:
    • Ongoing monitoring: Docker Scout continuously monitors images, identifying new vulnerabilities as they emerge, ensuring that images in the repository remain compliant with security policies.

By utilizing Docker Scout in these areas, developers can ensure their images meet the repository’s rigorous standards, thereby reducing the time and effort required for submission and review. This approach helps align development practices with organizational security objectives, enabling faster deployment of secure, compliant containers.

Integrating Docker Scout into the CI/CD pipeline

Integrating Docker Scout into an organization’s CI/CD pipeline can enhance image security from the development phase through to deployment. By incorporating Docker Scout into the CI/CD process, the organization can automate vulnerability scanning and policy checks before images are pushed into production, significantly reducing the risk of deploying insecure or non-compliant images.

  • Integration with build pipelines: During the build stage of the CI/CD pipeline, Docker Scout can be configured to automatically scan Docker images for vulnerabilities and adherence to security policies. The integration typically involves adding a Docker Scout scan as a step in the build job, for example through a GitHub Action (a workflow sketch follows this list). If Docker Scout detects issues such as outdated dependencies, vulnerabilities, or policy violations, the build can be halted and immediate feedback provided to developers. This early detection helps resolve issues long before images are pushed to the hardened image repository.
  • Validation in the deployment pipeline: As images move from development to production, Docker Scout can be used to perform final validation checks. This step ensures that any security issues that might have arisen since the initial build have been addressed and that the image is compliant with the latest security policies. The deployment process can be gated based on Docker Scout’s reports, preventing insecure images from being deployed. Additionally, Docker Scout’s continuous analysis of SBOMs means that even after deployment, images can be monitored for new vulnerabilities or compliance issues, providing ongoing protection throughout the image lifecycle.
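
As a sketch of the build-stage integration described above, the following GitHub Actions job builds an image and fails the build when Docker Scout finds critical or high CVEs. The repository layout, image name, secrets, and severity thresholds are assumptions; check the docker/scout-action reference for the current inputs:

name: build-and-scan
on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Docker Scout requires an authenticated Docker account
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_PAT }}
      - name: Build image
        run: docker build -t myorg/myapp:${{ github.sha }} .
      - name: Fail on critical/high CVEs
        uses: docker/scout-action@v1
        with:
          command: cves
          image: myorg/myapp:${{ github.sha }}
          only-severities: critical,high
          exit-code: true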

By embedding Docker Scout directly into the CI/CD pipeline (Figure 1), the organization can maintain a proactive approach to security, shifting left in the development process while ensuring that each image deployed is safe, compliant, and up-to-date.

Defense in depth and Docker Scout’s role

In any organization that values security, adopting a defense-in-depth strategy is essential. Defense in depth is a multi-layered approach to security, ensuring that if one layer of defense is compromised, additional safeguards are in place to prevent or mitigate the impact. This strategy is especially important in environments that handle sensitive data or mission-critical operations, where even a single vulnerability can have significant consequences.

Docker Scout plays a vital role in this defense-in-depth strategy by providing a proactive layer of security during the development process. Rather than relying solely on post-submission scans or production monitoring, Docker Scout integrates directly into the development and CI/CD workflows (Figure 2), allowing teams to catch and resolve security issues early. This early detection prevents issues from escalating into more significant risks later in the pipeline, reducing the burden on the SecOps team and speeding up the deployment process.

Furthermore, Docker Scout’s continuous monitoring capabilities mean that images are not only secure at the time of deployment but remain compliant with evolving security standards and new vulnerabilities that may arise after deployment. This ongoing vigilance forms a crucial layer in a defense-in-depth approach, ensuring that security is maintained throughout the entire lifecycle of the container image.

By integrating Docker Scout into the organization’s security processes, teams can build a more resilient, secure, and compliant software environment, ensuring that security is deeply embedded at every stage from development to deployment and beyond.

Learn more

Mastering Kubernetes Scaling: From Manual Adjustments to Intelligent Automation in just 8 steps.

By: Adesoji Alu
October 16, 2024 at 16:56
Scaling applications in Kubernetes is essential for maintaining optimal performance, ensuring high availability, and managing resource utilization effectively. Whether you’re handling fluctuating traffic or optimizing costs, understanding how to scale your Kubernetes deployments is crucial. In this blog, we’ll delve into the intricacies of scaling in Kubernetes, explore manual and automated scaling techniques using kubectl […]

Docker Best Practices: Using ARG and ENV in Your Dockerfiles

By: Jay Schmidt
October 16, 2024 at 14:35

If you’ve worked with Docker for any length of time, you’re likely accustomed to writing or at least modifying a Dockerfile. This file can be thought of as a recipe for a Docker image; it contains both the ingredients (base images, packages, files) and the instructions (various RUN, COPY, and other commands that help build the image).

In most cases, Dockerfiles are written once, modified seldom, and used as-is unless something about the project changes. Because these files are created or modified on such an infrequent basis, developers tend to rely on only a handful of frequently used instructions — RUN, COPY, and EXPOSE being the most common. Other instructions can enhance your image, making it more configurable, manageable, and easier to maintain. 

In this post, we will discuss the ARG and ENV instructions and explore why, how, and when to use them.

ARG: Defining build-time variables

The ARG instruction allows you to define variables that are accessible during the build stage but not available after the image is built. For example, the following Dockerfile makes a variable defined with ARG available during the build process:

FROM ubuntu:latest
ARG THEARG="foo"
RUN echo $THEARG
CMD ["env"]

If we run the build, we will see the echo foo line in the output:

$ docker build --no-cache -t argtest .
[+] Building 0.4s (6/6) FINISHED                                                                     docker:desktop-linux
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo foo                                                                                               0.1s
 => exporting to image                                                                                               0.0s
<-- SNIP -->

However, if we run the image and inspect the output of the env command, we do not see THEARG:

$ docker run --rm argtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=d19f59677dcd
HOME=/root

ENV: Defining build and runtime variables

Unlike ARG, the ENV instruction allows you to define a variable that can be accessed both at build time and at runtime:

FROM ubuntu:latest
ENV THEENV="bar"
RUN echo $THEENV
CMD ["env"]

If we run the build, we will see the echo bar line in the output:

$ docker build -t envtest .
[+] Building 0.8s (7/7) FINISHED                                                                     docker:desktop-linux
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo bar                                                                                               0.1s
 => exporting to image                                                                                               0.0s
<-- SNIP -->

If we run the image and inspect the output of the env command, we do see THEENV set, as expected:

$ docker run --rm envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=f53f1d9712a9
THEENV=bar
HOME=/root

Overriding ARG

A more advanced use of the ARG instruction is to serve as a placeholder that is then updated at build time:

FROM ubuntu:latest
ARG THEARG
RUN echo $THEARG
CMD ["env"]

If we build the image, we see that we are missing a value for $THEARG:

$ docker build -t argtest .
<-- SNIP -->
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo $THEARG                                                                                           0.1s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

However, we can pass a value for THEARG on the build command line using the --build-arg argument. Notice that we now see THEARG has been replaced with foo in the output:

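$ docker build --build-arg THEARG=foo -t argtest .
<-- SNIP -->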
 => CACHED [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => [2/2] RUN echo foo                                                                                               0.1s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

The same can be done in a Docker Compose file by using the args key under the build key. Note that these can be set as a mapping (THEARG: foo) or a list (- THEARG=foo):

services:
  argtest:
    build:
      context: .
      args:
        THEARG: foo

If we run docker compose up --build, we can see that THEARG has been replaced with foo in the output:

$ docker compose up --build
<-- SNIP -->
 => [argtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [argtest 2/2] RUN echo foo                                                                                0.0s
 => [argtest] exporting to image                                                                                     0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->
Attaching to argtest-1
argtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
argtest-1  | HOSTNAME=d9a3789ac47a
argtest-1  | HOME=/root
argtest-1 exited with code 0

Overriding ENV

You can also override ENV at build time; this is slightly different from how ARG is overridden. For example, you cannot supply a key without a value with the ENV instruction, as shown in the following example Dockerfile:

FROM ubuntu:latest
ENV THEENV
RUN echo $THEENV
CMD ["env"]

When we try to build the image, we receive an error:

$ docker build -t envtest .
[+] Building 0.0s (1/1) FINISHED                                                                     docker:desktop-linux
 => [internal] load build definition from Dockerfile                                                                 0.0s
 => => transferring dockerfile: 98B                                                                                  0.0s
Dockerfile:3
--------------------
   1 |     FROM ubuntu:latest
   2 |
   3 | >>> ENV THEENV
   4 |     RUN echo $THEENV
   5 |
--------------------
ERROR: failed to solve: ENV must have two arguments

However, we can remove the ENV instruction from the Dockerfile:

FROM ubuntu:latest
RUN echo $THEENV
CMD ["env"]

This allows us to build the image:

$ docker build -t envtest .
<-- SNIP -->
 => [1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [2/2] RUN echo $THEENV                                                                                    0.0s
 => exporting to image                                                                                               0.0s
 => => exporting layers                                                                                              0.0s
<-- SNIP -->

Then we can pass an environment variable via the docker run command using the -e flag:

$ docker run --rm -e THEENV=bar envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=638cf682d61f
THEENV=bar
HOME=/root

Although the .env file is usually associated with Docker Compose, it can also be used with docker run.

$ cat .env
THEENV=bar

$ docker run --rm --env-file ./.env envtest
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=59efe1003811
THEENV=bar
HOME=/root

This can also be done using Docker Compose by using the environment key. Note that we use the variable format for the value:

services:
  envtest:
    build:
      context: .
    environment:
      THEENV: ${THEENV}

If we do not supply a value for THEENV, a warning is thrown:

$ docker compose up --build
WARN[0000] The "THEENV" variable is not set. Defaulting to a blank string.
<-- SNIP -->
 => [envtest 1/2] FROM docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04  0.0s
 => => resolve docker.io/library/ubuntu:latest@sha256:8a37d68f4f73ebf3d4efafbcf66379bf3728902a8038616808f04e34a9ab6  0.0s
 => CACHED [envtest 2/2] RUN echo ${THEENV}                                                                          0.0s
 => [envtest] exporting to image                                                                                     0.0s
<-- SNIP -->
 ✔ Container dd-envtest-1    Recreated                                                                               0.1s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=816d164dc067
envtest-1  | THEENV=
envtest-1  | HOME=/root
envtest-1 exited with code 0

The value for our variable can be supplied in several different ways, as follows:

  • On the compose command line:
$ THEENV=bar docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Recreated                                                                               0.1s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0
  • In the shell environment on the host system:
$ export THEENV=bar
$ docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Created                                                                                 0.0s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0
  • In the special .env file:
$ cat .env
THEENV=bar

$ docker compose up

[+] Running 2/0
 ✔ Synchronized File Shares                                                                                          0.0s
 ✔ Container dd-envtest-1    Created                                                                                 0.0s
Attaching to envtest-1
envtest-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
envtest-1  | HOSTNAME=20f67bb40c6a
envtest-1  | THEENV=bar
envtest-1  | HOME=/root
envtest-1 exited with code 0

Finally, when running services directly using docker compose run, you can use the -e flag to override the .env file.

$ docker compose run -e THEENV=bar envtest

[+] Creating 1/0
 ✔ Synchronized File Shares                                                                                          0.0s
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=219e96494ddd
TERM=xterm
THEENV=bar
HOME=/root

The tl;dr

If you need to access a variable during the build process but not at runtime, use ARG. If you need to access the variable both during the build and at runtime, or only at runtime, use ENV.

To decide between them, consider the following flow (Figure 1):

Figure 1: Decision flow for choosing between ARG and ENV.

Both ARG and ENV can be overridden from the command line in docker run and docker compose, giving you a powerful way to dynamically update variables and build flexible workflows.
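
A common pattern combines the two instructions: an ARG seeds an ENV, so a value supplied at build time persists into the running container. A minimal sketch (the variable name is illustrative):

FROM ubuntu:latest
ARG APP_VERSION="0.0.0"
ENV APP_VERSION=$APP_VERSION
CMD ["env"]

Building with docker build --build-arg APP_VERSION=1.2.3 -t apptest . and then running docker run --rm apptest will show APP_VERSION=1.2.3 in the output.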

Learn more

Docker Best Practices: Using Tags and Labels to Manage Docker Image Sprawl

By: Jay Schmidt
October 1, 2024 at 12:51

With many organizations moving to container-based workflows, keeping track of the different versions of your images can become a problem. Even smaller organizations can have hundreds of container images spanning from one-off development tests, through emergency variants to fix problems, all the way to core production images. This leads us to the question: How can we tame our image sprawl while still rapidly iterating our images?

A common misconception is that by using the “latest” tag, you are guaranteeing that you are pulling the latest version of the image. Unfortunately, this assumption is wrong — all latest means is “the last image pushed to this registry.”

Read on to learn more about how to avoid this pitfall when using Docker and how to get a handle on your Docker images.

Using tags

One way to address this issue is to use tags when creating an image. Adding one or more tags to an image helps you remember what it is intended for and helps others as well. One approach is always to tag images with their semantic versioning (semver), which lets you know what version you are deploying. This sounds like a great approach, and, to some extent, it is, but there is a wrinkle.

Unless you’ve configured your registry for immutable tags, tags can be changed. For example, you could tag my-great-app as v1.0.0 and push the image to the registry. However, nothing stops your colleague from pushing their updated version of the app with tag v1.0.0 as well. Now that tag points to their image, not yours. If you add in the convenience tag latest, things get a bit more murky.

Let’s look at an example:

FROM busybox:stable-glibc

# Create a script that outputs the version
RUN echo -e "#!/bin/sh\n" > /test.sh && \
    echo "echo \"This is version 1.0.0\"" >> /test.sh && \
    chmod +x /test.sh

# Set the entrypoint to run the script
ENTRYPOINT ["/bin/sh", "/test.sh"]

We build the above with docker build -t tagexample:1.0.0 . and run it.

$ docker run --rm tagexample:1.0.0
This is version 1.0.0

What if we run it without a tag specified?

$ docker run --rm tagexample
Unable to find image 'tagexample:latest' locally
docker: Error response from daemon: pull access denied for tagexample, repository does not exist or may require 'docker login'.
See 'docker run --help'.

Now we build with docker build -t tagexample . without specifying a version tag and run it.

$ docker run --rm tagexample
This is version 1.0.0

The latest tag is always applied to the most recent build or push that did not specify an explicit tag. So, in our first test, we had one image in the repository with a tag of 1.0.0, but because we had not yet built an image without specifying a version, the latest tag did not point to an image. However, once we build an image without an explicit tag, the latest tag is automatically applied to it.

Although it is tempting to always pull the latest tag, it’s rarely a good idea. The logical assumption — that this points to the most recent version of the image — is flawed. For example, another developer can update the application to version 1.0.1, build it with the tag 1.0.1, and push it. This results in the following:

$ docker run --rm tagexample:1.0.1
This is version 1.0.1

$ docker run --rm tagexample:latest
This is version 1.0.0

If you made the assumption that latest pointed to the highest version, you’d now be running an out-of-date version of the image.

The other issue is that there is no mechanism in place to prevent someone from inadvertently pushing with the wrong tag. For example, we could create another update to our code bringing it up to 1.0.2. We update the code, build the image, and push it — but we forget to change the tag to reflect the new version. Although it’s a small oversight, this action results in the following:

$ docker run --rm tagexample:1.0.1
This is version 1.0.2

Unfortunately, this happens all too frequently.

Using labels

Because we can’t trust tags, how should we ensure that we are able to identify our images? This is where the concept of adding metadata to our images becomes important.

The first attempt at using metadata to help manage images was the MAINTAINER instruction. This instruction sets the “Author” field (org.opencontainers.image.authors) in the generated image. However, this instruction has been deprecated in favor of the more powerful LABEL instruction. Unlike MAINTAINER, the LABEL instruction allows you to set arbitrary key/value pairs that can then be read with docker inspect as well as other tooling.

Unlike with tags, labels become part of the image, and when implemented properly, can provide a much better way to determine the version of an image. To return to our example above, let’s see how the use of a label would have made a difference.

To do this, we add the LABEL instruction to the Dockerfile, along with the key version and value 1.0.2.

FROM busybox:stable-glibc

LABEL version="1.0.2"

# Create a script that outputs the version
RUN echo -e "#!/bin/sh\n" > /test.sh && \
    echo "echo \"This is version 1.0.2\"" >> /test.sh && \
    chmod +x /test.sh

# Set the entrypoint to run the script
ENTRYPOINT ["/bin/sh", "/test.sh"]

Now, even if we make the same mistake as above and mistakenly tag the image as version 1.0.1, we have a way to check which version we are using that does not involve running the container.

$ docker inspect --format='{{json .Config.Labels}}' tagexample:1.0.1
{"version":"1.0.2"}

Best practices

Although you can use any key/value as a LABEL, there are recommendations. The OCI provides a set of suggested labels within the org.opencontainers.image namespace, as shown in the following list:

  • org.opencontainers.image.created: The date and time on which the image was built (string, RFC 3339 date-time).
  • org.opencontainers.image.authors: Contact details of the people or organization responsible for the image (freeform string).
  • org.opencontainers.image.url: URL to find more information on the image (string).
  • org.opencontainers.image.documentation: URL to get documentation on the image (string).
  • org.opencontainers.image.source: URL to the source code for building the image (string).
  • org.opencontainers.image.version: Version of the packaged software (string).
  • org.opencontainers.image.revision: Source control revision identifier for the image (string).
  • org.opencontainers.image.vendor: Name of the distributing entity, organization, or individual (string).
  • org.opencontainers.image.licenses: License(s) under which contained software is distributed (string, SPDX License List).
  • org.opencontainers.image.ref.name: Name of the reference for a target (string).
  • org.opencontainers.image.title: Human-readable title of the image (string).
  • org.opencontainers.image.description: Human-readable description of the software packaged in the image (string).

Because LABEL takes any key/value, it is also possible to create custom labels. For example, labels specific to a team within a company could use the com.myorg.myteam namespace. Isolating these to a specific namespace ensures that they can easily be related back to the team that created the label.
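
Putting this into practice, the earlier Dockerfile could carry both OCI labels and a custom team label; the values here are illustrative:

FROM busybox:stable-glibc

LABEL org.opencontainers.image.title="tagexample" \
      org.opencontainers.image.version="1.0.2" \
      org.opencontainers.image.authors="you@example.com" \
      com.myorg.myteam.buildinfo="nightly"

Labels can also be added or overridden at build time with the --label flag on docker build, which is useful for injecting values such as the source control revision from CI.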

Final thoughts

Image sprawl is a real problem for organizations, and, if not addressed, it can lead to confusion, rework, and potential production problems. By using tags and labels in a consistent manner, it is possible to eliminate these issues and provide a well-documented set of images that make work easier and not harder.

Learn more

Announcing Upgraded Docker Plans: Simpler, More Value, Better Development and Productivity 

September 12, 2024 at 13:09

At Docker, our mission is to empower development teams by providing the tools they need to ship secure, high-quality apps — FAST. Over the past few years, we’ve continually added value for our customers, responding to the evolving needs of individual developers and organizations alike. Today, we’re excited to announce significant updates to our Docker subscription plans that will deliver even more value, flexibility, and power to your development workflows.

We’ve listened closely to our community, and the message is clear: Developers want tools that meet their current needs and evolve with new capabilities to meet their future needs. 

That’s why we’ve revamped our plans to include access to ALL the tools our most successful customers are leveraging — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud. Our new unified suite makes it easier for development teams to access everything they need under one subscription with included consumption for each new product and the ability to add more as they need it. This gives every paid user full access, including consumption-based options, allowing developers to scale resources as their needs evolve. Whether customers are individual developers, members of small teams, or work in large enterprises, the refreshed Docker Personal, Docker Pro, Docker Team, and Docker Business plans ensure developers have the right tools at their fingertips.

These changes increase access to Docker Hub across the board, bring more value into Docker Desktop, and grant access to the additional value and new capabilities we’ve delivered to development teams over the past few years. From Docker Scout’s advanced security and software supply chain insights to Docker Build Cloud’s productivity-generating cloud build capabilities, Docker provides developers with the tools to build, deploy, and verify applications faster and more efficiently.

Areas we’ve invested in during the past year include:

  • The world’s largest container registry. To date, Docker has invested more than $100 million in Docker Hub, which currently stores over 60 petabytes of data and handles billions of pulls each month. We have improved content discoverability, in-depth image analysis, and image lifecycle management, and added an even broader range of verified, high-assurance content on Docker Hub. 
  • Improved insights. From Builds View to inspecting GitHub Actions builds to Build Checks to Scout health scores, we’re giving teams more visibility into their usage and insights to improve their development outcomes. We have additional Docker Desktop insights coming later this year.
  • Securing the software supply chain. In October 2023, we launched Docker Scout, allowing developers to continuously address security issues before they hit production through policy evaluation and recommended remediations, and track the SBOM of their software. We later introduced new ways for developers to quickly assess image health and accelerate application security improvements across the software supply chain.
  • Container-based testing automation. In December 2023, we acquired AtomicJar, makers of Testcontainers, adding container-based testing automation to our portfolio. Testcontainers Cloud offers enterprise features and a scalable, cloud-based infrastructure that provides a consistent Testcontainers experience across the org and centralizes monitoring.
  • Powerful cloud-based builders. In January 2024, we launched Docker Build Cloud, combining powerful, native ARM & AMD cloud builders with shared cache that accelerates build times by up to 39x.
  • Security, control, and compliance for businesses. For our Docker Business subscribers, we’ve enhanced security and compliance features, ensuring that large teams can work securely and efficiently. Role-based access control (RBAC), SOC 2 Type 2 compliance, centralized management, and compliance reporting tools are just a few of the features that make Docker Business the best choice for enterprise-grade development environments. And soon, we are rolling out organizational access tokens to make developer access easier at the organizational level, enhancing security and efficiency.
  • Empowering developers to build AI applications. From introducing a new GenAI Stack to our extension for GitHub Copilot and our partnership with NVIDIA to our series of AI tips content, Docker is simplifying AI application development for our community. 

As we introduce new features and continue to provide — and improve on — the world’s largest container registry, the resources to do so also grow. With the rollout of our unified suites, we’re also updating our pricing to reflect the additional value. Here’s what’s changing at a high level: 

  • Docker Business pricing stays the same but gains the additional value and features announced today.
  • Docker Personal remains — and will always remain — free. This plan will continue to be improved upon as we work to grant access to a container-first approach to software development for all developers. 
  • Docker Pro will increase from $5/month to $9/month, and Docker Team prices will increase from $9/user/month to $15/user/month (with annual discounts).
  • We’re introducing image pull and storage limits for Docker Hub. This will impact less than 3% of accounts, the highest commercial consumers. For many of our Docker Team and Docker Business customers with Service Accounts, the new higher image pull limits will eliminate previously incurred fees.   
  • Docker Build Cloud minutes and Docker Scout analyzed repos are now included, providing enough minutes and repos to enhance the productivity of a development team throughout the day.  
  • We’re implementing consumption-based pricing for all integrated products, including Docker Hub, to provide flexibility and scalability beyond the plans.  

More value at every level

Our updated plans are packed with more features, higher usage limits, and simplified pricing, offering greater value at every tier. Our updated plans include: 

  • Docker Desktop: We’re expanding on Docker Desktop as the industry-leading container-first development solution with advanced security features, seamless cloud-native compatibility, and tools that accelerate development while supporting enterprise-grade administration.
  • Docker Hub: Docker subscriptions cover Hub essentials, such as private and public repo usage. To ensure that Docker Hub remains sustainable and continues to grow as the world’s largest container registry, we’re introducing consumption-based pricing for image pulls and storage. This update also includes enhanced usage monitoring tools, making it easier for customers to understand and manage usage.
The Pulls Usage dashboard is now live on Docker Hub, allowing customers to see an organization’s Hub pull data.
  • Docker Build Cloud: We’ve removed the per-seat licenses for Build Cloud and increased the included build minutes for Pro, Team, and Business plans — enabling faster, more efficient builds across projects. Customers will have the option to add build minutes as their needs grow, but they will be surprised at how much time they save with our speedy builders. For customers using CI tools, Build Cloud’s speed can even help save on CI bills. 
  • Docker Scout: Docker Team and Docker Business plans will offer continuous vulnerability analysis for an unlimited number of Scout-enabled repositories. The integration of Docker Scout’s health scores into Docker Pro, Team, and Business plans helps customers maintain security and compliance with ease.
  • Testcontainers Cloud: Testcontainers Cloud helps customers streamline testing workflows, saving time and resources. We’ve removed the per-seat licenses for Testcontainers Cloud under the new plans and included cloud runtime minutes for Docker Pro, Docker Team, and Docker Business, available to use for Docker Desktop or in CI workflows. Customers will have the option to add runtime minutes as their needs grow.

Looking ahead

Docker continues to innovate and invest in our products, and we were most recently recognized as developers’ most used, desired, and admired developer tool in the 2024 Stack Overflow Developer Survey.  

These updates are just the beginning of our ongoing commitment to providing developers with the best tools in the industry. As we continue to invest in our tools and technologies, development teams can expect even more enhancements that will empower them to achieve their development goals. 

New plans take effect starting November 15, 2024. The Docker Hub plan limits will take effect on Feb 1, 2025. No charges on Docker Hub image pulls or storage will be incurred between November 15, 2024, and January 31, 2025. For existing annual and month-to-month customers, these new plan entitlements will take effect at their next renewal date that occurs on or after November 15, 2024, giving them ample time to review and understand the new offerings. Learn more about the new Docker subscriptions and see a detailed breakdown of features in each plan. We’re committed to ensuring a smooth transition and are here to support customers every step of the way. 

Stay tuned for more updates or reach out to learn more. And as always, thank you for being a part of the Docker community. 


FAQ  

  1. I’m a Docker Business customer, what is new in my plan? 

Docker Business list pricing remains the same, but you will now have access to more of Docker’s products:  

  • Instead of paying an additional per-seat fee, Docker Build Cloud is now available to all users in your Docker plan. Learn how to use Build Cloud
  • Docker Build Cloud included minutes are increasing from 800/mo to 1500/mo. 
  • Docker Scout now includes unlimited repos with continuous vulnerability analysis, an increase from 3. Get started with Docker Scout quickstart
  • 1500 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.
  • Docker Hub image pull rate limits have been removed.
  • 1M Docker Hub pulls per month are included. 

If you require additional Build Cloud minutes, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing. See the pricing page for more details. 

  2. I’m a Docker Team customer, what is new in my plan? 

Docker Team will now include the following benefits:  

  • Instead of paying an additional per-seat fee, Docker Build Cloud is now available to all users in your Docker plan. Learn how to use Build Cloud
  • Docker Build Cloud minutes are increasing from 400/mo to 500/mo.
  • Docker Scout now includes unlimited repos with continuous vulnerability analysis, an increase from 3. Get started with Docker Scout quickstart
  • 500 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.  
  • Docker Hub image pull rate limits will be removed.
  • 100K Docker Hub pulls per month are included.
  • The minimum number of users is 1 (lowered from 5).

Docker Team pricing will increase from $9/user/month (annual) to $15/user/month (annual) and from $11/user/month (monthly) to $16/user/month (monthly). If you require additional Build Cloud minutes, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing, or reach out to sales for invoice pricing. See the pricing page for more details. 

  3. I’m a Docker Pro customer, what is new in my plan? 

Docker Pro will now include: 

  • Docker Build Cloud minutes increased from 100/month to 200/month and no monthly fee. Learn how to use Build Cloud.
  • 2 included repos with continuous vulnerability analysis in Docker Scout. Get started with Docker Scout quickstart.  
  • 100 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.
  • Docker Hub image pull rate limits will be removed. 
  • 25K Docker Hub pulls per month are included.

Docker Pro plans will increase from $5/month (annual) to $9/month (annual) and from $7/month (monthly) to $11/month (monthly). If you require additional Build Cloud minutes, Docker Scout repos, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing. See the pricing page for more details. 

  4. I’m a Docker Personal user, what is included in my plan? 

Docker Personal plans remain free.

When you are logged into your account, you will see additional features and entitlements: 

  • 1 included repo with continuous vulnerability analysis in Docker Scout. Get started with Docker Scout quickstart.
  • Unlimited public Docker Hub repos. 
  • 1 private Docker Hub repo with 2GB storage. 
  • Updated Docker Hub image pull rate limit of 40 pulls/hr/user.

Unauthenticated users will be limited to 10 Docker Hub pulls/hr/IP address.  

Docker Personal users who want to start or continue using Docker Build Cloud may trial the service for seven days, or upgrade to a Docker Pro plan. Docker Personal users may trial Testcontainers Cloud for 30 days. 

  5. Where do I learn more about Docker Hub rate limits and storage changes? 

Check your plan’s details on the new plans overview page. For now, see the new Docker Hub Pulls Usage dashboard to understand your current usage.  

  6. When will new pricing go into effect? 

New pricing will go into effect on November 15, 2024, for all new customers. 

For all existing customers, new pricing will take effect on your next renewal date after November 15, 2024. When you renew, you will receive the benefits and entitlements of the new plans. Between now and your renewal date, your existing plan details will apply. 

  7. Can I keep my existing plan? 

If you are on an annual contract, you will keep your current plan and pricing until your next renewal date that falls after November 15, 2024. 

If you are a month-to-month customer, you may convert to an annual contract before November 14 to stay on your existing plan. You may choose between staying on your existing plan entitlements or the new comprehensive plans. After November 15, all month-to-month renewals will be on the new plans. 

  8. I have a regulatory constraint, is it possible to disable individual services? 

While most organizations will see reduced build times and improved supply chain security, some organizations may have constraints that prevent them from using all of Docker’s services. 

After November 15, the default configurations for Docker Desktop, Docker Hub, Docker Build Cloud, and Docker Scout are enabled for all users. The default configuration for Testcontainers Cloud is disabled. To change your organization’s configuration, the org owner or one of your org admins will be able to disable Docker Scout or Build Cloud in the admin console. 

  9. Can I get a refund on individual products I pay for today (Build Cloud, Scout repos, Testcontainers Cloud)? 

Your current plan will remain in effect until your first renewal date on or after November 15, 2024, for annual customers. At that time, your plan will automatically reflect your new entitlements for Docker Build Cloud and Docker Scout. If you are a current Testcontainers Cloud customer in addition to being a Docker Pro, Docker Team, or Docker Business customer, let your account manager know your org ID so that your included minutes can be applied starting November 15.  

  10. How do I get more help? 

If you have additional questions not addressed in the FAQ, contact your Docker Account Executive or CSM.  

If you need help identifying those contacts or need technical assistance, contact support.

Zero Trust and Docker Desktop: An Introduction

By: Jay Schmidt
August 13, 2024 at 13:11

Today’s digital landscape is characterized by frequent security breaches resulting in lost revenue, potential legal liability, and loss of customer trust. The Zero Trust model was devised to improve an organization’s security posture and minimize the risk and scope of security breaches.

In this post, we explore Zero Trust security and walk through several strategies for implementing Zero Trust within a Docker Desktop-based development environment. Although this overview is not exhaustive, it offers a foundational perspective that organizations can build upon as they refine and implement their own security strategies.

What is Zero Trust security?

The Zero Trust security model assumes that no entity — inside or outside the network boundary — should be automatically trusted. This approach eliminates automatic trust and mandates rigorous verification of all requests and operations before granting access to resources. Zero Trust significantly enhances security measures by consistently requiring that trust be earned.

The principles and practices of Zero Trust are detailed by the National Institute of Standards and Technology (NIST) in Special Publication 800-207 — Zero Trust Architecture. This document serves as an authoritative guide, outlining the core tenets of Zero Trust, which include strict access control, minimal privileges, and continuous verification of all operational and environmental attributes. For example, Section 2.1 of this publication elaborates on the foundational principles that organizations can adopt to implement a robust Zero Trust environment tailored to their unique security needs. By referencing these guidelines, practitioners can gain a comprehensive understanding of Zero Trust, which aids in strategically implementing its principles across network architectures and strengthening an organization’s security posture.

As organizations transition toward containerized applications and cloud-based architectures, adopting Zero Trust is essential. These environments are marked by their dynamism, with container fleets scaling and evolving rapidly to meet business demands. Unlike traditional security models that rely on perimeter defenses, these modern infrastructures require a security strategy that supports continuous change while ensuring system stability. 

Integrating Zero Trust into the software development life cycle (SDLC) from the outset is crucial. Early adoption ensures that Zero Trust principles are not merely tacked on post-deployment but are embedded within the development process, providing a foundational security framework from the beginning.

Containers and Zero Trust

The isolation of applications and environments from each other via containerization helps with the implementation of Zero Trust by making it easier to enforce access controls, apply more granular monitoring and detection rules, and audit the results.

As noted previously, these examples are specific to Docker Desktop, but you can apply the concepts to any container-based environment, including orchestration systems such as Kubernetes.

A solid foundation: Host and network

When applying Zero Trust principles to Docker Desktop, starting with the host system is important. This system should also meet Zero Trust requirements, such as using encrypted storage, limiting user privileges within the operating system, and enabling endpoint monitoring and logging. The host system’s attachment to network resources should require authentication, and all communications should be secured and encrypted.

Principle of least privilege

The principle of least privilege is a fundamental security approach stating that a user, program, or process should have only the minimum permissions necessary to perform its intended function and no more. For containers, effectively implementing this principle means applying AppArmor/SELinux and seccomp (secure computing mode) profiles, ensuring containers do not run as root, ensuring containers do not request or receive heightened privileges, and so on.
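
Many of these restrictions can be expressed directly as docker run flags. A minimal example (the image name is illustrative) that runs as a non-root user, drops all Linux capabilities, forbids privilege escalation, and mounts the root filesystem read-only:

$ docker run --rm --user 1000:1000 --cap-drop ALL \
    --security-opt no-new-privileges --read-only myorg/myapp:1.0.0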

Hardened Docker Desktop (available with a Docker Business or Docker Government subscription), however, can satisfy this requirement through the Enhanced Container Isolation (ECI) setting (a sample configuration follows the list below). When active, ECI does the following:

  • Running containers unprivileged: ECI ensures that even if a container is started with the --privileged flag, the actual processes inside the container do not have elevated privileges within the host or the Docker Desktop VM. This step is crucial for preventing privilege escalation attacks.
  • User namespace remapping: ECI uses a technique where the root user inside a container is mapped to a non-root user outside the container, in the Docker Desktop VM. This approach limits the potential damage and access scope even if a container is compromised.
  • Restricted file system access: Containers run under ECI have limited access to the file system of the host machine. This restriction prevents a compromised container from altering system files or accessing sensitive areas of the host file system.
  • Blocking sensitive system calls: ECI can block or filter system calls from containers that are typically used in attacks, such as certain types of mount operations, further reducing the risk of a breakout.
  • Isolation from the Docker Engine: ECI prevents containers from interacting directly with the Docker Engine’s API unless explicitly allowed, protecting against attacks that target the Docker infrastructure itself.
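
For organizations managing Docker Desktop centrally, ECI can be enabled and locked through the Settings Management admin-settings.json file. A minimal sketch; confirm the file location and schema version against the current Docker documentation:

{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "value": true,
    "locked": true
  }
}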

Network microsegmentation

Microsegmentation offers a way to enhance security further by controlling traffic flow among containers. Through the implementation of stringent network policies, only authorized containers are allowed to interact, which significantly reduces the risk of lateral movement in case of a security breach. For example, a payment processing container may only accept connections from specific parts of the application, isolating it from other, less secure network segments.

The concept of microsegmentation also plays a role for AI systems and workloads. By segmenting networks and data, organizations can apply controls to different parts of their AI infrastructure, effectively isolating the environments used for training, testing, and production. This isolation helps reduce the risk of data leakage between environments and can limit the blast radius of a security breach.

Docker Desktop’s robust networking provides several ways to address microsegmentation. By leveraging the bridge network for creating isolated networks within the same host or using the Macvlan network driver that allows containers to be treated as physical devices with distinct MAC addresses, administrators can define precise communication paths that align with the least privileged access principles of Zero Trust. Additionally, Docker Compose can easily manage and configure these networks, specifying which containers can communicate on predefined networks. 
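
As a sketch of this idea in Compose (service and network names are hypothetical), the payment service below sits on an internal network with no external egress, and only the api service shares that network with it:

services:
  web:
    image: myorg/web:1.0.0
    networks:
      - frontend
  api:
    image: myorg/api:1.0.0
    networks:
      - frontend
      - payments
  payment:
    image: myorg/payment:1.0.0
    networks:
      - payments

networks:
  frontend:
  payments:
    internal: true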

This setup facilitates fine-grained network policies at the infrastructure level. It also simplifies the management of container access, ensuring that strict network segmentation policies are enforced to minimize the attack surface and reduce the risk of unauthorized access in containerized environments. Additionally, Docker Desktop supports third-party network drivers, which can also be used to address this concern.

For use cases where Docker Desktop requires containers to have different egress rules than the host, “air-gapped containers” allow for the configuration of granular rules applied to containers. For example, containers can be completely restricted from the internet but allowed on the local network, or they could be proxied/firewalled to a small set of approved hosts.

Note that in Kubernetes, this type of microsegmentation and network traffic management is usually managed by a service mesh.

Authentication and authorization

Implementing strong authentication and role-based access control (RBAC) is crucial in a Docker-based Zero Trust environment. These principles need to be addressed in several different areas, starting with the host and network as noted above.

Single Sign-On (SSO) and System for Cross-Domain Identity Management (SCIM) should be enabled and used to manage user authentication to the Docker SaaS. These tools allow for better management of users, including the use of groups to enforce role and team membership at the account level. Additionally, Docker Desktop should be configured to require and enforce login to the Docker organization in use, which prevents users from logging into any other organizations or personal accounts.
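
As a hedged example, enforced sign-in to a specific organization is typically configured by placing a registry.json file on the developer’s machine (the file location varies by operating system, and myorg is a placeholder):

{
  "allowedOrgs": ["myorg"]
}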

When designing, deploying, building, and testing containers locally under Docker Desktop, implementing robust authentication and authorization mechanisms is crucial to align with security best practices and principles. It’s essential to enforce strict access controls at each stage of the container lifecycle.

This approach starts with managing registry and image access, to ensure only approved images are brought into the development process. This can be accomplished by using an internal registry and enforcing firewall rules that block access to other registries. However, an easier approach is to use Registry Access Management (RAM) and Image Access Management (IAM) — features provided by Hardened Docker Desktop — to control images and registries.

The implementation of policies and procedures around secrets management — such as using a purpose-designed secrets store — should be part of the development process. Finally, using Enhanced Container Isolation (as described above) will help ensure that container privileges are managed consistently with best practices.

This comprehensive approach not only strengthens security but also helps maintain the integrity and confidentiality of the development environment, especially when dealing with sensitive or proprietary application data.

Monitoring and auditing

Continuous monitoring and auditing of activities within the Docker environment are vital for early detection of potential security issues. These controls build on the areas identified above by allowing for the auditing and monitoring of the impact of these controls.

Docker Desktop produces a number of logs that provide insight into the operations of the entire application platform. This includes information about the local environment, the internal VM, the image store, container runtime, and more. This data can be redirected and parsed/analyzed by industry standard tooling.

Container logging is important and should be sent to a remote log aggregator for processing. Because the best development approaches require that log formats and log levels from development mirror those used in production, this data can be used not only to look for anomalies in the development process but also to provide operations teams with an idea of what production will look like.
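
For example, here is a minimal sketch of shipping container logs to a remote syslog aggregator via the Docker daemon configuration (daemon.json); the aggregator address is a placeholder, and many other log drivers exist:

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "tcp://logs.example.com:514"
  }
}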

Docker Scout

Ensuring containerized applications comply with security and privacy policies is another key part of continuous monitoring. Docker Scout is designed from the ground up to support this effort. 

Docker Scout starts with the image software bill of materials (SBOM) and continually checks against known and emerging CVEs and security policies. These policies can include detecting high-profile CVEs to be mitigated, validating that approved base images are used, verifying that only valid licenses are being used, and ensuring that a non-root user is defined in the image. Beyond that, the Docker Scout policy engine can be used to write custom policies using the wide array of data points available.  
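
A few illustrative Docker Scout CLI invocations (the image name is a placeholder):

docker scout quickview myorg/myapp:latest    # summary of known vulnerabilities
docker scout cves myorg/myapp:latest         # detailed CVE listing
docker scout policy myorg/myapp:latest       # evaluate the image against configured policies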

Immutable containers

The concept of immutable containers, which are not altered after they are deployed, plays a significant role in securing environments. By ensuring that containers are replaced rather than changed, the security of the environment is enhanced, preventing unauthorized or malicious alterations during runtime.

Docker images — more broadly, OCI-compliant images — are immutable by default. When they are deployed as containers, they become writable while they are running via the addition of a “scratch layer” on top of the immutable image. Note that this layer does not persist beyond the life of the container. When the container is removed, the scratch layer is removed as well.

When the read-only flag is added — either by adding the --read-only flag to the docker run command or by adding the read_only: true key-value pair in Docker Compose — Docker will mount the root file system read-only, which prevents writes to the container file system.

In addition to making a container immutable, it is possible to mount Docker volumes as read/write or read-only. Note that you can make the container’s root file system read-only and then use a volume read/write to better manage write access for your container.
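
For example, a sketch of running an immutable container that still allows writes to a scratch space and a dedicated data volume (the image and volume names are placeholders):

docker run --rm \
  --read-only \
  --tmpfs /tmp \
  -v app-data:/data \
  myapp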

Encryption

Ensuring that data is securely encrypted, both in transit and at rest, is non-negotiable in a secure Docker environment. Docker containers should be configured to use TLS for communications both between containers and outside the container environment. Docker images and volumes are stored locally and can benefit from the host system’s disk encryption when they are at rest.

Tool chain updates

Finally, it is important to make sure that Docker Desktop is updated to the most current version, as the Docker team is continually making improvements and mitigating CVEs as they are discovered. For more information, refer to Docker security documentation and Docker security announcements.

Overcoming challenges in Zero Trust adoption

Implementing a Zero Trust architecture with Docker Desktop is not without its challenges. Such challenges include the complexity of managing such environments, potential performance overhead, and the need for a cultural shift within organizations towards enhanced security awareness. However, the benefits of a secure, resilient infrastructure far outweigh these challenges, making the effort and investment in Zero Trust worthwhile.

Conclusion

Incorporating Zero Trust principles into Docker Desktop environments is essential for protecting modern infrastructures against sophisticated cyber threats. By understanding and implementing these principles, organizations can safeguard their applications and data more effectively, ensuring a secure and resilient digital presence.

Learn more

Docker Best Practices: Understanding the Differences Between ADD and COPY Instructions in Dockerfiles

By: Jay Schmidt
August 8, 2024, at 13:30

When you search for “Dockerfile best practices,” one of the suggestions you will find is that you always use the COPY instruction instead of the ADD instruction when adding files into your Docker image.

This blog post will explore why this suggestion exists by providing additional detail on the functionality of these two instructions. Once you understand these concepts, you may find scenarios where you can benefit from ignoring the suggestion and using the ADD command instead of COPY.

Understanding file system build context

Before diving into the differences between ADD and COPY, it’s important to understand the concept of build context. The build context is the set of files and directories that are accessible to the Docker engine when building an image. When you run a docker build command, Docker sends the content of the specified context directory (and its subdirectories) to the Docker daemon. This context forms the scope within which the COPY and ADD instructions operate.
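
For example, given the following layout, running docker build from the project root makes everything under it available as the build context, while a .dockerignore file trims what is sent to the builder (the paths are illustrative):

project/
├── Dockerfile
├── .dockerignore    # lists e.g. .git/, node_modules/, *.log
├── app/
└── requirements.txt

$ cd project
$ docker build -t myimage .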

COPY instruction

The COPY instruction is straightforward and does exactly what its name implies: It copies files and directories from a source within the build context to a destination layer in the Docker image. This instruction can be used to copy both files and directories, and all paths on the host are relative to the root of the build context.

Syntax:

COPY <src>... <dest>
  • <src>: The source files or directories on the host.
  • <dest>: The destination path inside the Docker image.

Key points

  • Basic functionality: COPY only supports copying files and directories from the host file system. It does not support URLs or automatic unpacking of compressed files.
  • Security: Because COPY only handles local files, it tends to be more predictable and secure than ADD, reducing the risk of unintentionally introducing files from external sources.
  • Use case: Best used when you need to include files from your local build context into the Docker image without any additional processing.

Example:

COPY ./app /usr/src/app
COPY requirements.txt /usr/src/app/

In this example, the contents of the local app directory are copied into the /usr/src/app directory inside the Docker image being built. The second command copies the requirements.txt file into the /usr/src/app directory as well.

ADD instruction

The ADD instruction provides the same functionality that the COPY instruction does, but it also has additional functionality that, if misunderstood, can introduce complexity and potential security risks.

Syntax:

ADD <src>... <dest>
  • <src>: The source files (directories or URLs).
  • <dest>: The destination path inside the Docker image.

Key points

  • Extended functionality: In addition to copying local files and directories from the build context, ADD provides the following advanced functionality:
    • Handle URLs: When supplied as a source, the file referenced by a URL will be downloaded to the current Docker image layer at the supplied destination path.
    • Extract local archives: When a local tar archive in a recognized compression format is supplied as a source, ADD will automatically unpack it into the current Docker image layer at the supplied destination path. Note that archives fetched from URLs are downloaded as-is and are not extracted.
  • Flexibility vs. security: Although ADD is more flexible, it does introduce risk. Downloading external URLs into the build process may allow malicious code or contents to be brought into the process. Using ADD with archives may result in unintended consequences if you do not understand how it handles archives.
  • Use case: ADD should only be used when you need specific functionality that it provides and are willing to manage the potential security issues arising from this usage.

Example:

ADD https://example.com/file.tar.gz /usr/src/app/
ADD my-archive.tar.gz /usr/src/app/

In this example, the build process first downloads https://example.com/file.tar.gz into /usr/src/app in the Docker image layer; because it comes from a remote URL, the archive is downloaded as-is and is not automatically extracted. In the next step, it takes the local file my-archive.tar.gz and extracts it into the Docker image layer under /usr/src/app.
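
Because remote archives are not unpacked by ADD, a common alternative (assuming curl and tar are available in the base image) is to download and extract in a single RUN instruction, which also lets you remove the archive in the same layer:

RUN curl -fsSL https://example.com/file.tar.gz -o /tmp/file.tar.gz \
    && tar -xzf /tmp/file.tar.gz -C /usr/src/app \
    && rm /tmp/file.tar.gz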

When to use COPY vs. ADD

  • For most use cases, COPY is the better choice due to its simplicity and security. This instruction allows you to transfer files and directories from your local context into the Docker image you are building.
  • Use ADD only when you need the additional capabilities it offers, but be mindful of potential security implications.

Remote contexts

In addition to traditional file system contexts, Docker also supports remote contexts, which can be particularly useful in cloud environments or for building images from code repositories directly. These include:

  • Git repositories: You can specify a Git repository URL as the build context, allowing Docker to clone the repository and use its content as the context.
docker build https://github.com/username/repository.git#branch
  • Remote URLs: Docker can use remote URLs for the build context. This is useful for building images directly from archives available online.
docker build http://example.com/context.tar.gz
  • OCI images: You can use an OCI image as the build context, which is useful for using pre-built images as the starting point for new builds.
docker build oci://registry.example.com/image:tag

How ADD and COPY behave in remote contexts

Note that both ADD and COPY behave slightly differently when used in a remote context.

Using COPY with remote contexts

COPY still operates within the scope of the build context. For example, when using a Git repository as the build context, COPY can copy files and directories from the cloned repository into the Docker image. It does not support copying files from URLs or other remote sources directly.

Example with Git repository as build context:

# Using a Git repository as build context
COPY ./src /app/src

In this case, COPY will copy the src directory from the Git repository (the build context) to /app/src in the Docker image.

Example with URL build context:

# Using an archive from a URL
COPY ./src /app/src

In this case, COPY will copy the src directory from the extracted archive (the build context) to /app/src in the Docker image.

Example with OCI image as build context:

# Using an OCI image as build context
COPY /path/in/oci/image /app/path

In this case, COPY will copy the contents from the specified path within the OCI image to the specified destination path in the Docker image.

Using ADD with remote contexts

The ADD instruction can still be used to download files and extract archives as well as copy files from the build context. Note that all the caveats provided about the ADD instruction above apply here as well.

Example with Git repository as build context:

# Using a Git repository as build context
ADD https://example.com/data.tar.gz /data
ADD ./src /app/src

In this example, ADD will download data.tar.gz from the URL into the /data directory in the Docker image (as a remote source, the archive is not automatically extracted). It will also copy the src directory from the Git repository (the build context) to /app/src in the Docker image.

Example with URL build context:

# Using an archive from a URL
ADD https://example.com/data.tar.gz /data
ADD ./src /app/src

In this example, ADD will download data.tar.gz from the URL into the /data directory in the Docker image without extracting it. It will also copy the src directory from the downloaded and unpacked URL archive (the build context) to /app/src in the Docker image.

Example with OCI image as build context:

# Using an OCI image as build context
ADD https://example.com/data.tar.gz /data
ADD /path/in/oci/image /app/path

In this scenario, ADD will download data.tar.gz from the URL into the /data directory in the Docker image (again, without extracting it). It will also copy the contents from the specified path within the OCI image to the specified destination path in the Docker image.

COPY vs. ADD tl;dr:

  • Prefer COPY: For most use cases, COPY is the better choice due to its simplicity and security. Use it to transfer files and directories from your local context or a remote context like a Git repository to the Docker image.
  • Use ADD with caution: Opt for ADD only when you need its additional functionalities, like downloading files from URLs or automatically extracting archives (Figure 1). Always be mindful of the potential security implications when using ADD.
Figure 1: Diagram of the concepts explained in this post.

Conclusion

Understanding the differences between ADD and COPY instructions in Dockerfiles and how they can be affected by build context can help you build more efficient and secure Docker images. Although COPY offers a straightforward way to include local files, ADD provides additional flexibility at the cost of increased complexity and potential security risks.

Learn more

Docker Best Practices: Choosing Between RUN, CMD, and ENTRYPOINT

By: Jay Schmidt
July 15, 2024, at 14:00

Docker’s flexibility and robustness as a containerization tool come with a complexity that can be daunting. Multiple methods are available to accomplish similar tasks, and users must understand the pros and cons of the available options to choose the best approach for their projects.

One confusing area concerns the RUN, CMD, and ENTRYPOINT Dockerfile instructions. In this article, we will discuss the differences between these instructions and describe use cases for each.

RUN

The RUN instruction is used in Dockerfiles to execute commands that build and configure the Docker image. These commands are executed during the image build process, and each RUN instruction creates a new layer in the Docker image. For example, if you create an image that requires specific software or libraries installed, you would use RUN to execute the necessary installation commands.

The following example shows how to instruct the Docker build process to update the apt cache and install Apache during an image build:

RUN apt update && apt -y install apache2

RUN instructions should be used judiciously to keep the image layers to a minimum, combining related commands into a single RUN instruction where possible to reduce image size.

CMD

The CMD instruction specifies the default command to run when a container is started from the Docker image. If no command is specified during the container startup (i.e., in the docker run command), this default is used. CMD can be overridden by supplying command-line arguments to docker run.

CMD is useful for setting a default command and parameters that can easily be overridden. It is often used in images to define default run parameters that users can replace from the command line when the container is run. 

For example, by default, you might want a web server to start, but users could override this to run a shell instead:

CMD ["apache2ctl", "-DFOREGROUND"]

Users can start the container with docker run -it <image> /bin/bash to get a Bash shell instead of starting Apache.  
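
For example, with the Dockerfile above (the mywebserver image tag is hypothetical):

$ docker run -d mywebserver                # runs apache2ctl -DFOREGROUND
$ docker run -it mywebserver /bin/bash     # CMD is replaced; a shell starts instead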

ENTRYPOINT

The ENTRYPOINT instruction sets the default executable for the container. It is similar to CMD but, unlike CMD, is not overridden by the command-line arguments passed to docker run. Instead, any command-line arguments are appended to the ENTRYPOINT command.

Note: Use ENTRYPOINT when you need your container to always run the same base command, and you want to allow users to append additional commands at the end. 

ENTRYPOINT is particularly useful for turning a container into a standalone executable. For example, suppose you are packaging a custom script that requires arguments (e.g., “my_script extra_args”). In that case, you can use ENTRYPOINT to always run the script process (“my_script”) and then allow the image users to specify the “extra_args” on the docker run command line. You can do the following:

ENTRYPOINT ["my_script"]

Combining CMD and ENTRYPOINT

The CMD instruction can be used to provide default arguments to an ENTRYPOINT if it is specified in the exec form. This setup allows the entry point to be the main executable and CMD to specify additional arguments that can be overridden by the user.

For example, you might have a container that runs a Python application where you always want to use the same application file but allow users to specify different command-line arguments:

ENTRYPOINT ["python", "/app/my_script.py"]
CMD ["--default-arg"]

Running docker run myimage --user-arg executes python /app/my_script.py --user-arg.
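
A short sketch of how the overrides play out at the command line (the myimage tag is a placeholder):

$ docker run myimage                           # python /app/my_script.py --default-arg
$ docker run myimage --user-arg                # python /app/my_script.py --user-arg
$ docker run --entrypoint python myimage -V    # replaces the ENTRYPOINT entirely; runs python -V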

The following table provides an overview of these commands and use cases.

Command description and use cases

Command | Description | Use Case
CMD | Defines the default executable of a Docker image. It can be overridden by docker run arguments. | Utility images that allow users to pass different executables and arguments on the command line.
ENTRYPOINT | Defines the default executable. It can only be overridden by the --entrypoint flag of docker run. | Images built for a specific purpose where overriding the default executable is not desired.
RUN | Executes commands to build image layers. | Building an image.

What is PID 1 and why does it matter?

In the context of Unix and Unix-like systems, including Docker containers, PID 1 refers to the first process started during system boot. All other processes descend from PID 1, which in the process tree model is the ancestor of every other process in the system. 

In Docker containers, the process that runs as PID 1 is crucial because it is responsible for managing all other processes inside the container. Additionally, PID 1 is the process that receives and handles signals from the Docker host. For example, a SIGTERM sent to the container will be caught and processed by PID 1, and the container should then shut down gracefully.

When commands are executed in Docker using the shell form, a shell process (/bin/sh -c) typically becomes PID 1, and most shells do not forward signals to their child processes, potentially leading to unclean shutdowns of the container. In contrast, when using the exec form, the command runs directly as PID 1 without involving a shell, which allows it to receive and handle signals directly. 

This behavior ensures that the container can gracefully stop, restart, or handle interruptions, making the exec form preferable for applications that require robust and responsive signal handling.
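
To make this concrete, here is a sketch using nginx (any long-running server exhibits the same behavior):

# Shell form: /bin/sh -c is PID 1; SIGTERM from 'docker stop' is not
# forwarded, so Docker waits out the grace period and then sends SIGKILL.
CMD nginx -g "daemon off;"

# Exec form: nginx runs as PID 1, receives SIGTERM directly, and
# shuts down gracefully.
CMD ["nginx", "-g", "daemon off;"]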

Shell and exec forms

In the previous examples, we used two ways to pass arguments to the RUN, CMD, and ENTRYPOINT instructions. These are referred to as shell form and exec form. 

Note: The key visual difference is that the exec form is passed as a comma-delimited array of commands and arguments with one argument/command per element. Conversely, shell form is expressed as a string combining commands and arguments. 

Each form has implications for executing commands within containers, influencing everything from signal handling to environment variable expansion. The following table provides a quick reference guide for the different forms.

Shell and exec form reference

Form | Description | Example
Shell form | Takes the form of <INSTRUCTION> <COMMAND>. | CMD echo TEST or ENTRYPOINT echo TEST
Exec form | Takes the form of <INSTRUCTION> ["EXECUTABLE", "PARAMETER"]. | CMD ["echo", "TEST"] or ENTRYPOINT ["echo", "TEST"]

In the shell form, the command is run in a subshell, typically /bin/sh -c on Linux systems. This form is useful because it allows shell processing (like variable expansion, wildcards, etc.), making it more flexible for certain types of commands (see this shell scripting article for examples of shell processing). However, it also means that the process running your command isn’t the container’s PID 1, which can lead to issues with signal handling because signals sent by Docker (like SIGTERM for graceful shutdowns) are received by the shell rather than the intended process.

The exec form does not invoke a command shell. This means the command you specify is executed directly as the container’s PID 1, which is important for correctly handling signals sent to the container. Additionally, this form does not perform shell expansions, so it’s more secure and predictable, especially for specifying arguments or commands from external sources.
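
For example, only the shell form expands environment variables; when expansion is genuinely needed with the exec form, an explicit shell can be invoked:

# Shell form: the shell expands $HOME
CMD echo "$HOME"

# Exec form: no shell is involved, so the literal string $HOME is printed
CMD ["echo", "$HOME"]

# Exec form with an explicit shell, when expansion is required
CMD ["/bin/sh", "-c", "echo $HOME"]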

Putting it all together

To illustrate the practical application and nuances of Docker’s RUN, CMD, and ENTRYPOINT instructions, along with the choice between shell and exec forms, let’s review some examples. These examples demonstrate how each instruction can be utilized effectively in real-world Dockerfile scenarios, highlighting the differences between shell and exec forms. 

Through these examples, you’ll better understand when and how to use each directive to tailor container behavior precisely to your needs, ensuring proper configuration, security, and performance of your Docker containers. This hands-on approach will help consolidate the theoretical knowledge we’ve discussed into actionable insights that can be directly applied to your Docker projects.

RUN instruction

For RUN, used during the Docker build process to install packages or modify files, choosing between shell and exec form can depend on the need for shell processing. The shell form is necessary for commands that require shell functionality, such as pipelines or file globbing. However, the exec form is preferable for straightforward commands without shell features, as it reduces complexity and potential errors.

# Shell form, useful for complex scripting
RUN apt-get update && apt-get install -y nginx

# Exec form, for direct command execution
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "nginx"]

CMD and ENTRYPOINT

These instructions control container runtime behavior. Using exec form with ENTRYPOINT ensures that the container’s main application handles signals directly, which is crucial for proper startup and shutdown behavior.  CMD can provide default parameters to an ENTRYPOINT defined in exec form, offering flexibility and robust signal handling.

# ENTRYPOINT with exec form for direct process control
ENTRYPOINT ["httpd"]

# CMD provides default parameters, can be overridden at runtime
CMD ["-D", "FOREGROUND"]

Signal handling and flexibility

Using ENTRYPOINT in exec form and CMD to specify parameters ensures that Docker containers can handle operating system signals gracefully, respond to user inputs dynamically, and maintain secure and predictable operations. 

This setup is particularly beneficial for containers that run critical applications needing reliable shutdown and configuration behaviors. The following table shows key differences between the forms.

Key differences between shell and exec

Aspect | Shell Form | Exec Form
Form | Commands without [] brackets. Run by the container’s shell, e.g., /bin/sh -c. | Commands with [] brackets. Run directly, not through a shell.
Variable Substitution | Inherits environment variables from the shell, such as $HOME and $PATH. | Does not inherit shell environment variables but behaves the same for ENV instruction variables.
Shell Features | Supports sub-commands, piping output, chaining commands, I/O redirection, etc. | Does not support shell features.
Signal Trapping & Forwarding | Most shells do not forward process signals to child processes. | Directly traps and forwards signals like SIGINT.
Usage with ENTRYPOINT | Can cause issues with signal forwarding. | Recommended due to better signal handling.
CMD as ENTRYPOINT Parameters | Not possible with the shell form. | If the first item in the array is not a command, all items are used as parameters for the ENTRYPOINT.

Figure 1 provides a decision tree for using RUN, CMD, and ENTRYPOINT in building a Dockerfile.

Figure 1: Decision tree — RUN, CMD, ENTRYPOINT.

Figure 2 shows a decision tree to help determine when to use exec form or shell form.

Figure 2: Decision tree — exec vs. shell form.

Examples

The following section walks through the high-level differences between CMD and ENTRYPOINT. The RUN instruction is not included in these examples, because the only decision to make there is between the two forms covered above.

Test Dockerfile

# syntax=docker/dockerfile:1.3-labs
# The parser directive above selects Dockerfile syntax version 1.3-labs.
# It must be the very first line of the Dockerfile; if anything precedes
# it, it is treated as an ordinary comment and ignored.

# Use the Ubuntu 20.04 image as the base image
FROM ubuntu:20.04

# Run the following commands inside the container:
# 1. Update the package lists for upgrades and new package installations
# 2. Install the apache2-utils package (which includes the 'ab' tool)
# 3. Remove the package lists to reduce the image size
#
# This is all run in a HEREDOC; see
# https://www.docker.com/blog/introduction-to-heredocs-in-dockerfiles/
# for more details.
#
RUN <<EOF
apt-get update;
apt-get install -y apache2-utils;
rm -rf /var/lib/apt/lists/*;
EOF

# Set the default command
CMD ab

First build

We will build this image and tag it as ab.

$ docker build -t ab .

[+] Building 7.0s (6/6) FINISHED                                                               docker:desktop-linux
 => [internal] load .dockerignore                                                                              0.0s
 => => transferring context: 2B                                                                                0.0s
 => [internal] load build definition from Dockerfile                                                           0.0s
 => => transferring dockerfile: 730B                                                                           0.0s
 => [internal] load metadata for docker.io/library/ubuntu:20.04                                                0.4s
 => CACHED [1/2] FROM docker.io/library/ubuntu:20.04@sha256:33a5cc25d22c45900796a1aca487ad7a7cb09f09ea00b779e  0.0s
 => [2/2] RUN <<EOF (apt-get update;...)                                                                       6.5s
 => exporting to image                                                                                         0.0s
 => => exporting layers                                                                                        0.0s
 => => writing image sha256:99ca34fac6a38b79aefd859540f88e309ca759aad0d7ad066c4931356881e518                   0.0s
 => => naming to docker.io/library/ab 

Run with CMD ab

Without any arguments, we get a usage block as expected.

$ docker run ab
ab: wrong number of arguments
Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
    -n requests     Number of requests to perform
    -c concurrency  Number of multiple requests to make at a time
    -t timelimit    Seconds to max. to spend on benchmarking
                    This implies -n 50000
    -s timeout      Seconds to max. wait for each response
                    Default is 30 seconds
<-- SNIP -->

However, if I run ab and include a URL to test, I initially get an error:

$ docker run --rm ab https://jayschmidt.us
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "https://jayschmidt.us": stat https://jayschmidt.us: no such file or directory: unknown.

The issue here is that the string supplied on the command line — https://jayschmidt.us — is overriding the CMD instruction, and that is not a valid command, resulting in an error being thrown. So, we need to specify the command to run:

$ docker run --rm  ab ab https://jayschmidt.us/
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking jayschmidt.us (be patient).....done


Server Software:        nginx
Server Hostname:        jayschmidt.us
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-ECDSA-AES256-GCM-SHA384,256,256
Server Temp Key:        X25519 253 bits
TLS Server Name:        jayschmidt.us

Document Path:          /
Document Length:        12992 bytes

Concurrency Level:      1
Time taken for tests:   0.132 seconds
Complete requests:      1
Failed requests:        0
Total transferred:      13236 bytes
HTML transferred:       12992 bytes
Requests per second:    7.56 [#/sec] (mean)
Time per request:       132.270 [ms] (mean)
Time per request:       132.270 [ms] (mean, across all concurrent requests)
Transfer rate:          97.72 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       90   90   0.0     90      90
Processing:    43   43   0.0     43      43
Waiting:       43   43   0.0     43      43
Total:        132  132   0.0    132     132

Run with ENTRYPOINT

In this run, we remove the CMD ab instruction from the Dockerfile, replace it with ENTRYPOINT ["ab"], and then rebuild the image.

This is similar to but different from the CMD instruction: when you use ENTRYPOINT, you cannot override the command unless you use the --entrypoint flag on the docker run command. Instead, any arguments passed to docker run are treated as arguments to the ENTRYPOINT.

$ docker run --rm  ab "https://jayschmidt.us/"
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking jayschmidt.us (be patient).....done


Server Software:        nginx
Server Hostname:        jayschmidt.us
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-ECDSA-AES256-GCM-SHA384,256,256
Server Temp Key:        X25519 253 bits
TLS Server Name:        jayschmidt.us

Document Path:          /
Document Length:        12992 bytes

Concurrency Level:      1
Time taken for tests:   0.122 seconds
Complete requests:      1
Failed requests:        0
Total transferred:      13236 bytes
HTML transferred:       12992 bytes
Requests per second:    8.22 [#/sec] (mean)
Time per request:       121.709 [ms] (mean)
Time per request:       121.709 [ms] (mean, across all concurrent requests)
Transfer rate:          106.20 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       91   91   0.0     91      91
Processing:    31   31   0.0     31      31
Waiting:       31   31   0.0     31      31
Total:        122  122   0.0    122     122

What about syntax?

In the example above, we use ENTRYPOINT ["ab"] syntax to wrap the command we want to run in square brackets and quotes. However, it is possible to specify ENTRYPOINT ab (without quotes or brackets). 

Let’s see what happens when we try that.

$ docker run --rm  ab "https://jayschmidt.us/"
ab: wrong number of arguments
Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
    -n requests     Number of requests to perform
    -c concurrency  Number of multiple requests to make at a time
    -t timelimit    Seconds to max. to spend on benchmarking
                    This implies -n 50000
    -s timeout      Seconds to max. wait for each response
                    Default is 30 seconds
<-- SNIP -->

Your first thought will likely be to re-run the docker run command as we did for CMD ab above, giving both the executable and the argument:

$ docker run --rm ab ab "https://jayschmidt.us/"
ab: wrong number of arguments
Usage: ab [options] [http[s]://]hostname[:port]/path
Options are:
    -n requests     Number of requests to perform
    -c concurrency  Number of multiple requests to make at a time
    -t timelimit    Seconds to max. to spend on benchmarking
                    This implies -n 50000
    -s timeout      Seconds to max. wait for each response
                    Default is 30 seconds
<-- SNIP -->

This is because ENTRYPOINT can only be overridden if you explicitly add the --entrypoint argument to the docker run command. Moreover, the shell form of ENTRYPOINT wraps the command in /bin/sh -c and ignores any arguments passed to docker run, which is why the URL never reaches ab in either attempt. The takeaway is to use the exec form of ENTRYPOINT when you want to force the use of a given executable in the container while still accepting arguments at run time.

Wrapping up: Key takeaways and best practices

The decision-making process involving the use of RUN, CMD, and ENTRYPOINT, along with the choice between shell and exec forms, showcases Docker’s intricate nature. Each command serves a distinct purpose in the Docker ecosystem, impacting how containers are built, operate, and interact with their environments. 

By selecting the right command and form for each specific scenario, developers can construct Docker images that are more reliable, secure, and optimized for efficiency. This level of understanding and application of Docker’s commands and their formats is crucial for fully harnessing Docker’s capabilities. Implementing these best practices ensures that applications deployed in Docker containers achieve maximum performance across various settings, enhancing development workflows and production deployments.

Learn more

Mastering Kubernetes Testing with Kyverno Chainsaw!

April 8, 2024, at 16:08

Dive deep into the world of Kubernetes and discover the best practices for testing your resources with precision and confidence. In this tutorial, we focus on ensuring your Kubernetes deployments, services, and entire cluster configurations stand up to the highest standards of quality and reliability.

Get ready for a review and hands-on walkthrough on utilizing Kyverno Chainsaw to test your Kubernetes resources.

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: https://gist.github.com/vfarcic/c3053a9639f3ce9ab344ad6addc45a6c
🔗 Chainsaw: https://kyverno.github.io/chainsaw
🎬 Kubernetes Testing Techniques with KUTTL: https://youtu.be/ZSTQQNu4laY
🎬 Say Goodbye to Makefile – Use Taskfile to Manage Tasks in CI/CD Pipelines and Locally: https://youtu.be/Z7EnwBaJzCk

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits and we’ll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below)

▬▬▬▬▬▬ 🚀 Livestreams & podcasts 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Follow me on Twitter: https://twitter.com/vfarcic
➡ Follow me on LinkedIn: https://www.linkedin.com/in/viktorfarcic/

Nix for Everyone: Unleash Devbox for Simplified Development

April 1, 2024, at 16:09

Simplify your development game with Devbox acting as a simplification layer on top of Nix. Use it to install all the tools required to work on a project and create ephemeral environments that can run as local shells, remotely as Dev Containers for GitHub Codespaces or DevPod, in Docker containers, or in CI/CD pipelines.

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: https://gist.github.com/vfarcic/96b5b06a95b1db5f9a8939a8fa850827
🔗 Devbox: https://www.jetpack.io/devbox
🎬 Say Goodbye to Containers – Ephemeral Environments with Nix Shell: https://youtu.be/0ulldVwZiKA
🎬 Say Goodbye to Makefile – Use Taskfile to Manage Tasks in CI/CD Pipelines and Locally: https://youtu.be/Z7EnwBaJzCk

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits and we’ll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below)

▬▬▬▬▬▬ 🚀 Livestreams & podcasts 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Follow me on Twitter: https://twitter.com/vfarcic
➡ Follow me on LinkedIn: https://www.linkedin.com/in/viktorfarcic/

How to Propagate Secrets Everywhere with External Secrets Operator (ESO) and Crossplane

March 18, 2024, at 16:07

We dive into the powerful synergy between External Secrets Operator (ESO) and Crossplane to efficiently manage and propagate secrets across your Kubernetes clusters, databases, and secrets managers. Learn how to securely and seamlessly integrate with cloud providers’ secret management systems using ESO, and see how to leverage Crossplane’s infrastructure as code capabilities to ensure your secrets are consistently deployed wherever they’re needed.

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: https://gist.github.com/vfarcic/216c589df4b4a8976ad48f6a79f95158
🔗 External Secrets Operator (ESO): https://external-secrets.io
🎬 Manage Kubernetes Secrets With External Secrets Operator (ESO): https://youtu.be/SyRZe5YVCVk
🎬 Crossplane – GitOps-based Infrastructure as Code through Kubernetes API: https://youtu.be/n8KjVmuHm7A
🎬 How To Shift Left Infrastructure Management Using Crossplane Compositions: https://youtu.be/AtbS1u2j7po
🎬 Crossplane Composition Functions: Unleashing the Full Potential: https://youtu.be/jjtpEhvwgMw
🎬 OpenFunction: The Best Way to Run Serverless Functions on Kubernetes?: https://youtu.be/UGysOX84v2c
🔗 Kubernetes Compositions: https://github.com/vfarcic/crossplane-kubernetes/tree/main/package
🔗 SQL Compositions: https://github.com/vfarcic/crossplane-sql/tree/main/package
🎬 Kubernetes Deployment Order and Dependencies Demystified: https://youtu.be/4-WpJ49MDG8
🎬 Argo CD Synchronization is BROKEN! It Should Switch to Eventual Consistency!: https://youtu.be/t1Fdse-F9Jw

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits and we’ll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below)

▬▬▬▬▬▬ 🚀 Livestreams & podcasts 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Follow me on Twitter: https://twitter.com/vfarcic
➡ Follow me on LinkedIn: https://www.linkedin.com/in/viktorfarcic/

Dagger: The Missing Ingredient for Your Disastrous CI/CD Pipeline

December 18, 2023, at 16:11

In this video, we will take a look at some of the common mistakes that people make when building CI/CD pipelines, and we will show you how Dagger can help you to avoid these mistakes.

Dagger is a set of libraries that enables us to write CI (not CI/CD) pipelines in a variety of languages (NodeJS, Python, Elixir, etc.), that can run anywhere (locally, remotely, or in other pipeline tools), and that are based on Docker or other container runtimes.

It replaces many of the tasks we normally write in Jenkins, GitHub Actions, Argo Workflows, Tekton, CircleCI, and other remote pipeline solutions.

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬

➡ Gist with the commands: manuscript/pipelines/dagger.sh
🔗 Dagger: https://dagger.io
🎬 Your CI/CD Pipelines Are Wrong – From Monoliths To Events: https://youtu.be/TSQ0QpfCi1c
🎬 Is CUE The Perfect Language For Kubernetes Manifests (Helm Templates Replacement)?: https://youtu.be/m6g0aWggdUQ
🎬 Is Timoni With CUE a Helm Replacement?: https://youtu.be/bbE1BFCs548

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits and we’ll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below)

▬▬▬▬▬▬ 🚀 Livestreams & podcasts 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Follow me on Twitter: https://twitter.com/vfarcic
➡ Follow me on LinkedIn: https://www.linkedin.com/in/viktorfarcic/

Acorn: Build and Deploy Cloud-Native Applications More Easily and Efficiently

December 11, 2023, at 16:08

Acorn is a development platform that makes it easy to build and deploy cloud-native applications in Kubernetes or as a SaaS service. It uses a CUE-based DSL as its configuration language, which makes it easy to define, manage, and deploy applications using a single, declarative language.

In this video, we will show you how to use Acorn to build and deploy a cloud-native application and discuss the pros and cons in an attempt to discover whether Acorn might be a good choice.

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: https://gist.github.com/vfarcic/9f2d4f5952f03842897ce6253d1d1ab5
🔗 Acorn: https://acorn.io
🎬 Is Timoni With CUE a Helm Replacement?: https://youtu.be/bbE1BFCs548
🎬 Is CUE The Perfect Language For Kubernetes Manifests (Helm Templates Replacement)?: https://youtu.be/m6g0aWggdUQ
🎬 How To Inspect, Plan, Migrate DB Schemas With Atlas: https://youtu.be/JLvHpXJ1hHk
🎬 Kubernetes? Database Schema? Schema Management with Atlas Operator: https://youtu.be/1iZoEFzlvhM
🎬 OpenFunction: The Best Way to Run Serverless Functions on Kubernetes?: https://youtu.be/UGysOX84v2c

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits and we’ll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below)

▬▬▬▬▬▬ 🚀 Livestreams & podcasts 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Follow me on Twitter: https://twitter.com/vfarcic
➡ Follow me on LinkedIn: https://www.linkedin.com/in/viktorfarcic/

Stop Giving Permanent Access To Anyone: Just-in-Time with Apono

September 11, 2023, at 14:34

Granting permanent access to anyone for anything is dangerous and unnecessary. Instead, we should be using the just-in-time approach, and Apono might be just the solution for that. Enhance security, prevent breaches, and empower us to control access more effectively while easily giving temporary access to whoever needs it with Apono.

▬▬▬▬▬▬ 🔗 Additional Info 🔗 ▬▬▬▬▬▬
➡ Gist with the commands: https://gist.github.com/vfarcic/557b6cfcf0655a78e6f2a146564e4861

▬▬▬▬▬▬ 💰 Sponsorships 💰 ▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits and we’ll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below)

▬▬▬▬▬▬ 🚀 Livestreams & podcasts 🚀 ▬▬▬▬▬▬
🎤 Podcast: https://www.devopsparadox.com/
💬 Live streams: https://www.youtube.com/c/DevOpsParadox

▬▬▬▬▬▬ 👋 Contact me 👋 ▬▬▬▬▬▬
➡ Follow me on Twitter: https://twitter.com/vfarcic
➡ Follow me on LinkedIn: https://www.linkedin.com/in/viktorfarcic/
