Today, we’re excited to announce the release of Docker Build checks with Docker Desktop 4.33. Docker Build checks help your team learn and follow best practices for building container images. When you run a Docker Build, you will get a list of warnings for any check violations detected in your build. Taking a proactive approach and resolving Build warnings and issues early will save you time and headaches downstream.
Why did we create Docker Build checks?
During conversations with developers, we found that many struggle to learn and follow best practices for building container images. According to our 2024 State of Application Development Report, 35% of Docker users reported that creating and editing Dockerfiles is one of their top three tasks. Yet creating Dockerfiles was also the task for which respondents most often sought support, selected by 55%.
Developers often don’t have the luxury of reading through the Docker Build docs; they make whatever changes get things working and move on. A build might “work” when you run docker build, but a poorly written Dockerfile can still introduce quality issues. For example, it may be:
Hard to maintain or update
Prone to hidden and unexpected bugs
Suboptimal in performance
In our conversations with Docker users, we heard that they want to optimize their Dockerfiles to improve build performance, aren’t aware of current best practices, and would like to be guided as they build.
Investigating and fixing build issues wastes time. We created Docker Build checks to empower developers to write well-structured Dockerfiles from the get-go and learn from existing best practices. With Build checks, your team spends less time on build issues and more on innovation and coding.
Why should you use Docker Build checks?
You want to write better Dockerfiles and save time!
We have collected a set of best practices from the community of build experts and codified them into Docker Build tooling. You can use Docker Build checks to evaluate all stages of your local and CI workflows, including multi-stage builds and Bake, and deep dive in the Docker Desktop Builds view. You can also choose which rules to skip.
You can access Docker Build checks in the CLI and in the Docker Desktop Builds view.
More than just linting: Docker Build checks are powerful and fast
Linting tools typically just evaluate the text files against a set of rules. As a native part of Docker Build, the rules in Docker Build checks are more powerful and accurate than just linting. Docker Build checks evaluate the entire build, including the arguments passed in and the base images used. These checks are quick enough to be run in real-time as you edit your Dockerfile. You can quickly evaluate a build without waiting for a full build execution.
Check your local builds
A good practice is to evaluate a new or updated Dockerfile before committing or sharing your changes. Running docker build will now give you an overview of issues and warnings in your Dockerfile.
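For instance, a small Dockerfile like the hypothetical one below would trigger several of the built-in checks:
# A hypothetical Dockerfile that violates several checks:
# - "as" is lowercase while "FROM" is uppercase (FromAsCasing)
# - $MESSAGES is undefined, a likely typo for $MESSAGE (UndefinedVar)
# - CMD uses the shell form instead of the JSON form (JSONArgsRecommended)
FROM alpine as base
ARG MESSAGE="Hello, world!"
RUN echo "$MESSAGES"
CMD echo "done"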
Figure 1: A Docker Build with four check warnings displayed.
To get more information about these specific issues, you can specify the debug flag to the Docker CLI with docker --debug build. This information includes the type of warning, where it occurs, and a link to more information on how to resolve it.
Figure 2: Build debug output for the check warnings.
Quickly check your build
Running these checks during a build is great, but waiting for the complete build to run each time you make a change or fix an issue can be time-consuming. For this reason, we added the --check flag to the build command.
# The check flag can be added anywhere as part of your build command
docker build . --check
docker build --check .
docker build --build-arg VERSION=latest --platform linux/arm64 . --check
As illustrated in the following figure, appending the flag to your existing build command will do the same full evaluation of the build configuration without executing the entire build. This faster feedback typically completes in less than a second, making for a smoother development process.
Figure 3: Running check of build.
Check your CI builds
By default, running a Docker build with warnings will not cause the build to fail (return a non-zero exit code). However, to catch any regressions in your CI builds, add the following declarations to instruct the checks to generate errors.
# syntax=docker/dockerfile:1
# check=error=true
FROM alpine
CMD echo "Hello, world!"
Checking multi-stage builds in CI
During a build, only the specified stage (target) and its dependencies are executed. To get a complete evaluation of your Dockerfile, we recommend adding a separate check step to your workflow, similar to running automated tests before executing the full build.
If any warnings are detected, the check returns a non-zero exit code, which causes the workflow to fail and catches the issue.
docker build --check .
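In a CI script, that single command can act as a fast gate in front of the real build. A minimal sketch, with an illustrative stage name and image tag:
# Fails with a non-zero exit code on any check violation
docker build --check .
# Only runs if the check above passed
docker build --target production -t myapp:ci .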
Checking builds in Docker Build Cloud
Of course, this also works seamlessly with Docker Build Cloud, both locally and through CI. Use your existing cloud builders to evaluate your builds. Your team now has the combined benefit of Docker Build Cloud performance with the reassurance that the build will align with best practices. In fact, as we expand our checks, you should see even better performance from your Docker Build Cloud builds.
Figure 4: Running checks in Docker Build Cloud.
Configure rules
You have the flexibility to configure rules in Build checks with a skip argument. You can also specify skip=all or skip=none to toggle the rules on and off. Here’s an example of skipping the JSONArgsRecommended and StageNameCasing rules:
# syntax=docker/dockerfile:1
# check=skip=JSONArgsRecommended,StageNameCasing
FROM alpine AS BASE_STAGE
CMD echo "Hello, world!"
Dive deep into Docker Desktop Builds view
In the Docker Desktop Builds view, you can see the build warnings in context. Locating the cause of warnings in your Dockerfile and understanding how to resolve them is now quick and easy.
As with build errors, warnings are shown inline with your Dockerfile when inspecting a build in Docker Desktop:
Figure 5: Build checks warnings in Docker Desktop Builds view.
What’s next?
More checks
We are excited about how the new Build checks help you apply best practices to your Dockerfiles, but this is just the start. In addition to the current set of checks, we plan to add even more to provide a comprehensive evaluation of your builds. Further, we look forward to including custom checks and policies for your Docker builds.
IDE integration
The earlier you identify issues in your builds, the easier and less costly it is to resolve them. We plan to integrate Build checks with your favorite IDEs so you can get real-time feedback as you type.
Figure 6: Check violations displaying in VS Code.
GitHub Actions and Docker Desktop
You can already see Build check warnings in Docker Desktop, and more detailed insights are coming to Docker Desktop soon. As you may have heard, we recently announced the beta release of Inspecting Docker Builds in GitHub Actions, and we plan to build on this new functionality to include support for investigating check warnings.
Get started now
To get started with Docker Build checks, upgrade to Docker Desktop 4.33 today and try them out with your existing Dockerfiles. Head over to our documentation for a more detailed breakdown of Build checks.
Learn more
Authenticate and update to receive your subscription level’s newest Docker Desktop features.
What else is new in Docker Desktop 4.33? GA releases of Docker Debug and Docker Build checks, plus enhanced configuration integrity checks.
We are excited to announce that the latest BuildKit release, v0.13.0, contains experimental Windows Containers support. BuildKit has been around for many years and has been the default build engine on Linux since Docker Engine 23.0.0.
BuildKit is a toolkit for converting source code to build artifacts (like container images) in an efficient, expressive, and repeatable manner. BuildKit introduced the following benefits as compared with the previous Docker Builder:
Parallelize building independent build stages and skip any unused stages.
Incrementally transfer only the changed files in your build context between builds, and skip the transfer of unused files in your build context.
Use Dockerfile frontend implementations with many new features.
Avoid side effects with the rest of the API (intermediate images and containers).
Prioritize your build cache for automatic pruning.
Since 2018, Windows Container customers have been asking for Windows support for BuildKit, as seen in the BuildKit repo and Windows Containers repo, with hundreds of reactions and comments. We have listened to our users and focused resources in the past year to light up Windows Containers support on BuildKit.
Until now, we only shipped the Buildx client on Windows for building Linux images and some very limited Windows images using cross-compilation. Today, we are introducing experimental support for Windows Containers in BuildKit, with the aim of making this available soon in your standard Docker Build.
What’s next?
In the upcoming months, we will work toward further improvements, including:
General Availability (GA) ready: Improving release materials, including guides and documentation.
Integration with Docker Engine: So you can just run docker build.
OCI worker support: On Linux, there is an option to run BuildKit with only runc using the OCI worker. Currently, only the containerd worker is supported for Windows.
Container driver: Add support for running in the container driver.
Image outputs: Some image outputs supported by Linux may not work on Windows and need to be tested and assessed. These include exporting an image to multiple registries, checking if keys for image output are supported, and testing multi-platform image-building support.
Building other artifacts: BuildKit can be used to build other artifacts beyond container images. Work needs to be done in this area to cross-check whether other artifacts, such as binaries, libraries, and documentation, are supported on Windows as they are on Linux.
Running buildkitd doesn’t require Admin: Currently, running buildkitd on Windows requires admin privileges. We will be looking into running buildkitd on low privileges, aka “rootless”.
Export cache: Investigations need to be done to confirm whether specific cache exporters (inline, registry, local, gha [GitHub Actions], s3, azblob) are also supported on Windows.
Linux parity: Identifying, assessing, and closing the feature parity gap between Windows and Linux.
Walkthrough — Build a basic “Hello World” image with BuildKit and Windows Containers
Let’s walk through the process of setting up BuildKit, including the necessary dependencies, and show how to build a basic Windows image. For feedback and issues, file a ticket at Issues · moby/buildkit (github.com) tagged with area/windows.
The platform requirements are listed below. In our scenario, we will be running a nanoserver:ltsc2022 base image with AMD64.
Architecture: AMD64, Arm64 (binaries available but not officially tested yet).
Supported operating systems: Windows Server 2019, Windows Server 2022, Windows 11.
Base images: servercore:ltsc2019, servercore:ltsc2022, nanoserver:ltsc2022. See the compatibility map.
The workflow will cover the following steps:
Enable Windows Containers.
Install containerd.
Install BuildKit.
Build a simple “Hello World” image.
1. Enable Windows Containers
Start a PowerShell terminal in admin privilege mode. Run the following command to ensure the Containers feature is enabled:
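# Enable the Containers feature (requires an elevated PowerShell session)
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All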
If you see RestartNeeded as True on your setup, restart your machine and reopen an Administrator PowerShell terminal (Figure 1). Otherwise, continue to the next step.
Figure 1: Enabling Windows Containers in PowerShell.
2. Install containerd
Next, we need to install containerd, which is used as the container runtime for managing containers and images.
Note: We currently only support the containerd worker. In the future, we plan to add support for the OCI worker, which uses runc and will therefore remove this dependency.
Run the following script to install the latest containerd release. If you have containerd already installed, skip the script below and run Start-Service containerd to start the containerd service.
Note: containerd v1.7.7+ is required.
# If containerd previously installed run:
Stop-Service containerd
# Download and extract desired containerd Windows binaries
$Version="1.7.13" # update to your preferred version
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
tar.exe xvf .\containerd-windows-amd64.tar.gz
# Copy the extracted binaries into Program Files
Copy-Item -Path .\bin\* -Destination (New-Item -Type Directory $Env:ProgramFiles\containerd -Force) -Recurse -Force
# add the binaries (containerd.exe, ctr.exe) in $env:Path
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + [IO.Path]::PathSeparator + "$Env:ProgramFiles\containerd"
[Environment]::SetEnvironmentVariable( "Path", $Path, "Machine")
# reload path, so you don't have to open a new PS terminal later if needed
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# configure
containerd.exe config default | Out-File $Env:ProgramFiles\containerd\config.toml -Encoding ascii
# Review the configuration. Depending on setup you may want to adjust:
# - the sandbox_image (Kubernetes pause image)
# - cni bin_dir and conf_dir locations
Get-Content $Env:ProgramFiles\containerd\config.toml
# Register and start service
containerd.exe --register-service
Start-Service containerd
3. Install BuildKit
Note: Ensure you have updated to the latest version of Docker Desktop.
Run the following script to download and extract the latest BuildKit release.
$version = "v0.13.0" # specify the release version, v0.13+
$arch = "amd64" # arm64 binary available too
curl.exe -LO https://github.com/moby/buildkit/releases/download/$version/buildkit-$version.windows-$arch.tar.gz
# there could be another `.\bin` directory from containerd instructions
# you can move those
mv bin bin2
tar.exe xvf .\buildkit-$version.windows-$arch.tar.gz
## x bin/
## x bin/buildctl.exe
## x bin/buildkitd.exe
Next, run the following commands to add the BuildKit binaries to your Program Files directory, then add them to the PATH so they can be called directly.
# after the binaries are extracted in the bin directory
# move them to an appropriate path in your $Env:PATH directories or:
Copy-Item -Path ".\bin" -Destination "$Env:ProgramFiles\buildkit" -Recurse -Force
# add `buildkitd.exe` and `buildctl.exe` binaries in the $Env:PATH
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + `
[IO.Path]::PathSeparator + "$Env:ProgramFiles\buildkit"
[Environment]::SetEnvironmentVariable( "Path", $Path, "Machine")
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + `
[System.Environment]::GetEnvironmentVariable("Path","User")
Run buildkitd.exe. You should expect to see something as shown in Figure 2:
Figure 2: Successfully starting buildkitd without any errors in the logs.
Now we can set up buildx (the BuildKit client) to use our BuildKit instance. Here we will create a builder that points to the BuildKit instance we just started, by running:
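docker buildx create --name buildkit-exp --use --driver=remote npipe:////./pipe/buildkitd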
Here we are creating a new instance of a builder and pointing it to our BuildKit instance. BuildKit will listen on npipe:////./pipe/buildkitd.
Notice that we also name the builder; here, we call it buildkit-exp, but you can name it whatever you want. Just remember to add --use to set it as the current builder.
Let’s test our connection by running docker buildx inspect (Figure 3):
Figure 3: Docker buildx inspect shows that our new builder is connected.
Run the following commands to create a sample_dockerfile directory and change into it.
mkdir sample_dockerfile
cd sample_dockerfile
Run the following script to write a simple Dockerfile and a hello.txt file into the sample_dockerfile directory.
Set-Content Dockerfile @"
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
USER ContainerAdministrator
COPY hello.txt C:/
RUN echo "Goodbye!" >> hello.txt
CMD ["cmd", "/C", "type C:\\hello.txt"]
"@
Set-Content hello.txt @"
Hello from buildkit!
This message shows that your installation appears to be working correctly.
"@
Now we can use buildx to build our image and push it to the registry (see Figure 5):
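# The builder created earlier is already selected via --use
docker buildx build --push -t <HUB ACCOUNT NAME>/hello-buildkit .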
Figure 5: Here we can see our build running to a successful completion.
If you are utilizing Docker Hub as your registry, run docker login before running buildx build (Figure 6).
Figure 6: Successful login to Docker Hub so we can publish our images.
Congratulations! You can now run containers with standard docker run:
docker run <HUB ACCOUNT NAME>/hello-buildkit
Get started with BuildKit
We encourage you to test out the released experimental Windows BuildKit support v0.13.0. To start out, feel free to follow the documentation or blog, which will walk you through building a simple Windows image with BuildKit. File feedback and issues at Issues · moby/buildkit (github.com) tagged with area/windows.
Dockerfiles are fundamental tools for developers working with Docker, serving as a blueprint for creating Docker images. These text documents contain all the commands a user could call on the command line to assemble an image. Understanding and effectively utilizing Dockerfiles can significantly streamline the development process, allowing for the automation of image creation and ensuring consistent environments across different stages of development. Dockerfiles are pivotal in defining project environments, dependencies, and the configuration of applications within Docker containers.
With new versions of the BuildKit builder toolkit, Docker Buildx CLI, and Dockerfile frontend for BuildKit (v1.7.0), developers now have access to enhanced Dockerfile capabilities. This blog post delves into these new Dockerfile capabilities and explains how you can leverage them in your projects to further optimize your Docker workflows.
Versioning
Before we get started, here’s a quick reminder of how Dockerfile is versioned and what you should do to update it.
Although most projects use Dockerfiles to build images, BuildKit is not limited only to that format. BuildKit supports multiple different frontends for defining the build steps for BuildKit to process. Anyone can create these frontends, package them as regular container images, and load them from a registry when you invoke the build.
With the new release, we have published two such images to Docker Hub: docker/dockerfile:1.7.0 and docker/dockerfile:1.7.0-labs.
To use these frontends, you need to specify a #syntax directive at the beginning of the file to tell BuildKit which frontend image to use for the build. Here we have set it to use the latest of the 1.x.x major version. For example:
#syntax=docker/dockerfile:1
FROM alpine
...
This means that BuildKit is decoupled from the Dockerfile frontend syntax. You can start using new Dockerfile features right away without worrying about which BuildKit version you’re using. All the examples described in this article will work with any version of Docker that supports BuildKit (the default builder as of Docker 23), as long as you define the correct #syntax directive on the top of your Dockerfile.
Variable expansions
When you write Dockerfiles, build steps can contain variables that are defined using the build argument (ARG) and environment variable (ENV) instructions. The difference between build arguments and environment variables is that environment variables are kept in the resulting image and persist when a container is created from it.
When you use such variables, you most likely use ${NAME} or, more simply, $NAME in COPY, RUN, and other commands.
You might not know that Dockerfile supports two forms of Bash-like variable expansion:
${variable:-word}: Sets a value to word if the variable is unset
${variable:+word}: Sets a value to word if the variable is set
Up to this point, these special forms were not that useful in Dockerfiles because the default value of ARG instructions can be set directly:
FROM alpine
ARG foo="default value"
If you are an expert in various shell applications, you know that Bash and other tools usually have many additional forms of variable expansion to ease the development of your scripts.
In Dockerfile v1.7, we have added:
${variable#pattern} and ${variable##pattern} to remove the shortest or longest prefix from the variable’s value.
${variable%pattern} and ${variable%%pattern} to remove the shortest or longest suffix from the variable’s value.
${variable/pattern/replacement} to replace the first occurrence of a pattern
${variable//pattern/replacement} to replace all occurrences of a pattern
How these rules are used might not be completely obvious at first. So, let’s look at a few examples seen in actual Dockerfiles.
For example, projects often can’t agree on whether versions for downloading your dependencies should have a “v” prefix or not. The following allows you to get the format you need:
# example VERSION=v1.2.3
ARG VERSION=${VERSION#v}
# VERSION is now '1.2.3'
In the next example, multiple variants of the same value are used by the same project.
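For instance, a download URL might need the tagged version with its “v” prefix in one path segment and the bare version in the file name. A minimal sketch, with a hypothetical URL:
#syntax=docker/dockerfile:1.7
FROM alpine
# The release tag keeps the "v" prefix, while the tarball name drops it
ARG VERSION=v1.7.13
ADD https://example.com/releases/download/${VERSION}/app-${VERSION#v}-linux-amd64.tar.gz /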
To configure different command behaviors for multi-platform builds, BuildKit provides useful built-in variables like TARGETOS and TARGETARCH. Unfortunately, not all projects use the same values. For example, in containers and the Go ecosystem, we refer to 64-bit ARM architecture as arm64, but sometimes you need aarch64 instead.
In this case, the URL also uses a custom name for the AMD64 architecture. To pass a variable through multiple expansions, use another ARG definition with an expansion from the previous value. You could also write all the definitions on a single line, as ARG allows multiple parameters, but that may hurt readability.
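A sketch of that pattern, assuming a download site that names architectures aarch64 and x86_64 (the URL is hypothetical):
#syntax=docker/dockerfile:1.7
FROM --platform=$BUILDPLATFORM alpine AS fetch
ARG TARGETARCH
# Map Docker's arm64/amd64 names to the names the download site uses
ARG ARCH=${TARGETARCH/arm64/aarch64}
ARG ARCH=${ARCH/amd64/x86_64}
ADD https://example.com/releases/app-linux-${ARCH}.tar.gz /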
Note that the example above is written in a way that if a user passes their own --build-arg ARCH=value, then that value is used as-is.
Now, let’s look at how new expansions can be useful in multi-stage builds.
One of the techniques described in “Advanced multi-stage build patterns” shows how build arguments can be used so that different Dockerfile commands run depending on the build-arg value. For example, you can use that pattern if you build a multi-platform image and want to run additional COPY or RUN commands only for specific platforms. If this method is new to you, you can learn more about it from that post.
In summarized form, the idea is to define a global build argument and then define build stages that use the build argument value in the stage name while pointing to the base of your target stage via the build-arg name.
Old example:
ARG BUILD_VERSION=1
FROM alpine AS base
RUN …
FROM base AS branch-version-1
RUN touch version1
FROM base AS branch-version-2
RUN touch version2
FROM branch-version-${BUILD_VERSION} AS after-condition
FROM after-condition
RUN …
When using this pattern for multi-platform builds, one of the limitations is that all the possible values for the build-arg need to be defined in your Dockerfile. This is problematic, as we want the Dockerfile to build on any platform rather than being limited to a specific set.
You can see other examples here and here of Dockerfiles where dummy stage aliases must be defined for all architectures, and no other architecture can be built. Instead, the pattern we would like to use is that there is one architecture that has a special behavior, and everything else shares another common behavior.
With new expansions, we can write this to demonstrate running special commands only on RISC-V, which is still somewhat new and may need custom behavior:
#syntax=docker/dockerfile:1.7
ARG ARCH=${TARGETARCH#riscv64}
ARG ARCH=${ARCH:+"common"}
ARG ARCH=${ARCH:-$TARGETARCH}
FROM --platform=$BUILDPLATFORM alpine AS base-common
ARG TARGETARCH
RUN echo "Common build, I am $TARGETARCH" > /out
FROM --platform=$BUILDPLATFORM alpine AS base-riscv64
ARG TARGETARCH
RUN echo "Riscv only special build, I am $TARGETARCH" > /out
FROM base-${ARCH} AS base
Let’s look at these ARCH definitions more closely.
The first sets ARCH to TARGETARCH but removes riscv64 from the value.
Next, as we described previously, we don’t actually want the other architectures to use their own values but instead want them all to share a common value. So, we set ARCH to common except if it was cleared from the previous riscv64 rule.
Now, if we still have an empty value, we default it back to $TARGETARCH.
The last definition is optional, as we would already have a unique value for both cases, but it makes the final stage name base-riscv64 nicer to read.
Additional examples of including multiple conditions with shared conditions, or conditions based on architecture variants can be found in this GitHub Gist page.
Comparing this example to the initial example of conditions between stages, the new pattern isn’t limited to just controlling the platform differences of your builds but can be used with any build-arg. If you have used this pattern before, then you can effectively now define an “else” clause, whereas previously, you were limited to only “if” clauses.
Copy with keeping parent directories
The following feature has been released in the “labs” channel. Define the following at the top of your Dockerfile to use this feature.
#syntax=docker/dockerfile:1.7-labs
When copying files in your Dockerfile, you might, for example, write:
COPY app/file /to/dest/dir/
This example means the source file is copied directly to the destination directory. If your source path was a directory, all the files inside that directory would be copied directly to the destination path.
What if you have a file structure like the following (file names are illustrative):
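├── app1
│   ├── docs
│   │   └── manual.md
│   └── src
│       └── server.go
└── app2
    └── src
        └── client.go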
You want to copy only the files in app1/src, but keep their relative path, so that the final file at the destination is /to/dest/dir/app1/src/server.go and not just /to/dest/dir/server.go.
With the new COPY --parents flag, you can write:
COPY --parents /app1/src/ /to/dest/dir/
This will copy the files inside the src directory and recreate the app1/src directory structure for these files.
Things get more powerful when you start to use wildcard paths. To copy the src directories for both apps into their respective locations, you can write:
COPY --parents */src/ /to/dest/dir/
This will create both /to/dest/dir/app1 and /to/dest/dir/app2, but it will not copy the docs directory. Previously, this kind of copy was not possible with a single command. You would have needed multiple copies for individual files (as shown in this example) or used some workaround with the RUN --mount instruction instead.
You can also use double-star wildcard (**) to match files under any directory structure. For example, to copy only the Go source code files anywhere in your build context, you can write:
COPY --parents **/*.go /to/dest/dir/
If you are thinking about why you would need to copy specific files instead of just using COPY ./ to copy all files, remember that your build cache gets invalidated when you include new files in your build. If you copy all files, the cache gets invalidated when any file is added or changed, whereas if you copy only Go files, only changes in these files influence the cache.
The new --parents flag is not only for COPY instructions from your build context; you can also use it in multi-stage builds when copying files between stages using COPY --from.
Note that with COPY --from syntax, all source paths are expected to be absolute, meaning that if the --parents flag is used with such paths, they will be fully replicated as they were in the source stage. That may not always be desirable, and instead, you may want to keep some parents but discard and replace others. In that case, you can use a special /./ relative pivot point in your source path to mark which parents you wish to copy and which should be ignored. This special path component resembles how rsync works with the --relative flag.
#syntax=docker/dockerfile:1.7-labs
FROM ... AS base
RUN ./generate-lot-of-files -o /out/
# /out/usr/bin/foo
# /out/usr/lib/bar.so
# /out/usr/local/bin/baz
FROM scratch
COPY --from=base --parents /out/./**/bin/ /
# /usr/bin/foo
# /usr/local/bin/baz
The example above shows how only the bin directories are copied from the collection of files that the intermediate stage generated, while all the directories keep their paths relative to the out directory.
Exclusion filters
The following feature has been released in the “labs” channel. Define the following at the top of your Dockerfile to use this feature:
#syntax=docker/dockerfile:1.7-labs
Another related case when moving files in your Dockerfile with COPY and ADD instructions is when you want to move a group of files but exclude a specific subset. Previously, your only options were to use RUN --mount or try to define your excluded files inside a .dockerignore file.
.dockerignore files, however, are not a good solution for this problem: they only list files excluded from the client-side build context (not from builds of remote Git/HTTP URLs), and they are limited to one per Dockerfile. You should use them similarly to .gitignore, to mark files that are never part of your project, but not as a way to define your application-specific build logic.
With the new --exclude=[pattern] flag, you can now define such exclusion filters for your COPY and ADD commands directly in the Dockerfile. The pattern uses the same format as .dockerignore.
The following example copies all the files in a directory except Markdown files:
COPY --exclude=*.md app /dest/
You can use the flag multiple times to add multiple filters. The next example excludes Markdown files and also a file called README:
COPY --exclude=*.md --exclude=README app /dest/
Double-star wildcards exclude not only Markdown files in the copied directory but also in any subdirectory:
COPY --exclude=**/*.md app /dest/
As in .dockerignore files, you can also define exceptions to the exclusions with ! prefix. The following example excludes all Markdown files in any copied directory, except if the file is called important.md — in that case, it is still copied.
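COPY --exclude=**/*.md --exclude=!**/important.md app /dest/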
This double negative may be confusing initially, but note that this is a reversal of the previous exclude rule, and “include patterns” are defined by the source parameter of the COPY instruction.
When using --exclude together with previously described --parents copy mode, note that the exclude patterns are relative to the copied parent directories or to the pivot point /./ if one is defined. See the following directory structure for example:
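assets
├── app1
│   ├── icons32x32
│   ├── icons64x64
│   └── notes
├── app2
│   └── icons32x32
└── testapp
    └── icons32x32
A command along these lines (directory names are illustrative) copies only the icons directories while excluding everything under testapp:
COPY --parents --exclude=testapp ./assets/./**/icons* /dest/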
This command would create the directory structure below. Note that only directories with the icons prefix were copied, the root parent directory assets was skipped as it was before the relative pivot point, and additionally, testapp was not copied as it was defined with an exclusion filter.
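/dest
├── app1
│   ├── icons32x32
│   └── icons64x64
└── app2
    └── icons32x32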
We hope this post gave you ideas for improving your Dockerfiles and that the patterns shown here will help you describe your build more efficiently. Remember that your Dockerfile can start using all these features today by defining the #syntax line on top, even if you haven’t updated to the latest Docker yet.
For a full list of other features in the new BuildKit, Buildx, and Dockerfile releases, check out the changelogs: