It’s now been a couple of weeks since we released the new Docker DX extension for Visual Studio Code. This launch reflects a deeper collaboration between Docker and Microsoft to better support developers building containerized applications.
Over the past few weeks, you may have noticed some changes to your Docker extension in VS Code. We want to take a moment to explain what’s happening—and where we’re headed next.
What’s Changing?
The original Docker extension in VS Code is being migrated to the new Container Tools extension, maintained by Microsoft. It’s designed to make it easier to build, manage, and deploy containers, streamlining the container development experience directly inside VS Code.
As part of this partnership, we decided to bundle the new Docker DX extension with the existing Docker extension so that it installs automatically, making the process seamless.
While the automatic installation was intended to simplify the experience, we realize it may have caught some users off guard. To provide more clarity and choice, the next release will make the Docker DX extension an opt-in installation, giving you full control over when and how you use it.
What’s New from Docker?
Docker is introducing the new Docker DX extension, focused on delivering a best-in-class authoring experience for Dockerfiles, Compose files, and Bake files.
Key features include:
Dockerfile linting: Get build warnings and best-practice suggestions directly from BuildKit and Buildx—so you can catch issues early, right inside your editor.
Image vulnerability remediation (experimental): Automatically flag references to container images with known vulnerabilities, directly in your Dockerfiles.
Bake file support: Enjoy code completion, variable navigation, and inline suggestions when authoring Bake files—including the ability to generate targets based on your Dockerfile stages.
Compose file outline: Easily navigate and understand complex Compose files with a new outline view in the editor.
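As a quick illustration of the Dockerfile linting described above, a snippet like the following (a made-up example) gets an inline warning because the shell-form CMD trips BuildKit’s JSONArgsRecommended check:
FROM alpine
# Shell-form CMD; the editor suggests switching to the JSON (exec) form
CMD echo "Hello, world!"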
Better Together
These two extensions are designed to work side-by-side, giving you the best of both worlds:
Powerful tooling to build, manage, and deploy your containers
Smart, contextual authoring support for Dockerfiles, Compose files, and Bake files
And the best part? Both extensions are free and fully open source.
Thank You for Your Patience
We know changes like this can be disruptive. While our goal was to make the transition as seamless as possible, we recognize that the approach caused some confusion, and we sincerely apologize for the lack of early communication.
The teams at Docker and Microsoft are committed to delivering the best container development experience possible—and this is just the beginning.
Where Docker DX is Going Next
At Docker, we’re proud of the contributions we’ve made to the container ecosystem, including Dockerfiles, Compose, and Bake.
We’re committed to ensuring the best possible experience when editing these files in your IDE, with instant feedback while you work.
Here’s a glimpse of what’s coming:
Expanded Dockerfile checks: More best-practice validations, actionable tips, and guidance—surfaced right when you need them.
Stronger security insights: Deeper visibility into vulnerabilities across your Dockerfiles, Compose files, and Bake configurations.
Improved debugging and troubleshooting: Soon, you’ll be able to live debug Docker builds—step through your Dockerfile line-by-line, inspect the filesystem at each stage, see what’s cached, and troubleshoot issues faster.
We Want Your Feedback!
Your feedback is critical in helping us improve the Docker DX extension and your overall container development experience.
If you encounter any issues or have ideas for enhancements you’d like to see, please let us know.
We’re excited to announce the General Availability of Docker Bake with Docker Desktop 4.38! This powerful build orchestration tool takes the hassle out of managing complex builds and offers simplicity, flexibility, and performance for teams of all sizes.
What is Docker Bake?
Docker Bake is an orchestration tool that streamlines Docker builds, similar to how Compose simplifies managing runtime environments. With Bake, you can define build stages and deployment environments in a declarative file, making complex builds easier to manage. It also leverages BuildKit’s parallelization and optimization features to speed up build times.
While Dockerfiles are excellent for defining image build steps, teams often need to build multiple images and execute helper tasks like testing, linting, and code generation. Traditionally, this meant juggling numerous docker build commands with their own options and arguments – a tedious and error-prone process.
Bake changes the game by introducing a declarative file format that encapsulates all options and image dependencies, referred to as targets. Additionally, Bake’s ability to parallelize and deduplicate work ensures faster and more efficient builds.
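To make this concrete, here is a minimal, hypothetical Bake file (docker-bake.hcl) defining two images that can be built together with a single docker buildx bake command; the target and tag names are illustrative:
group "default" {
  targets = ["api", "worker"]
}

target "api" {
  context    = "./api"
  dockerfile = "Dockerfile"
  tags       = ["myorg/api:latest"]
}

target "worker" {
  context    = "./worker"
  dockerfile = "Dockerfile"
  tags       = ["myorg/worker:latest"]
}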
Why should you use Bake?
Challenges with complex Docker Build configuration:
Managing long, complex build commands filled with countless flags and environment variables.
Tedious workflows for building multiple images.
Difficulty declaring builds for specific targets or environments.
Reliance on scripts or third-party tools to keep builds manageable.
Docker Bake tackles these challenges by providing a simple, declarative way to manage complex builds.
Key benefits of Docker Bake
Simplicity: Replace complex chains of Docker build commands and scripts with a single docker buildx bake command while maintaining clear, version-controlled configuration files that are easy to understand and modify.
Flexibility: Express sophisticated build logic through HCL syntax and matrix builds, enabling dynamic configurations that adapt to different environments and requirements while supporting custom functions for advanced use cases.
Consistency: Maintain standardized build configurations across teams and environments through version-controlled files and inheritance patterns, eliminating environment-specific build issues and reducing configuration drift.
Performance: Automatically parallelize independent builds and eliminate redundant operations through context deduplication and intelligent caching, dramatically reducing build times for complex multi-image workflows.
Figure 1: One simple Docker buildx bake command to replace all the flags and environment variables.
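To make the comparison in the figure concrete, here is a sketch of the kind of commands a Bake file replaces (image names, flags, and versions are illustrative):
# Before: one long command per image, with repeated flags
docker build -t myorg/api:1.2.3 --build-arg VERSION=1.2.3 --platform linux/amd64 ./api
docker build -t myorg/worker:1.2.3 --build-arg VERSION=1.2.3 --platform linux/amd64 ./worker

# After: the options live in docker-bake.hcl, and one command builds everything
docker buildx bake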
Use cases for Docker Bake
1. Monorepo and Image Bakery
Docker Bake can help developers efficiently manage and build multiple related Docker images from a single source repository. Plus, they can leverage shared configurations and automated dependency handling to enforce organizational standards.
Development Efficiency: Teams can maintain consistent build logic across dozens or hundreds of microservices in a single repository, reducing configuration drift and maintenance overhead.
Resource Optimization: Shared base images and contexts are automatically deduplicated, dramatically reducing build times and storage costs.
Standardization: Enforce organizational standards through inherited configurations, ensuring all services follow security, tagging, and testing requirements.
Change Management: A single source of truth for build configurations makes it easier to implement organization-wide changes like base image updates or security patches.
2. Compose users
Docker Bake provides seamless compatibility with existing docker-compose.yml files, allowing direct use of your current configurations. Existing Compose users are able to get started using Bake with minimal effort.
Gradual Adoption: Teams can incrementally adopt advanced build features while still leveraging their existing compose workflows and knowledge.
Development Consistency: Use the same configuration for both local development (via compose) and production builds (via Bake), eliminating “works on my machine” issues.
Enhanced Capabilities: Access powerful features like matrix builds and HCL expressions while maintaining compatibility with familiar compose syntax.
CI/CD Integration: Seamlessly integrate with existing CI/CD pipelines that already understand compose files while adding Bake’s advanced build capabilities.
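For example, if your project already has a Compose file, you can point Bake at it directly and build every service it defines (the filename is whatever your project already uses):
docker buildx bake -f docker-compose.yml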
3. Complex build configurations
Cross-Platform Compatibility: Matrix builds enable teams to efficiently manage builds across multiple architectures, OS versions, and dependency combinations from a single configuration.
Dynamic Adaptation: HCL expressions allow builds to adapt to different environments, git branches, or CI variables without maintaining multiple configurations.
Build Optimization: Custom functions enable sophisticated logic for things like version calculation, tag generation, and conditional builds based on git history.
Quality Control: Variable validation and inheritance ensure consistent configuration across complex build scenarios, reducing errors and maintenance burden.
Scale Management: Groups and targets help organize large-scale build systems with dozens or hundreds of permutations, making them manageable and maintainable.
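To give a flavor of the matrix builds and HCL expressions mentioned above, here is a small, hypothetical Bake target that fans out over two base image versions (the variable, argument, and tag names are made up):
target "app" {
  name = "app-${version}"
  matrix = {
    version = ["3.19", "3.20"]
  }
  dockerfile = "Dockerfile"
  args = {
    ALPINE_VERSION = version
  }
  tags = ["myorg/app:${version}"]
}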
4. Docker Build Cloud
With Bake-optimized builds as the foundation, developers can achieve more efficient Docker Build Cloud performance and faster builds.
Enhanced Docker Build Cloud Performance: Instantly parallelize matrix builds across cloud infrastructure, turning hour-long build pipelines into minutes without managing build infrastructure.
Resource Optimization: Leverage Build Cloud’s distributed caching and deduplication to dramatically reduce bandwidth usage and build times, which is especially valuable for remote teams.
Cost Management: Bake’s precise target definitions mean you only consume Docker Build Cloud resources for exactly what needs to be built.
Developer Experience: Teams can run complex multi-architecture builds without powerful local machines, enabling development from any device while maintaining build performance.
CI/CD Enhancement: Offload resource-intensive builds from CI runners to Build Cloud, reducing CI costs and queue times while improving reliability.
What’s New in Bake for GA?
Docker Bake has been an experimental feature for several years, allowing us to refine and improve it based on user feedback. So, there is already a strong set of ingredients that users love, such as targets and groups, variables, HCL Expression Support, inheritance capabilities, matrix targets, and additional contexts. With this GA release, Bake is now ready for production use, and we’ve added several enhancements to make it more efficient, secure, and easier to use:
Deduplicated Context Transfers: Significantly speeds up build pipelines by eliminating redundant file transfers when multiple targets share the same build context.
Entitlements: Enhances security and resource management by providing fine-grained control over what capabilities and resources builders can access during the build process.
Composable Attributes: Simplifies configuration management by allowing teams to define reusable attribute sets that can be mixed, matched, and overridden across different targets.
Variable Validation: Prevents wasted time and resources by catching configuration errors before the actual build process begins.
Deduplicate context transfers
When you build targets concurrently using groups, build contexts are loaded independently for each target. Previously, if the same context was used by multiple targets in a group, that context was transferred once for each time it was used. This could significantly impact build time, depending on your build configuration.
The workaround required users to define a named context that loaded the context files and then have each target reference that named context. Bake now handles this automatically.
Bake deduplicates context transfers across targets that share the same context, so the context is transferred only once, no matter how many targets use it. This more efficient approach leads to much faster build times.
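As a hypothetical illustration, the two targets below share the same context (the current directory); with this change, Bake transfers that context to the builder only once when the group is built:
group "default" {
  targets = ["app", "tests"]
}

target "app" {
  context    = "."
  dockerfile = "Dockerfile"
}

target "tests" {
  context    = "."
  dockerfile = "Dockerfile.tests"
}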
Read more about how to speed up your build time in our docs.
Entitlements
Bake now includes entitlements to control access to privileged operations, aligning it with docker build. This prevents unintended side effects and security risks. If Bake detects a potentially risky operation, such as a privileged execution request or an attempt to access files outside the current directory, the build fails unless it is explicitly allowed.
To be consistent, the Bake command now supports the --allow=ENTITLEMENT flag to grant access to additional entitlements. The following entitlements are currently supported for Bake.
Build equivalents
--allow network.host: Allows executions with host networking.
--allow security.insecure: Allows executions without sandbox (i.e., --privileged).
File system: Grants filesystem access for builds that need to access files outside the working directory. This impacts context, output, cache-from, cache-to, dockerfile, and secret (see the example command after this list).
--allow fs=<path|*>: Grants read and write access to files outside the working directory.
--allow fs.read=<path|*>: Grants read access to files outside the working directory.
--allow fs.write=<path|*>: Grants write access to files outside the working directory.
SSH
--allow ssh: Allows exposing the SSH agent.
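For example, granting a Bake build read access to a directory outside the working directory might look like this (the path and target name are illustrative):
docker buildx bake --allow fs.read=/shared/certs app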
Composable attributes
Several attributes previously had to be defined in CSV (e.g. type=provenance,mode=min). These were challenging to read and couldn’t be easily overridden. The following can now be defined as structured objects:
target "app" {
attest = [
{ type = "provenance", mode = "max" },
{ type = "sbom", disabled = true}
]
cache-from = [
{ type = "registry", ref = "user/app:cache" },
{ type = "local", src = "path/to/cache"}
]
cache-to = [
{ type = "local", dest = "path/to/cache" },
]
output = [
{ type = "oci", dest = "../out.tar" },
{ type = "local", dest="../out"}
]
secret = [
{ id = "mysecret", src = "path/to/secret" },
{ id = "mysecret2", env = "TOKEN" },
]
ssh = [
{ id = "default" },
{ id = "key", paths = ["path/to/key"] },
]
}
As such, the attributes are now composable. Teams can mix, match, and override attributes across different targets, which simplifies configuration management.
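As a brief, hypothetical sketch of that composability, a target can inherit another target’s attributes and override just one of them (the target names are made up):
target "base" {
  output = [{ type = "image" }]
}

target "release" {
  inherits = ["base"]
  # Override the inherited output with an OCI tarball instead
  output = [{ type = "oci", dest = "release.tar" }]
}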
Variable validation
Bake now supports validation for variables, similar to Terraform, to help developers catch and resolve configuration errors early. The GA release of Bake supports the following use cases.
Basic validation
To verify that the value of a variable conforms to an expected type, value range, or other condition, you can define custom validation rules using the validation block.
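For instance, here is a minimal sketch of such a rule (the variable name and message are illustrative):
variable "TAG" {
  default = "latest"
  validation {
    condition     = TAG != ""
    error_message = "TAG must not be empty."
  }
}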
You can reference other Bake variables in your condition expression, enabling validations that enforce dependencies between variables. This ensures that dependent variables are set correctly before proceeding.
variable "FOO" {}
variable "BAR" {
validation {
condition = FOO != ""
error_message = "BAR requires FOO to be set."
}
}
target "default" {
args = {
BAR = BAR
}
}
New Bake options
In addition to updating the Bake configuration, we’ve added a new --list option. Previously, if you were unfamiliar with a project or wanted a reminder of the supported targets and variables, you had to read through the file. Now, the --list option lets you quickly query them. It also supports a JSON format option if you need programmatic access.
List target
Quickly get a list of the targets available in your Bake configuration.
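Assuming a Bake file in the current directory, the commands look like this (check docker buildx bake --help for the exact JSON format syntax in your version):
# List the targets defined in the Bake configuration
docker buildx bake --list=targets
# List the variables the configuration accepts
docker buildx bake --list=variables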
These improvements build on a powerful feature set, ensuring Bake is both reliable and future-ready.
Get started with Docker Bake
Ready to simplify your builds? Update to Docker Desktop 4.38 today and start using Bake. With its declarative syntax and advanced features, Docker Bake is here to help you build faster, more efficiently, and with less effort.
Explore the documentation to learn how to create your first Bake file and experience the benefits of streamlined builds firsthand.
Today, we’re excited to announce the release of Docker Build checks with Docker Desktop 4.33. Docker Build checks help your team learn and follow best practices for building container images. When you run a Docker Build, you will get a list of warnings for any check violations detected in your build. Taking a proactive approach and resolving Build warnings and issues early will save you time and headaches downstream.
Why did we create Docker Build checks?
During conversations with developers, we found that many struggle to learn and follow the best practices for building container images. According to our 2024 State of Application Development Report, 35% of Docker users reported creating and editing Dockerfiles as one of their top three tasks. However, 55% of respondents reported that creating Dockerfiles is the task they most often seek support for.
Developers often don’t have the luxury of reading through the Docker Build docs, making the necessary changes to get things working, and then moving on. A Docker build might “work” when you run docker build, but a poorly written Dockerfile may introduce quality issues. For example, it may be:
Hard to maintain or update
Prone to hidden and unexpected bugs
Sub-optimal in performance
In our conversations with Docker users, we heard that they want to optimize their Dockerfiles to improve build performance, aren’t aware of current best practices, and would like to be guided as they build.
Investigating and fixing build issues wastes time. We created Docker Build checks to empower developers to write well-structured Dockerfiles from the get-go and learn from existing best practices. With Build checks, your team spends less time on build issues and more on innovation and coding.
Why should you use Docker Build checks?
You want to write better Dockerfiles and save time!
We have collected a set of best practices from the community of build experts and codified them into Docker Build tooling. You can use Docker Build checks to evaluate all stages of your local and CI workflows, including multi-stage builds and Bake, and dive deep into results in the Docker Desktop Builds view. You can also choose which rules to skip.
You can access Docker Build checks in the CLI and in the Docker Desktop Builds view.
More than just linting: Docker Build checks are powerful and fast
Linting tools typically just evaluate the text files against a set of rules. As a native part of Docker Build, the rules in Docker Build checks are more powerful and accurate than just linting. Docker Build checks evaluate the entire build, including the arguments passed in and the base images used. These checks are quick enough to be run in real-time as you edit your Dockerfile. You can quickly evaluate a build without waiting for a full build execution.
Check your local builds
A good practice is to evaluate a new or updated Dockerfile before committing or sharing your changes. Running docker build will now give you an overview of issues and warnings in your Dockerfile.
Figure 1: A Docker Build with four check warnings displayed.
To get more information about these specific issues, you can specify the debug flag to the Docker CLI with docker --debug build. This information includes the type of warning, where it occurs, and a link to more information on how to resolve it.
Figure 2: Build debug output for the check warnings.
Quickly check your build
Running these checks during a build is great, but it can be time-consuming to wait for the complete build to run each time when you’re making changes or fixing issues. For this reason, we added the --check flag as part of the build command.
# The check flag can be added anywhere as part of your build command
docker build . --check
docker build --check .
docker build --build-arg VERSION=latest --platform linux/arm64 . --check
As illustrated in the following figure, appending the flag to your existing build command will do the same full evaluation of the build configuration without executing the entire build. This faster feedback typically completes in less than a second, making for a smoother development process.
Figure 3: Running check of build.
Check your CI builds
By default, running a Docker build with warnings will not cause the build to fail (return a non-zero exit code). However, to catch any regressions in your CI builds, add the following declarations to instruct the checks to generate errors.
# syntax=docker/dockerfile:1
# check=error=true
FROM alpine
CMD echo "Hello, world!"
Checking multi-stage builds in CI
During a build, only the specified stage/target, including its dependencies, is executed. To get a complete evaluation of your Dockerfile, we recommend adding a separate check step in your workflow, similar to how you would run automated tests before executing the full build.
If any warnings are detected, the check returns a non-zero exit code, which causes the workflow to fail and catches any issues.
docker build --check .
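As a rough sketch, such a step in a GitHub Actions workflow could look like this (the step name is illustrative, and a checkout step is assumed earlier in the job):
      - name: Validate Dockerfile with Build checks
        run: docker build --check .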
Checking builds in Docker Build Cloud
Of course, this also works seamlessly with Docker Build Cloud, both locally and through CI. Use your existing cloud builders to evaluate your builds. Your team now has the combined benefit of Docker Build Cloud performance with the reassurance that the build will align with best practices. In fact, as we expand our checks, you should see even better performance from your Docker Build Cloud builds.
Figure 4: Running checks in Docker Build Cloud.
Configure rules
You have the flexibility to configure rules in Build checks with a skip argument. You can also specify skip=all or skip=none to toggle the rules on and off. Here’s an example of skipping the JSONArgsRecommended and StageNameCasing rules:
# syntax=docker/dockerfile:1
# check=skip=JSONArgsRecommended,StageNameCasing
FROM alpine AS BASE_STAGE
CMD echo "Hello, world!"
Dive deep into Docker Desktop Builds view
In Docker Desktop Builds view, you can see the output of the build warnings. Locating the cause of warnings in Dockerfiles and understanding how to resolve them quickly is now easy.
As with build errors, warnings are shown inline with your Dockerfile when inspecting a build in Docker Desktop:
Figure 5: Build checks warnings in Docker Desktop Builds view.
What’s next?
More checks
We are excited about the new Build checks to help you apply best practices to your Dockerfiles, but this is just the start. In addition to the current set of checks, we plan to add even more to provide a more comprehensive evaluation of your builds. Further, we look forward to including custom checks and policies for your Docker builds.
IDE integration
The earlier you identify issues in your builds, the easier and less costly it is to resolve them. We plan to integrate Build checks with your favorite IDEs so you can get real-time feedback as you type.
Figure 6: Check violations displaying in VS Code.
GitHub Actions and Docker Desktop
You can already see Build check warnings in Docker Desktop, and more detailed insights are coming soon. As you may have heard, we recently announced the beta release of build inspection for GitHub Actions, and we plan to build on this new functionality to include support for investigating check warnings.
Get started now
To get started with Docker Build checks, upgrade to Docker Desktop 4.33 today and try them out with your existing Dockerfiles. Head over to our documentation for a more detailed breakdown of Build checks.
Learn more
Authenticate and update to receive your subscription level’s newest Docker Desktop features.
What else is new in Docker Desktop 4.33? GA Releases of Docker Debug and Docker Build Checks Plus Enhanced Configuration Integrity Checks.
We’re excited to announce the beta release of a new feature for inspecting GitHub Actions builds directly in Docker Desktop 4.31.
Centralized CI/CD environments, such as GitHub Actions, are popular and useful, giving teams a single location to build, test, and verify their deployments. However, remote processes, such as builds in GitHub Actions, often lack visibility into what’s happening in your Docker builds. This means developers often need additional builds and steps to locate the root cause of issues, making diagnosing and resolving build issues challenging.
To help, we’re introducing enhancements in GitHub Actions Summary and Docker Desktop to provide a deeper understanding of your Docker builds in GitHub Actions.
Get a high-level view with Docker Build Summary in GitHub Actions
We now provide Docker Build Summary, a GitHub Actions Summary that displays reports and aggregates build information. The Docker Build Summary offers additional details about your builds in GitHub Actions, including a high-level summary of performance metrics, such as build duration, and cache utilization (Figure 1). Users of docker/build-push-action and docker/bake-action will automatically receive Docker Build Summaries.
Key benefits
Identify build failures: Immediate access to error details eliminates the need to sift through logs.
Performance metrics: See exactly how long the Docker Build stage took and assess if it met expectations.
Cache utilization: View the percentage of the build that used the cache to identify performance impacts.
Configuration details: Access information on build inputs to understand what ran during build time.
Figure 1: Animated view of Docker Build Summary in GitHub Actions, showing Build details, including Build status, error message, metrics, Build inputs, and more.
If further investigation is needed, we package your build results in a .dockerbuild archive file. This file can be imported to the Build View in Docker Desktop, providing comprehensive build details, including timings, dependencies, logs, and traces.
Import and inspect GitHub Actions Builds in Docker Desktop
In Docker Desktop, navigate to the Builds View tab and use the new Import Builds button. Select the .dockerbuild file you downloaded to access all the details about your remote build as if you ran it locally (Figure 2).
Figure 2: Animated view of Docker Desktop, showing steps to navigate to the Builds View tab and use the new Import Builds button.
You can view in-depth information about your build execution, including error lines in your Dockerfile, build timings, cache utilization, and OpenTelemetry traces. This comprehensive view helps diagnose complex builds efficiently.
For example, you can see the stack trace right next to the Dockerfile command that is causing the issues, which is useful for understanding the exact step and attributes that caused the error (Figure 3).
Figure 3: Inspecting a build error in Builds View.
You can even see the commit and source information for the build and easily locate who made the change for more help in resolving the issue, along with other useful info you need for diagnosing even the most complicated builds (Figure 4).
Figure 4: Animated view of Docker Desktop showing info for inspecting an imported build, such as source details, build timing, dependencies, configuration, and more.
Enhance team collaboration
We aim to enhance team collaboration, allowing you to share and work together on Docker builds and optimizing the build experience for your team. These .dockerbuild archives are self-contained and don’t expire, making them perfect for team collaboration. Share the .dockerbuild file via Slack or email or attach it to GitHub issues or Jira tickets to preserve context for when your team investigates.
Get started
To start using Docker Build Summary and the .dockerbuild archive in Docker Desktop, update your Docker Build GitHub Actions configuration to:
uses: docker/build-push-action@v6
uses: docker/bake-action@v5
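For reference, a minimal sketch of a workflow step using the updated action (the image name is illustrative, and you would normally pair this with checkout and buildx setup steps):
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: myorg/myapp:latest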
Then, update to Docker Desktop 4.31 to inspect build archives from GitHub Actions. Learn more in the documentation.
We are incredibly excited about these new features, which will help you and your team diagnose and resolve build issues quickly. Please try them out and let us know what you think!
We are excited to announce that the latest BuildKit release, v0.13.0, contains experimental Windows Containers support. BuildKit has been around for many years and has been the default build engine on Linux since Docker Engine 23.0.0.
BuildKit is a toolkit for converting source code to build artifacts (like container images) in an efficient, expressive, and repeatable manner. BuildKit introduced the following benefits as compared with the previous Docker Builder:
Parallelize building independent build stages and skip any unused stages.
Incrementally transfer only the changed files in your build context between builds, and skip the transfer of unused files in your build context.
Use Dockerfile frontend implementations with many new features.
Avoid side effects with the rest of the API (intermediate images and containers).
Prioritize your build cache for automatic pruning.
Since 2018, Windows Container customers have been asking for Windows support for BuildKit, as seen in the BuildKit repo and Windows Containers repo, with hundreds of reactions and comments. We have listened to our users and focused resources in the past year to light up Windows Containers support on BuildKit.
Until now, we only shipped the Buildx client on Windows for building Linux images and some very limited Windows images using cross-compilation. Today, we are introducing experimental support for Windows Containers in BuildKit, with the aim of making this available soon in your standard Docker Build.
What’s next?
In the upcoming months, we will work toward further improvements, including:
General Availability (GA) ready: Improving release materials, including guides and documentation.
Integration with Docker Engine: So you can just run docker build.
OCI worker support: On Linux, there is an option to run BuildKit with only runc using the OCI worker. Currently, only the containerd worker is supported for Windows.
Container driver: Add support for running in the container driver.
Image outputs: Some image outputs supported by Linux may not work on Windows and need to be tested and assessed. These include exporting an image to multiple registries, checking if keys for image output are supported, and testing multi-platform image-building support.
Building other artifacts: BuildKit can be used to build other artifacts beyond container images. Work needs to be done in this area to cross-check whether other artifacts, such as binaries, libraries, and documentation, are also supported on Windows as it is on Linux.
Running buildkitd doesn’t require Admin: Currently, running buildkitd on Windows requires admin privileges. We will be looking into running buildkitd on low privileges, aka “rootless”.
Export cache: Investigations need to be done to confirm whether specific cache exporters (inline, registry, local, gha [GitHub Actions], s3, azblob) are also supported on Windows.
Linux parity: Identifying, accessing, and closing the feature parity gap between Windows and Linux.
Walkthrough — Build a basic “Hello World” image with BuildKit and Windows Containers
Let’s walk through the process of setting up BuildKit, including the necessary dependencies, and show how to build a basic Windows image. For feedback and issues, file a ticket at Issues · moby/buildkit (github.com) tagged with area/windows.
The platform requirements are listed below. In our scenario, we will be running a nanoserver:ltsc2022 base image with AMD64.
Architecture: AMD64, Arm64 (binaries available but not officially tested yet).
Supported operating systems: Windows Server 2019, Windows Server 2022, Windows 11.
Base images: servercore:ltsc2019, servercore:ltsc2022, nanoserver:ltsc2022. See the compatibility map.
The workflow will cover the following steps:
Enable Windows Containers.
Install containerd.
Install BuildKit.
Build a simple “Hello World” image.
1. Enable Windows Containers
Start a PowerShell terminal in admin privilege mode. Run the following command to ensure the Containers feature is enabled:
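The cmdlet shown here is one way to do this on supported Windows versions; adjust if your environment manages Windows features differently.
# Enable the Containers optional feature (the output reports whether a restart is needed)
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All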
If you see RestartNeeded as True on your setup, restart your machine and reopen an Administrator PowerShell terminal (Figure 1). Otherwise, continue to the next step.
Figure 1: Enabling Windows Containers in PowerShell.
2. Install containerd
Next, we need to install containerd, which is used as the container runtime for managing containers and images.
Note: We currently only support the containerd worker. In the future, we plan to add support for the OCI worker, which uses runc and will therefore remove this dependency.
Run the following script to install the latest containerd release. If you have containerd already installed, skip the script below and run Start-Service containerd to start the containerd service.
Note: containerd v1.7.7+ is required.
# If containerd previously installed run:
Stop-Service containerd
# Download and extract desired containerd Windows binaries
$Version="1.7.13" # update to your preferred version
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
tar.exe xvf .\containerd-windows-amd64.tar.gz
# Copy the binaries into Program Files
Copy-Item -Path .\bin\* -Destination (New-Item -Type Directory $Env:ProgramFiles\containerd -Force) -Recurse -Force
# add the binaries (containerd.exe, ctr.exe) in $env:Path
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + [IO.Path]::PathSeparator + "$Env:ProgramFiles\containerd"
[Environment]::SetEnvironmentVariable( "Path", $Path, "Machine")
# reload path, so you don't have to open a new PS terminal later if needed
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# configure
containerd.exe config default | Out-File $Env:ProgramFiles\containerd\config.toml -Encoding ascii
# Review the configuration. Depending on setup you may want to adjust:
# - the sandbox_image (Kubernetes pause image)
# - cni bin_dir and conf_dir locations
Get-Content $Env:ProgramFiles\containerd\config.toml
# Register and start service
containerd.exe --register-service
Start-Service containerd
3. Install BuildKit
Note: Ensure you have updated to the latest version of Docker Desktop.
Run the following script to download and extract the latest BuildKit release.
$version = "v0.13.0" # specify the release version, v0.13+
$arch = "amd64" # arm64 binary available too
curl.exe -LO https://github.com/moby/buildkit/releases/download/$version/buildkit-$version.windows-$arch.tar.gz
# there could be another `.\bin` directory from containerd instructions
# you can move those
mv bin bin2
tar.exe xvf .\buildkit-$version.windows-$arch.tar.gz
## x bin/
## x bin/buildctl.exe
## x bin/buildkitd.exe
Next, run the following commands to add the BuildKit binaries to your Program Files directory, then add them to the PATH so they can be called directly.
# after the binaries are extracted in the bin directory
# move them to an appropriate path in your $Env:PATH directories or:
Copy-Item -Path ".\bin" -Destination "$Env:ProgramFiles\buildkit" -Recurse -Force
# add `buildkitd.exe` and `buildctl.exe` binaries in the $Env:PATH
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + `
[IO.Path]::PathSeparator + "$Env:ProgramFiles\buildkit"
[Environment]::SetEnvironmentVariable( "Path", $Path, "Machine")
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + `
[System.Environment]::GetEnvironmentVariable("Path","User")
Run buildkitd.exe. You should expect to see something as shown in Figure 2:
Figure 2: Successfully starting buildkitd without any errors in the logs.
Now we can set up buildx (the BuildKit client) to use our BuildKit instance. Here we will create a builder that points to the BuildKit instance we just started by running:
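A buildx command along these lines creates that builder; the builder name and the named pipe address are the ones referenced below:
docker buildx create --name buildkit-exp --use --driver=remote npipe:////./pipe/buildkitd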
Here we are creating a new instance of a builder and pointing it to our BuildKit instance. BuildKit will listen on npipe:////./pipe/buildkitd.
Notice that we also name the builder, here, we call it buildkit-exp, but you can name it whatever you want. Just remember to add --use to set this as the current builder.
Let’s test our connection by running docker buildx inspect (Figure 3):
Figure 3: Docker buildx inspect shows that our new builder is connected.
Now we are ready to build a simple “Hello World” image. The Dockerfile looks like this:
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
COPY hello.txt C:
CMD ["cmd", "/C", "type C:\\hello.txt"]
Run the following commands to create a directory and change directory to sample_dockerfile.
mkdir sample_dockerfile
cd sample_dockerfile
Run the following script to add the Dockerfile shown above and hello.txt to the sample_dockerfile directory.
Set-Content Dockerfile @"
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
USER ContainerAdministrator
COPY hello.txt C:/
RUN echo "Goodbye!" >> hello.txt
CMD ["cmd", "/C", "type C:\\hello.txt"]
"@
Set-Content hello.txt @"
Hello from buildkit!
This message shows that your installation appears to be working correctly.
"@
Now we can use buildx to build our image and push it to the registry (see Figure 5):
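A build-and-push invocation along these lines matches what the figure shows; replace <HUB ACCOUNT NAME> with your registry account, as in the docker run example below:
docker buildx build --builder buildkit-exp --push -t <HUB ACCOUNT NAME>/hello-buildkit .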
Figure 5: Here we can see our build running to a successful completion.
If you are utilizing Docker Hub as your registry, run docker login before running buildx build (Figure 6).
Figure 6: Successful login to Docker Hub so we can publish our images.
Congratulations! You can now run containers with standard docker run:
docker run <HUB ACCOUNT NAME>/hello-buildkit
Get started with BuildKit
We encourage you to test out the released experimental Windows BuildKit support v0.13.0. To start out, feel free to follow the documentation or blog, which will walk you through building a simple Windows image with BuildKit. File feedback and issues at Issues · moby/buildkit (github.com) tagged with area/windows.
As an engineer in a product development team, your primary focus is innovating new services to push the organization forward. We know how frustrating it is to be blocked because of a failing Docker build or to have the team be slowed down because of an unknown performance issue in your builds.
Due to the complex nature of some builds, understanding what is happening with a build can be tricky, especially if you are new to Docker and containerization.
To help solve these issues, we are excited to announce the new Builds view in Docker Desktop, which provides detailed insight into your build performance and usage. Get a live view of your builds as they run, explore previous build performance, and dive deep into errors and cache issues.
What is causing my build to fail?
The Builds view lets you look through recent and past builds to diagnose a failure long after losing the logs in your terminal. Once you have found the troublesome build, you can explore all the runtime context of the build, including any arguments and the full Dockerfile. The UI provides you with the full build log, so you no longer need to go back and re-run the build with --progress=plain to see exactly what happened (Figure 1).
Figure 1: A past Docker build’s logs showing an error in one of the steps.
You can see the stack trace right next to the Dockerfile command that is causing the issues, which is useful for understanding the exact step and attributes that caused the error (Figure 2).
Figure 2: A view of a Dockerfile with a stack trace under a step that failed.
You can also check whether this issue has happened before or look at what changed to cause it. A jump in run time compared to the baseline can be seen by inspecting previous builds for this project and viewing what changed (Figure 3).
Figure 3: The build history view showing timing information, caching information, and completion status for historic builds of the same image.
What happened to the caching?
We often hear about how someone in the team made a change, impacting the cache utilization. The longer such a change goes unnoticed, the harder it can be to locate what happened and when.
The Builds view plots your build duration alongside cache performance. Now, it’s easy to see a spike in build times aligned with a reduction in cache utilization (Figure 4).
Figure 4: Enlarged view of the build history calling out the cache hit ratio for builds of the same image.
You can click on the chart or select from the build history to explore what changed before and after the degradation in performance. The Builds view keeps all the context from your builds, the Dockerfile, the logs, and all execution information (Figure 5).
Figure 5: An example of a Dockerfile for a historic build of an image that lets you compare what changed over time.
You can even see the commit and source information for the build and easily locate who made the change for more help in resolving the issue (Figure 6).
Figure 6: The info view of a historic build of an image showing the location of the Git repository being used and the digest of the commit that was built.
An easier way to manage builders
Previously, users have been able to manage builders from the CLI, providing a flexible method for setting up multiple permutations of BuildKit.
Although this approach is powerful, it requires many commands to fully inspect and manage all the details of your different builders. So, as part of our efforts to continuously make things easier for developers, we added a builder management screen in Docker Desktop (Figure 7).
Figure 7: The builder inspection view, showing builder configuration and storage utilization.
All the important information about your builders is available in an easy-to-use dashboard, accessible via the Builds view (or from settings). Now, you can quickly see your storage utilization and inspect the configuration.
Figure 8: Conveniently start, stop, and switch your default builder.
You can also switch your default builder and easily start and stop them (Figure 8). Now, instead of having to look up which command-line options to call, you can quickly select from the drop-down menu.
Get started
The new Builds view is available in the new Docker Desktop 4.26 release; upgrade and click on the new Builds tab in the Dashboard menu.
We are excited about the new Builds view, but this is just the start. There are many more features in the pipeline, but we would love to hear what you think.
Give Builds view a try and share your feedback on the app. We would also love to chat with you about your experience so we can make the best possible product for you.