We’re excited to announce that Docker Desktop is now available on the Microsoft Store! This new distribution channel enhances both the installation and update experience for individual developers while significantly simplifying management for enterprise IT teams.
This milestone reinforces our commitment to Windows, our most widely used platform among Docker Desktop users. By partnering with the Microsoft Store, we’re ensuring seamless compatibility with enterprise management tools while delivering a more consistent experience to our shared customers.
Automatic Updates: The Microsoft Store handles all update processes automatically, ensuring you’re always running the latest version without manual intervention.
Streamlined Installation: Experience a more reliable setup process with fewer startup errors.
Unified Management: Manage Docker Desktop alongside your other applications in one familiar interface.
Centralized Control: Easily roll out Docker Desktop through the Microsoft Store’s enterprise distribution channels.
Security-Compatible Updates: Updates are handled automatically by the Microsoft Store infrastructure, even in organizations where users don’t have direct store access.
Updates Without Direct Store Access: The native integration with Intune allows automatic updates to function even when users don’t have Microsoft Store access — a significant advantage for security-conscious organizations with restricted environments.
Familiar Workflow: The update mechanism works similarly to winget commands (winget install --id=XP8CBJ40XLBWKX --source=msstore), providing consistency with other enterprise software management.
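For reference, the corresponding winget invocations look like this (the Store product ID is taken from the bullet above; check winget's own documentation for your environment):

```shell
# Install Docker Desktop from the Microsoft Store channel
winget install --id=XP8CBJ40XLBWKX --source=msstore

# Later, pull updates through the same channel
winget upgrade --id=XP8CBJ40XLBWKX --source=msstore
```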
Why it matters for businesses and developers
With 99% of enterprise users not running the latest version of Docker Desktop, the Microsoft Store’s automatic update capabilities directly address compliance and security concerns while minimizing downtime. IT administrators can now:
Increase Productivity: Developers can focus on innovation instead of managing installations.
Improve Operational Efficiency: Better control over Docker Desktop deployments reduces IT bottlenecks.
Enhance Compliance: Automatic updates and secure installations support enterprise security protocols.
Conclusion
Docker Desktop’s availability on the Microsoft Store represents a significant step forward in simplifying how organizations deploy and maintain development environments. By focusing on seamless updates, reliability, and enterprise-grade management, Docker and Microsoft are empowering teams to innovate with greater confidence.
It’s now been a couple of weeks since we released the new Docker DX extension for Visual Studio Code. This launch reflects a deeper collaboration between Docker and Microsoft to better support developers building containerized applications.
Over the past few weeks, you may have noticed some changes to your Docker extension in VS Code. We want to take a moment to explain what’s happening—and where we’re headed next.
What’s Changing?
The original Docker extension in VS Code is being migrated to the new Container Tools extension, maintained by Microsoft. It’s designed to make it easier to build, manage, and deploy containers—streamlining the container development experience directly inside VS Code.
As part of this partnership, we decided to bundle the new Docker DX extension with the existing Docker extension so that it installs automatically, making the process seamless.
While the automatic installation was intended to simplify the experience, we realize it may have caught some users off guard. To provide more clarity and choice, the next release will make Docker DX Extension an opt-in installation, giving you full control over when and how you want to use it.
What’s New from Docker?
Docker is introducing the new Docker DX extension, focused on delivering a best-in-class authoring experience for Dockerfiles, Compose files, and Bake files.
Key features include:
Dockerfile linting: Get build warnings and best-practice suggestions directly from BuildKit and Buildx—so you can catch issues early, right inside your editor.
Image vulnerability remediation (experimental): Automatically flag references to container images with known vulnerabilities, directly in your Dockerfiles.
Bake file support: Enjoy code completion, variable navigation, and inline suggestions when authoring Bake files—including the ability to generate targets based on your Dockerfile stages.
Compose file outline: Easily navigate and understand complex Compose files with a new outline view in the editor.
Better Together
These two extensions are designed to work side-by-side, giving you the best of both worlds:
Powerful tooling to build, manage, and deploy your containers
Smart, contextual authoring support for Dockerfiles, Compose files, and Bake files
And the best part? Both extensions are free and fully open source.
Thank You for Your Patience
We know changes like this can be disruptive. While our goal was to make the transition as seamless as possible, we recognize that the approach caused some confusion, and we sincerely apologize for the lack of early communication.
The teams at Docker and Microsoft are committed to delivering the best container development experience possible—and this is just the beginning.
Where Docker DX is Going Next
At Docker, we’re proud of the contributions we’ve made to the container ecosystem, including Dockerfiles, Compose, and Bake.
We’re committed to ensuring the best possible experience when editing these files in your IDE, with instant feedback while you work.
Here’s a glimpse of what’s coming:
Expanded Dockerfile checks: More best-practice validations, actionable tips, and guidance—surfaced right when you need them.
Stronger security insights: Deeper visibility into vulnerabilities across your Dockerfiles, Compose files, and Bake configurations.
Improved debugging and troubleshooting: Soon, you’ll be able to live debug Docker builds—step through your Dockerfile line-by-line, inspect the filesystem at each stage, see what’s cached, and troubleshoot issues faster.
We Want Your Feedback!
Your feedback is critical in helping us improve the Docker DX extension and your overall container development experience.
If you encounter any issues or have ideas for enhancements you’d like to see, please let us know:
Today, we are excited to announce the release of a new, open-source Docker Language Server and Docker DX VS Code extension. In a joint collaboration between Docker and the Microsoft Container Tools team, this new integration enhances the existing Docker extension with improved Dockerfile linting, inline image vulnerability checks, Docker Bake file support, and outlines for Docker Compose files. By working directly with Microsoft, we’re ensuring a native, high-performance experience that complements the existing developer workflow. It’s the next evolution of Docker tooling in VS Code — built to help you move faster, catch issues earlier, and focus on what matters most: building great software.
What’s the Docker DX extension?
The Docker DX extension is focused on providing developers with faster feedback as they edit. Whether you’re authoring a complex Compose file or fine-tuning a Dockerfile, the extension surfaces relevant suggestions, validations, and warnings in real time.
Key features include:
Dockerfile linting: Get build warnings and best-practice suggestions directly from BuildKit and Buildx.
Image vulnerability remediation (experimental): Flags references to container images with known vulnerabilities directly in Dockerfiles.
Bake file support: Includes code completion, variable navigation, and inline suggestions for generating targets based on your Dockerfile stages.
Compose file outline: Easily navigate complex Compose files with an outline view in the editor.
If you’re already using the Docker VS Code extension, the new features are included — just update the extension and start using them!
Dockerfile linting and vulnerability remediation
The inline Dockerfile linting provides warnings and best-practice guidance for writing Dockerfiles from the experts at Docker, powered by Build Checks. Potential vulnerabilities are highlighted directly in the editor with context about their severity and impact, powered by Docker Scout.
Figure 1: Providing actionable recommendations for fixing vulnerabilities and optimizing Dockerfiles
Early feedback directly in Dockerfiles keeps you focused and saves you and your team time debugging and remediating later.
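As a concrete illustration (this Dockerfile is hypothetical), one of the Build Checks surfaced inline is FromAsCasing, which flags a lowercase as keyword that doesn’t match the casing of FROM:

```dockerfile
# Triggers the FromAsCasing check: 'as' should match FROM's casing ('AS')
FROM golang:1.22 as builder
WORKDIR /src
COPY . .
RUN go build -o /bin/app .
```

The warning appears as you type, before you ever run docker build.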
Docker Bake files
The Docker DX extension makes authoring and editing Docker Bake files quick and easy. It provides code completion, code navigation, and error reporting to make editing Bake files a breeze. The extension will also look at your Dockerfile and suggest Bake targets based on the build stages you have defined in your Dockerfile.
Figure 2: Editing Bake files is simple and intuitive with the rich language features that the Docker DX extension provides.
Figure 3: Creating new Bake files is straightforward as your Dockerfile’s build stages are analyzed and suggested as Bake targets.
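To give a sense of what the extension assists with (target and variable names here are just examples), a minimal docker-bake.hcl might look like:

```hcl
variable "TAG" {
  default = "latest"
}

group "default" {
  targets = ["app"]
}

target "app" {
  dockerfile = "Dockerfile"
  tags       = ["myorg/app:${TAG}"]
}
```

The extension offers completion for blocks and attributes like these, and navigation between variables and the targets that reference them.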
Compose outlines
Quickly navigate complex Compose files with the extension’s support for outlines available directly through VS Code’s command palette.
Figure 4: Navigate complex Compose files with the outline panel.
Don’t use VS Code? Try the Language Server!
The features offered by the Docker DX extension are powered by the brand-new Docker Language Server, built on the Language Server Protocol (LSP). This means the same smart editing experience — like real-time feedback, validation, and suggestions for Dockerfiles, Compose, and Bake files — is available in your favorite editor.
Share your feedback on how it’s working for you, and share what features you’d like to see next. If you’d like to learn more or contribute to the project, check out our GitHub repo.
Docker Desktop 4.34 introduces key features to enhance security, scalability, and productivity for all development team sizes, making deploying and managing environments more straightforward. With the general availability (GA) of the MSI installer for bulk deployment, managing installations across Windows environments becomes even simpler. Enhanced authentication features offer an improved administration experience while reinforcing security. Automatically reclaim valuable disk space with Docker Desktop’s new smart compaction feature, streamlining storage management for WSL2 users. Additionally, the integration with NVIDIA AI Workbench provides developers with a seamless connection between model training and local development. Explore how these innovations simplify your workflows and foster a culture of innovation and reliability in your development practices.
Deploy Docker Desktop in bulk with the MSI installer
We’re excited to announce that the MSI installer for Docker Desktop is now generally available to all our Docker Business customers. This powerful tool allows you to customize and deploy Docker Desktop across multiple users or machines in an enterprise environment, making it easier to manage Docker at scale.
Features include:
Interactive and silent installations: Choose between an interactive setup process or deploy silently across your organization without interrupting your users.
Customizable installation paths: Tailor the installation location to fit your organization’s needs.
Desktop shortcuts and automatic startup: Simplify access for users with automatic creation of desktop shortcuts and Docker Desktop starting automatically after installation.
Set usage to specific Docker Hub organizations: Control which Docker Hub organizations your users are tied to during installation.
Docker administrators can download the MSI installer directly from the Docker Admin Console.
One of the standout features of this installer is the --allowed-org flag. This option enables the creation of a Windows registry key during installation, enforcing sign-in to a specified organization. By requiring sign-in, you ensure that your developers are using Docker Desktop with their corporate credentials, fully leveraging your Docker Business subscription. This also adds an extra layer of security, protecting your software supply chain.
Additionally, this feature paves the way for Docker to provide you with valuable usage insights across your organization and enable cloud-based control over application settings for every user in your organization in the future.
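For example, a silent deployment enforcing sign-in might look like the following (an illustrative sketch; the organization name is a placeholder, and property names should be checked against Docker’s MSI installer documentation):

```powershell
# Silent install that enforces sign-in to a specific Docker Hub organization
msiexec /i "DockerDesktop.msi" /quiet ALLOWEDORG="myorg"
```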
Figure 1: Docker admins can download the MSI installer directly from the Docker Admin Console.
What’s next
We’re also working on releasing a PKG enterprise installer for macOS, config profiles for macOS, and supporting multiple organizations in all supported sign-in enforcement mechanisms.
Previously, Docker Desktop lacked seamless host networking, which complicated integration between host and container network services: developers had to spend time setting up communication between the host and containers. Docker Desktop now builds host networking support in directly.
Host networking allows containers that are started with --net=host to use localhost to connect to TCP and UDP services on the host. It will automatically allow software on the host to use localhost to connect to TCP and UDP services in the container. This simplifies the setup for scenarios in which close integration between host and container network services is required. Additionally, we’re driving cross-platform consistency and simplifying configuration by reducing the need for additional steps, such as setting up port forwarding or bridge networks.
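As a quick sketch (the image name is just an example), with the feature enabled a container on the host network is reachable via localhost:

```shell
# The container shares the host's network stack,
# so its ports are reachable on the host's localhost
docker run --rm --net=host nginx:alpine

# From another terminal on the host:
curl http://localhost:80
```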
While this has previously been available in the Docker Engine, we’re now extending this capability to Docker Desktop for Windows, macOS, and Linux. We’re dedicated to improving developer productivity, and this is another way we help developers spend less time configuring network settings and more time building and testing applications, accelerating development cycles.
This new capability is available for all users logged into Docker Desktop. To enable this feature, navigate to Settings > Resources > Network. Learn more about this feature on Docker Docs.
Figure 2: Enable the host networking support feature in the Settings menu.
Automatic reclamation of disk space in Docker Desktop for WSL2
Previously, when customers using Docker Desktop for WSL2 deleted Docker objects such as containers, images, or builds (for example via a docker system prune), the freed storage space was not automatically reclaimed on their host. Instead, they had to use external tools to “compact” the virtual disk/distribution backing Docker Desktop.
Starting with Docker 4.34, we are rolling out automatic reclamation of disk space. When you quit the app, Docker Desktop will automatically check whether there is storage space that can be returned to the host. It will then scan the virtual disk used for Docker storage, and compact it by returning all zeroed blocks to the operating system. Currently Docker Desktop will only start the scan when it estimates that at least 16GB of space can be returned. In the future, we plan to make this threshold adaptive and configurable by the user.
The feature is now enabled for all customers running the Mono distribution architecture for Docker Desktop on WSL2. This new architecture, which was rolled out starting with Docker Desktop 4.30 for all fresh installations of Docker Desktop, removed the need for a dedicated docker-desktop-data WSL2 distribution to store docker data. We will be rolling out the new architecture to all customers in the upcoming Docker Desktop releases.
Customers with installations still using the docker-desktop-data WSL2 distribution can compact storage manually via VHDX compaction tools, or change the WSL2 configuration to enable the experimental WSL2 feature for disk cleanup.
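One manual approach (paths vary by installation; this assumes the default docker-desktop-data location and requires the Hyper-V PowerShell module) is:

```powershell
# Shut down WSL2 first so the virtual disk is not in use
wsl --shutdown

# Compact the virtual disk backing docker-desktop-data
Optimize-VHD -Path "$Env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx" -Mode Full
```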
(Pro tip: Did you know you can use the Disk Usage extension to see how Docker Desktop is using your storage and use it to prune dangling objects with a single click?)
Authentication enhancements
Previously, authenticating via the CLI required developers to either type their password into the command-line interface — which should generally be avoided by the security-minded — or manually create a personal access token (PAT) by navigating to their Docker account settings, generating the token, and then copying it into the CLI for authentication. This process was time-consuming and forced developers to switch contexts between the CLI and the web portal.
In this latest Docker Desktop release, we’re streamlining the CLI authentication flow. Now, users can authenticate through a seamless browser-based process, similar to the experience in CLIs like GitHub’s gh or Amazon’s AWS CLI. With this improved flow, typing docker login in the CLI will print a confirmation code and open your browser for authentication, automating PAT creation behind the scenes and eliminating the need for manual PAT provisioning. This enhancement saves time, reduces complexity, and delivers a smoother and more secure user experience. Additionally, when you authenticate using this workflow, you’ll be logged in across both Docker CLI and Docker Desktop.
This new flow also supports developers in organizations that require single sign-on (SSO), ensuring a consistent and secure authentication process.
Figure 3: When you log in via the new workflow, you’ll be logged in across both Docker CLI and Docker Desktop.
Enterprise-grade AI application development with Docker Desktop and NVIDIA AI Workbench
AI development is a complex journey, often hindered by the challenge of connecting the dots between model training, local development, and deployment. Developers frequently encounter a fragmented and inconsistent development environment and toolchain, making it difficult to move seamlessly from training models in the cloud to running them locally. This fragmentation slows down innovation, introduces errors, and complicates the end-to-end development process.
To solve this, we’re proud to announce the integration of Docker Desktop with NVIDIA AI Workbench, a collaboration designed to streamline every stage of AI development. This solution brings together the power of Docker’s containerization with NVIDIA’s leading AI tools, providing a unified environment that bridges the gap between model training and local development.
With this integration, you can now train models in the cloud using NVIDIA’s robust toolkit and effortlessly transition to local development on Docker Desktop. This eliminates the friction of managing different environments and configurations, enabling a smoother, more efficient workflow from start to finish.
To learn more about this collaboration and how Docker Business supports enterprise-grade AI application development, read our blog post.
Multi-platform UX improvements and the containerd image store
In February 2024, we announced the general availability of the containerd image store in Docker Desktop. Since then, we’ve been working on improving the output of our commands to make multi-platform images easier to view and manage.
Now, we are happy to announce that the docker image list CLI command now supports an experimental --tree flag. This offers a completely new tree view of the image list, which is more suitable for describing multi-platform images.
Figure 4: New CLI tree view of the image list.
If you’re looking for multi-platform support, ensure that the containerd image store is enabled in Docker Desktop (in the General settings, select Use containerd for pulling and storing images). As of the Docker Desktop 4.34 release, fresh installs or factory resets of Docker Desktop default to the containerd image store, meaning you get multi-platform building capability out of the box.
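With the containerd image store enabled, a multi-platform build becomes a single command (the image name here is illustrative):

```shell
# Build one image for both amd64 and arm64 in a single invocation
docker buildx build --platform linux/amd64,linux/arm64 -t myorg/myapp:latest .
```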
Figure 5: You can enable the containerd image store in the Docker Desktop general settings.
Docker Desktop 4.34 marks a significant milestone in our commitment to providing an industry-leading container development suite. With key features such as the MSI installer for bulk deployment, enhanced authentication mechanisms, and the integration with NVIDIA AI Workbench, Docker Desktop is transforming how teams manage deployments, protect their environments, and accelerate their development workflows.
These advancements simplify your development processes and help drive a culture of innovation and reliability. Stay tuned for more exciting updates and enhancements as we continue to deliver solutions designed to empower your development teams and secure your operations at scale.
Upgrade to Docker Desktop 4.34 today and experience the future of container development.
Learn more
Authenticate and update to receive your subscription level’s newest Docker Desktop features.
Windows is back! That is my big takeaway from Microsoft Build last week. In recent years, Microsoft has focused on a broader platform that includes Windows and Linux and has adapted to the centrality of the browser in the modern world. But last week’s event was dominated by the launch of the Copilot+ PC, initially launched with Arm-based machines. We announced Docker Desktop support for Windows on Arm (long-awaited by many of you!) to accompany this exciting development.
The buzz around Arm-based machines
Sadly, we did not get to try any of the new hardware in-depth, but there was a lot of love and longing for the Snapdragon Dev Kit from those who had tried it and our team back home. Arm Windows machines will ship from major manufacturers soon. Developers are power users of their machines, and AI has pushed up the local performance requirements, which means more, faster machines sooner. What’s not to like? (Well, the Recall feature preview won that prize.)
Copilots everywhere
It wasn’t all about Windows. Copilots were everywhere, including in the opening keynote, which announced our partner collaboration on Docker’s extension for GitHub Copilot. If you missed it and thought Copilot was just the original assistant from GitHub, now there are 365 Copilots for everything from Excel to Power BI to Minecraft. Just emerging is the ability to build your own Copilots and an ecosystem of Copilots. Docker launched in the first wave of Copilot integrations, initially integrating into GitHub Copilot chat — with more to come. Check out our blog post for more on how the extension can help you with Dockerfiles and Compose files and how to use Docker.
Satya Nadella presents GitHub Copilot Extensions, including Docker, at Microsoft Build 2024.
Connecting with the community
The event’s vibe wasn’t just about the launches; it was about connecting with the people. As a hybrid event, Microsoft Build had a lively ongoing broadcast that was great fun and was being produced right across from the Docker booth.
The Docker booth was constantly busy, with a stream of people bringing questions, requests, problems, and ideas, ranging from new Docker users to experienced dockhands. Visitors checked out new products like Docker Build Cloud, learned how it can secure Dockerized apps in the Microsoft ecosystem, and got hands-on with features like Docker Debug in Docker Desktop.
Justin Cormack recording in front of the Docker booth at Microsoft Build 2024.
Microsoft Build was a fantastic opportunity to showcase our innovations and connect with the Microsoft developer community. We are excited about the solutions we are bringing to the Microsoft ecosystem and look forward to continuing our collaboration to enhance the developer experience with Docker and Microsoft’s better-together solutions.
We are excited to announce that the latest BuildKit release, v0.13.0, contains experimental Windows Containers support. BuildKit has been around for many years and has been the default build engine on Linux since Docker Engine 23.0.0.
BuildKit is a toolkit for converting source code to build artifacts (like container images) in an efficient, expressive, and repeatable manner. BuildKit introduced the following benefits as compared with the previous Docker Builder:
Parallelize building independent build stages and skip any unused stages.
Incrementally transfer only the changed files in your build context between builds, and skip the transfer of unused files entirely.
Use Dockerfile frontend implementations with many new features.
Avoid side effects with the rest of the API (intermediate images and containers).
Prioritize your build cache for automatic pruning.
Since 2018, Windows Container customers have been asking for Windows support for BuildKit, as seen in the BuildKit repo and Windows Containers repo, with hundreds of reactions and comments. We have listened to our users and focused resources in the past year to light up Windows Containers support on BuildKit.
Until now, we only shipped the Buildx client on Windows for building Linux images and some very limited Windows images using cross-compilation. Today, we are introducing experimental support for Windows Containers in BuildKit, with the aim of making this available soon in your standard Docker Build.
What’s next?
In the upcoming months, we will work toward further improvements, including:
General Availability (GA) ready: Improving release materials, including guides and documentation.
Integration with Docker Engine: So you can just run docker build.
OCI worker support: On Linux, there is an option to run BuildKit with only runc using the OCI worker. Currently, only the containerd worker is supported for Windows.
Container driver: Add support for running in the container driver.
Image outputs: Some image outputs supported by Linux may not work on Windows and need to be tested and assessed. These include exporting an image to multiple registries, checking if keys for image output are supported, and testing multi-platform image-building support.
Building other artifacts: BuildKit can build artifacts beyond container images. Work needs to be done to verify that other artifacts, such as binaries, libraries, and documentation, are supported on Windows as they are on Linux.
Running buildkitd doesn’t require Admin: Currently, running buildkitd on Windows requires admin privileges. We will be looking into running buildkitd with low privileges, aka “rootless”.
Export cache: Investigations need to be done to confirm whether specific cache exporters (inline, registry, local, gha [GitHub Actions], s3, azblob) are also supported on Windows.
Linux parity: Identifying, accessing, and closing the feature parity gap between Windows and Linux.
Walkthrough — Build a basic “Hello World” image with BuildKit and Windows Containers
Let’s walk through the process of setting up BuildKit, including the necessary dependencies, and show how to build a basic Windows image. For feedback and issues, file a ticket in the moby/buildkit GitHub repository, tagged with area/windows.
The platform requirements are listed below. In our scenario, we will be running a nanoserver:ltsc2022 base image with AMD64.
Architecture: AMD64, Arm64 (binaries available but not officially tested yet).
Supported operating systems: Windows Server 2019, Windows Server 2022, Windows 11.
Base images: servercore:ltsc2019, servercore:ltsc2022, nanoserver:ltsc2022. See the compatibility map.
The workflow will cover the following steps:
Enable Windows Containers.
Install containerd.
Install BuildKit.
Build a simple “Hello World” image.
1. Enable Windows Containers
Start a PowerShell terminal in admin privilege mode. Run the following command to ensure the Containers feature is enabled:
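The standard way to enable the feature (shown here for convenience; this is the usual Windows optional-feature command) is:

```powershell
# Enable the Windows Containers feature; may report RestartNeeded = True
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
```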
If you see RestartNeeded as True on your setup, restart your machine and reopen an Administrator PowerShell terminal (Figure 1). Otherwise, continue to the next step.
Figure 1: Enabling Windows Containers in PowerShell.
2. Install containerd
Next, we need to install containerd, which is used as the container runtime for managing containers and images.
Note: We currently only support the containerd worker. In the future, we plan to add support for the OCI worker, which uses runc and will therefore remove this dependency.
Run the following script to install the latest containerd release. If you have containerd already installed, skip the script below and run Start-Service containerd to start the containerd service.
Note: containerd v1.7.7+ is required.
# If containerd previously installed run:
Stop-Service containerd
# Download and extract desired containerd Windows binaries
$Version="1.7.13" # update to your preferred version
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
tar.exe xvf .\containerd-windows-amd64.tar.gz
# Copy the binaries (containerd.exe, ctr.exe) into Program Files
Copy-Item -Path ".\bin" -Destination "$Env:ProgramFiles\containerd" -Recurse -Container:$false -Force
# add the binaries in $Env:Path
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + [IO.Path]::PathSeparator + "$Env:ProgramFiles\containerd"
[Environment]::SetEnvironmentVariable("Path", $Path, "Machine")
# reload path, so you don't have to open a new PS terminal later if needed
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# Generate the default configuration
containerd.exe config default | Out-File $Env:ProgramFiles\containerd\config.toml -Encoding ascii
# Review the configuration. Depending on setup you may want to adjust:
# - the sandbox_image (Kubernetes pause image)
# - cni bin_dir and conf_dir locations
Get-Content $Env:ProgramFiles\containerd\config.toml
# Register and start service
containerd.exe --register-service
Start-Service containerd
3. Install BuildKit
Note: Ensure you have updated to the latest version of Docker Desktop.
Run the following script to download and extract the latest BuildKit release.
$version = "v0.13.0" # specify the release version, v0.13+
$arch = "amd64" # arm64 binary available too
curl.exe -LO https://github.com/moby/buildkit/releases/download/$version/buildkit-$version.windows-$arch.tar.gz
# there could be another `.\bin` directory from containerd instructions
# you can move those
mv bin bin2
tar.exe xvf .\buildkit-$version.windows-$arch.tar.gz
## x bin/
## x bin/buildctl.exe
## x bin/buildkitd.exe
Next, run the following commands to add the BuildKit binaries to your Program Files directory, then add them to the PATH so they can be called directly.
# after the binaries are extracted in the bin directory
# move them to an appropriate path in your $Env:PATH directories or:
Copy-Item -Path ".\bin" -Destination "$Env:ProgramFiles\buildkit" -Recurse -Force
# add `buildkitd.exe` and `buildctl.exe` binaries in the $Env:PATH
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + `
[IO.Path]::PathSeparator + "$Env:ProgramFiles\buildkit"
[Environment]::SetEnvironmentVariable( "Path", $Path, "Machine")
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + `
[System.Environment]::GetEnvironmentVariable("Path","User")
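Before starting the daemon, you can confirm the binaries resolve from the refreshed PATH. A quick check, assuming the copy above succeeded:

```powershell
# Both commands should resolve to a path under $Env:ProgramFiles\buildkit
Get-Command buildkitd.exe, buildctl.exe | Select-Object Name, Source

# Print the version to verify the binary runs
buildctl.exe --version
```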
Run buildkitd.exe. You should see output like that shown in Figure 2:
Figure 2: Successfully starting buildkitd without any errors in the logs.
Now we can set up buildx (the BuildKit client) to use our BuildKit instance, by creating a builder that points to the buildkitd instance we just started:
docker buildx create --name buildkit-exp --use npipe:////./pipe/buildkitd
This creates a new builder and points it at our BuildKit instance, which listens on npipe:////./pipe/buildkitd. Notice that we also name the builder; here we call it buildkit-exp, but you can name it whatever you want. Just remember to add --use to set it as the current builder.
Let’s test our connection by running docker buildx inspect (Figure 3):
Figure 3: Docker buildx inspect shows that our new builder is connected.
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
USER ContainerAdministrator
COPY hello.txt C:/
RUN echo "Goodbye!" >> hello.txt
CMD ["cmd", "/C", "type C:\\hello.txt"]
Run the following commands to create a sample_dockerfile directory and change into it.
mkdir sample_dockerfile
cd sample_dockerfile
Run the following script to add the Dockerfile shown above and hello.txt to the sample_dockerfile directory.
Set-Content Dockerfile @"
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
USER ContainerAdministrator
COPY hello.txt C:/
RUN echo "Goodbye!" >> hello.txt
CMD ["cmd", "/C", "type C:\\hello.txt"]
"@
Set-Content hello.txt @"
Hello from buildkit!
This message shows that your installation appears to be working correctly.
"@
Now we can use buildx to build our image and push it to the registry (see Figure 5):
docker buildx build --push -t <HUB ACCOUNT NAME>/hello-buildkit .
Figure 5: Here we can see our build running to a successful completion.
If you are using Docker Hub as your registry, run docker login before running buildx build (Figure 6).
Figure 6: Successful login to Docker Hub so we can publish our images.
Congratulations! You can now run containers with standard docker run:
docker run <HUB ACCOUNT NAME>/hello-buildkit
Get started with BuildKit
We encourage you to try out the experimental Windows support released in BuildKit v0.13.0. To get started, follow the documentation or this blog, which will walk you through building a simple Windows image with BuildKit. File feedback and issues at Issues · moby/buildkit (github.com), tagged with area/windows.
Docker Desktop now supports running on Windows on Arm (WoA) devices. This exciting development was unveiled during Microsoft’s “Introducing the Next Generation of Windows on Arm” session at Microsoft Build. Docker CTO, Justin Cormack, highlighted how this strategic move will empower developers with even more rapid development capabilities, leveraging Docker Desktop on Arm-powered Windows devices.
The Windows on Arm platform is redefining performance and user experience for applications. With this integration, Docker Desktop extends its reach to a new wave of hardware architectures, broadening the horizons for containerized application development.
Justin Cormack announcing Docker Desktop support for Windows on Arm devices with Microsoft Principal TPM Manager Jamshed Damkewala in the Microsoft Build session “Introducing the next generation of Windows on Arm.”
Docker Desktop support for Windows on Arm
Read on to learn why Docker Desktop support for Windows on Arm is a game changer for developers and organizations.
Broader accessibility
By supporting Arm devices, Docker Desktop becomes accessible to a wider audience, including users of popular Arm-based hardware such as Microsoft's Arm-powered devices. This inclusivity fosters a larger, more diverse Docker community, enabling more developers to harness the power of containerization on their preferred devices.
Enhanced developer experience
Developers can seamlessly work on the newest Windows on Arm devices, streamlining the development process and boosting productivity. Docker Desktop’s consistent, cross-platform experience ensures that development workflows remain smooth and efficient, regardless of the underlying hardware architecture.
Future-proofing development
As the tech industry gradually shifts toward Arm architecture for its efficiency and lower power consumption, Docker Desktop’s support for WoA devices ensures we remain at the forefront of innovation. This move future-proofs Docker Desktop, keeping it relevant and competitive as this transition accelerates.
Innovation and experimentation
With Docker Desktop on a new architecture, developers and organizations have more opportunities to innovate and experiment. Whether designing applications for traditional x64 or the emerging Arm ecosystems, Docker Desktop offers a versatile platform for creative exploration.
Market expansion
Furthering compatibility in the Windows Arm space opens new markets and opportunities for Docker, including new relationships with device manufacturers and increased adoption in sectors prioritizing energy efficiency and portability while supporting Docker’s users and customers in leveraging the dev environments that support their goals.
Accelerating developer innovation with Microsoft’s investment in WoA dev tooling
Windows on Arm is arguably as successful as it has ever been. Today, multiple Arm-powered Windows laptops and tablets are available, capable of running nearly the entire range of Windows apps thanks to x86-to-Arm code translation. While Windows on Arm still represents a small fraction of the entire Windows ecosystem, the development of native Arm apps provides a wealth of fresh opportunities for AI innovation.
Microsoft’s investments align with Docker’s strategic goals of cross-platform compatibility and user-centric development, ensuring Docker remains at the forefront of containerization technologies in a diversifying hardware landscape.
Expand your development landscape with Docker Desktop on Windows Arm devices. Update to Docker Desktop 4.31 or consider upgrading to Pro or Business subscriptions to unlock the full potential of cross-platform containerization. Embrace the future of development with Docker, where innovation, efficiency, and cross-platform compatibility drive progress.
We are thrilled to announce Docker’s participation at Microsoft Build, which will be held May 21-23 in Seattle, Washington, and online. We’ll showcase how our deep collaboration with Microsoft is revolutionizing the developer experience. Join us to discover the newest and upcoming solutions that enhance productivity, secure applications, and accelerate the development of AI-driven applications.
Our presence at Microsoft Build is more than just a showcase — it’s a portal to the future of application development. Visit our booth to interact with Docker experts, experience live demos, and explore the powerful capabilities of Docker Desktop and other Docker products. Whether you’re new to Docker or looking to deepen your expertise, our team is ready to help you unlock new opportunities in your development projects.
Sessions featuring Docker
Optimizing the Microsoft Developer Experience with Docker: Dive into our partnership with Microsoft and learn how to leverage Docker in Azure, Windows, and Dev Box environments to streamline your development processes. This session is your key to mastering the inner loop of development with efficiency and innovation.
Shifting Test Left with Docker and Microsoft: Learn how to address app quality challenges before the continuous integration stage using Testcontainers Cloud and Docker Debug. Discover how these tools aid in rapid and effective debugging, enabling you to streamline the debugging process for both active and halted containers and create testing efficiencies at scale.
Securing Dockerized Apps in the Microsoft Ecosystem: Learn about Docker’s integrated tools for securing your software supply chain in Microsoft environments. This session is essential for developers aiming to enhance security and compliance while maintaining agility and innovation.
Innovating the SDLC with Insights from Docker CTO Justin Cormack: In this interview, Docker’s CTO will share insights on advancing the SDLC through Docker’s innovative toolsets and partnerships. Watch Thursday 1:45pm PT from the Microsoft Build stage or our Featured Partner page.
Introducing the Next Generation of Windows on ARM: Experience a special session featuring Docker CTO Justin Cormack as he discusses Docker’s role in expanding the Windows on ARM64 ecosystem, alongside a Microsoft executive.
Where to find us
You can also visit us at Docker booth #FP29 to get hands-on experience and view demos of some of our newest solutions.
If you cannot attend in person, the Microsoft Build online experience is free. Explore our Microsoft Featured Partner page.
We hope you’ll be able to join us at Microsoft Build — in person or online — to explore how Docker and Microsoft are revolutionizing application development with innovative, secure, and AI-enhanced solutions. Whether you attend in person or watch the sessions on-demand, you’ll gain essential insights and skills to enhance your projects. Don’t miss this chance to be at the forefront of technology. We are eager to help you navigate the exciting future of AI-driven applications and look forward to exploring new horizons of technology together.
Building a foundation of structure and stability is paramount for the success of any development team, regardless of its size. It’s the key to unlocking velocity, ensuring top-notch quality, and maximizing the return on your investments in developer tools. Recognizing the pivotal role in simplifying application development, we’re taking another leap forward, announcing our partnership with the Microsoft Dev Box team to bring additional benefits to developer onboarding, environment set-up, security, and administration with Docker Desktop.
Today at Microsoft Ignite, Microsoft's Anthony Cangialosi and Sagar Lankala shared how Microsoft Dev Box and Docker Desktop can free developers from reliance on physical workstations and intricate, hard-to-deploy application infrastructure. This collaborative effort focuses on streamlining onboarding to new projects while bolstering security and efficiency.
Consider the positive impact:
Improved developer productivity: Before this collaboration, setting up the development environment consumed valuable developer time. Now, with Docker and Microsoft’s collaboration, the focus shifts to boosting developer efficiency and productivity and concentrating on meaningful work rather than setup and configuration tasks.
Streamlined administration: Previously, developers had to individually download Docker Desktop as a crucial part of their dev toolkit. Now, it's possible to pre-configure and install Docker Desktop, streamlining administrative tasks.
Security at scale: Previously, acquiring necessary assets meant developers had to navigate internal or external sources. With our solution, you can ensure the requisite images/apps are readily available, enhancing security protocols.
Together, we’re delivering a turnkey solution designed to empower individual developers, small businesses, and enterprise development teams. This initiative is poised to expedite project onboarding, facilitating quick dives into new endeavors with unparalleled ease. Join us on this journey toward enhanced efficiency, productivity, and a smoother development experience.
It's time to build an internal developer platform (IDP) with Crossplane, Argo CD, SchemaHero, External Secrets Operator (ESO), GitHub Actions, Port, and a few others.