Elton's Blog
Docker on Windows: Second Edition - Fully Updated for Windows Server 2019

The Second Edition of my book Docker on Windows is out now. Every code sample and exercise has been fully rewritten to work on Windows Server 2019, and Windows 10 from update 1809.
Get Docker on Windows: Second Edition now on Amazon
If you're not into books, the source code and Dockerfiles are all available on GitHub: sixeyed/docker-on-windows, with some READMEs which are variably helpful.
Or if you prefer something more interactive and hands-on, check out my Docker on Windows Workshop.
Docker Containers on Windows Server 2019
There are at least six things you can do with Docker on Windows Server 2019 that you couldn't do on Windows Server 2016. The base images are much smaller, ports publish on localhost, and volume mounts work logically.
You should be using Windows Server 2019 for Docker (unless you're already invested in Windows Server 2016 containers, which are still supported by Docker and Microsoft).
Windows Server 2019 is also the minimum version if you want to run Windows containers in a Kubernetes cluster.
Updated Content
The second edition of Docker on Windows takes you on the same journey as the previous edition, starting with the 101 of Windows containers, through packaging .NET Core and .NET Framework apps with Docker, to transforming monolithic apps into modern distributed architectures. And it takes in security, production readiness and CI/CD on the way.
Some new capabilities are unlocked in the latest release of Windows containers, so there's some great new content to take advantage of that:
- using Traefik as a reverse proxy to break up application front ends (chapter 5 and chapter 6)
- running Jenkins in a container to power a CI/CD pipeline where all the build, test and publish steps run in Docker containers (chapter 10)
- using config objects and secrets in Docker Swarm for app configuration (chapter 7)
- understanding the secure software supply chain with Docker (chapter 9)
- instrumenting .NET apps in Windows containers with Prometheus and Grafana (chapter 11)
The last one is especially important. It helps you understand how to bring cloud-native monitoring approaches to .NET apps, with an architecture like this:
If you want to learn more about observability in modern applications, check out my Pluralsight course Monitoring Containerized Application Health with Docker.
The Evolution of Windows Containers
It's great to see how much attention Windows containers are getting from Microsoft and Docker. The next big thing is running Windows containers in Kubernetes, which is supported now and available in preview in AKS.
Kubernetes is a whole different learning curve, but it will become increasingly important as more providers support Windows nodes in their Kubernetes offerings. You'll be able to capture your whole application definition in a set of Kube manifests and deploy the same app without any changes on any platform from Docker Enterprise on-prem, to AKS or any other cloud service.
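As a sketch of what one of those Kube manifests looks like, here's a minimal deployment that pins a Windows workload to Windows nodes with a nodeSelector. The names and image tag are illustrative, not from the book:

```yaml
# Illustrative only - a minimal deployment pinned to Windows nodes
# via the kubernetes.io/os node label.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: web
        image: example/demo-web:windows-ltsc2019
```

The same manifest applies unchanged with kubectl on any cluster that has Windows worker nodes; only the nodeSelector ties the pod to a Windows host.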
To get there you need to master Docker first, and the latest edition of Docker on Windows helps get you there.
Elton's Blog
Getting Started with Kubernetes on Windows

Kubernetes now supports Windows machines as worker nodes. You can spin up a hybrid cluster and have Windows workloads running in Windows pods, talking to Linux workloads running in Linux pods.
TL;DR - I've scripted all the setup steps to create a three-node hybrid cluster, you'll find them with instructions at sixeyed/k8s-win
Now you can take older .NET Framework apps and run them in Kubernetes, which is going to help you move them to the cloud and modernize the architecture. You start by running your old monolithic app in a Windows container, then you gradually break features out and run them in .NET Core on Linux containers.
Organizations have been taking that approach with Docker Swarm for a few years now. I cover it in my book Docker on Windows and in my Docker Windows Workshop. It's a very successful way to do migrations - breaking up monoliths to get the benefits of cloud-native architecture, without a full-on rewrite project.
Now you can do those migrations with Kubernetes. That opens up some interesting new patterns, and the option of running containerized Windows workloads in a managed Kubernetes service in the cloud.
Cautionary Notes
Windows support in Kubernetes is still pretty new. The feature went GA in Kubernetes 1.14, and the current release is only 1.15. There are a few things you need to be aware of:
- cloud support is in early stages. You can spin up a hybrid Windows/Linux Kubernetes cluster in AKS, but right now it's in preview.
- core components are in beta. Pod networking is a separate component in Kubernetes, and the main options, Calico and Flannel, only have beta support for Windows nodes.
- Windows Server 2019 is the minimum version which supports Kubernetes.
- the developer experience is not optimal, especially if you're used to Docker Desktop. You can run Windows containers natively on Windows 10, and even run a single-node Docker Swarm on your laptop to do stack deployments. Kubernetes needs a Linux master node, so your dev environment is going to be multiple VMs.
- Kubernetes is complicated. It has a wider feature set than Docker Swarm, but the cost of all those features is complexity. Application manifests in Kubernetes are about 4X the size of equivalent Docker Compose files, and there are many more abstractions between the entrypoint to your app and the container which ultimately does the work.
If you want to get stuck into Kubernetes on Windows, you need to bear all this in mind and be aware that you're on the leading edge right now. The safer, simpler, proven alternative is Docker Swarm - but if you want to see what Kubernetes on Windows can do, now's the time to get started.
Kubernetes on Windows: Cluster Options
Kubernetes has a master-worker architecture for the cluster. The control plane runs on the master, and right now those components are Linux-only. You can't have an all-Windows Kubernetes cluster. Your infrastructure setup will be one or more Linux masters, one or more Windows workers, and one or more Linux workers:
For a development environment you can get away with one Linux master and one Windows worker, running any Linux workloads on the master, but an additional Linux worker is preferred.
You can spin up a managed Kubernetes cluster in the cloud. Azure and AWS both offer Windows nodes in preview for their Kubernetes services:
Kubernetes has a pluggable architecture for core components like networking and DNS. The cloud services take care of all that for you, but if you want to get deeper and check out the setup for yourself, you can build a local hybrid cluster with a few VMs.
Tasks for setting up a local cluster
There's already pretty good documentation on how to set up a local Kubernetes cluster with Windows nodes, but there are a lot of manual steps. This post walks through the setup using scripts which automate as much as possible. The original sources are:
- Guide for adding Windows Nodes in Kubernetes - from the Kubernetes docs
- Kubernetes on Windows - from the Microsoft docs
If you want to follow along and use my scripts you'll need to have three VMs set up. The scripts are going to install Docker and the Kubernetes components, and then:
- initialise the Kubernetes master with kubeadm
- install pod networking, using Flannel
- add the Windows worker node
- add the Linux worker node
When that's done you can administer the cluster using kubectl and deploy applications which are all-Windows, all-Linux, or a mixture.
There are still a few manual steps, but the scripts take away most of the pain.
Provision VMs
You'll want three VMs in the same virtual network. My local cluster is for development and testing, so I'm not using any firewalls and all ports are open between the VMs.
I set up the following VMs:
- k8s-master - which will become the master. Running Ubuntu Server 18.04 with nothing installed except the OpenSSH server;
- k8s-worker - which will become the Linux worker. Set up in the same way as the master, with Ubuntu 18.04 and OpenSSH;
- k8s-win-worker - which will be the Windows worker. Set up with Windows Server 2019 Core (the non-UI edition).
I'm using Parallels on the Mac for my VMs, and the IP addresses are all in the 10.211.55.* range.
The scripts assign two network address ranges for Kubernetes: 10.244.0.0/16 and 10.96.0.0/12. You'll need to use a different range for your VM network, or edit the scripts.
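If you're adapting the scripts, a quick way to check that your VM subnet doesn't collide with those Kubernetes ranges is Python's ipaddress module. The subnets below are just examples; substitute your own:

```python
import ipaddress

# Kubernetes ranges used by the setup scripts
POD_CIDR = ipaddress.ip_network("10.244.0.0/16")     # Flannel pod network
SERVICE_CIDR = ipaddress.ip_network("10.96.0.0/12")  # cluster service range

def conflicts(vm_subnet):
    """Return True if the VM subnet overlaps either Kubernetes range."""
    net = ipaddress.ip_network(vm_subnet)
    return net.overlaps(POD_CIDR) or net.overlaps(SERVICE_CIDR)

print(conflicts("10.211.55.0/24"))  # the 10.211.55.* VM range used here -> False
print(conflicts("10.96.1.0/24"))    # inside the service CIDR -> True
```

If your VM network overlaps either range, pick a different VM subnet or edit the CIDRs in the scripts before initialising the cluster.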
Initialise the Linux Master
Kubernetes installation has come a long way since the days of Kubernetes the Hard Way - the kubeadm tool does most of the hard work.
On the master node you're going to install Docker and kubeadm, along with the kubelet and kubectl, using this setup script, running as administrator (that's sudo su on Ubuntu):
sudo su
curl -fsSL https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/ub-1804-setup.sh | sh
If you're not familiar with the tools: kubeadm is used to administer cluster nodes, kubelet is the service which connects nodes, and kubectl is for operating the cluster.
The master setup script initialises the cluster and installs the pod network using Flannel. There's a bunch of this that needs root too:
sudo su
curl -fsSL https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/ub-1804-master.sh | sh
That gives you a Kubernetes master node. The final thing is to configure kubectl for your local user, so run this configuration script as your normal account (it will ask for your password when it does some sudo):
curl -fsSL https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/ub-1804-config.sh | sh
The output from that script is the Kubernetes config file. Everything you need to manage the cluster is in that file - including certificates for secure communication using kubectl.
You should copy the config block to the clipboard on your dev machine; you'll need it later to join the worker nodes.
Treat that config file carefully - it has all the connection information anyone needs to control your cluster.
You can verify your cluster nodes now with kubectl get nodes:
elton@k8s-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 109m v1.15.1
Add a Windows Worker Node
There's a bunch of additional setup tasks you need on the Windows node. I'd recommend starting with the setup I blogged about in Getting Started with Docker on Windows Server 2019 - that tells you where to get the trial version download, and how to configure remote access and Windows Updates.
Don't follow the Docker installation steps from that post, though - you'll be using scripts for that.
The rest is scripted out from the steps which are described in the Microsoft docs. There are a couple of steps because the installs need a restart.
First run the Windows setup script, which installs Docker and ends by restarting your VM:
iwr -outf win-2019-setup.ps1 https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/win-2019-setup.ps1
./win-2019-setup.ps1
When your VM restarts, connect again and copy your Kubernetes config into a file on the VM:
mkdir C:\k
notepad C:\k\config
Now you can paste in the configuration file you copied from the Linux master and save it - make sure the filename is config when you save it; don't let Notepad save it as config.txt.
Windows Server Core does have some GUI functionality. Notepad and Task Manager are useful ones :)
Now you're ready to download the Kubernetes components, join the node to the cluster and start Windows Services for all the Kube pieces. That's done in the Windows worker script. You need to pass a parameter to this one, which is the IP address of your Windows VM (the machine you're running this command on - use ipconfig to find it):
iwr -outf win-2019-worker.ps1 https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/win-2019-worker.ps1
./win-2019-worker.ps1 -ManagementIP <YOUR_WINDOWS_IP_GOES_HERE>
You'll see various "START" lines in the output there. If all goes well you should be able to run kubectl get nodes on the master and see both nodes ready:
elton@k8s-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 5h23m v1.15.1
k8s-win-worker Ready <none> 75m v1.15.1
You can leave it there and get working, but Kubernetes doesn't let you schedule user workloads on the master by default. You can specify that it's OK to run Linux pods on the master in your application YAML files, but it's better to leave the master alone and add a second Linux node as a worker.
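For completeness, allowing Linux pods onto the master means tolerating its taint in your pod spec - something like this fragment (illustrative; the taint key shown is the one kubeadm applies in this Kubernetes version):

```yaml
# Illustrative pod-spec fragment: tolerate the master taint so the
# scheduler will place this pod on the master node.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```

Adding a dedicated Linux worker, as below, avoids needing this at all.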
Add a Linux Worker Node
You're going to start in the same way as the Linux master, installing Docker and the Kubernetes components using the setup script.
SSH into the k8s-worker node and run:
sudo su
curl -fsSL https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/ub-1804-setup.sh | sh
That gives you all the pieces, and you can use kubeadm to join the cluster. You'll need a token for that, which you can get from the join command on the master, so hop back to that SSH session on k8s-master and run:
kubeadm token create --print-join-command
The output from that is exactly what you need to run on the Linux worker node to join the cluster. Your master IP address and token will be unique to the cluster, but the command you want is something like:
sudo kubeadm join 10.211.55.27:6443 --token 28bj3n.l91uy8dskdmxznbn --discovery-token-ca-cert-hash sha256:ff571ad198ae0...
Those tokens are short-lived (24-hour TTL), so you'll need to run the token create command on the master if your token expires when you add a new node.
And that's it. Now you can list the nodes on the master and you'll see a functioning dev cluster:
elton@k8s-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 5h41m v1.15.1
k8s-win-worker Ready <none> 92m v1.15.1
k8s-worker Ready <none> 34s v1.15.1
You can copy the Kubernetes config into your local .kube folder on your laptop, if you want to manage the cluster directly rather than logging into the master VM.
Run a Hybrid .NET App
There's a very simple ASP.NET web app I use in my Docker on Windows workshop which you can now run as a distributed app in containers on Kubernetes. There are Kube specs for that app in sixeyed/k8s-win to run SQL Server in a Linux pod and the web app on a Windows pod.
Head back to the master node, or use your laptop if you've set up the Kube config. Clone the repo to get all the YAML files:
git clone https://github.com/sixeyed/k8s-win.git
Now switch to the dwwx directory and deploy all the spec files in the v1 folder:
cd k8s-win/dwwx
kubectl apply -f v1
You'll see output telling you the services and deployments have been created. The images that get used in the pods are quite big, so it will take a few minutes to pull them. When it's done you'll see two pods running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
signup-db-6f95f88795-s5vfv 1/1 Running 0 9s
signup-web-785cccf48-8zfx2 1/1 Running 0 9s
List the services and you'll see the ports where the web application (and SQL Server) are listening:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h18m
signup-db NodePort 10.96.65.255 <none> 1433:32266/TCP 19m
signup-web NodePort 10.103.241.188 <none> 8020:31872/TCP 19m
It's the signup-web service you're interested in - in my case the node port is 31872. So now you can browse to the Kubernetes master node's IP address, on the service's node port, at the /app endpoint, and you'll see this:
It's a basic .NET demo app which has a sign-up form for a fake newsletter (currently running on .NET 4.7, but it originally started life as a .NET 2.0 app). Click on Sign Up and you can go and complete the form. The dropdowns you see are populated from reference data in the database, which means the web app - running in a Windows pod - is connected to the database - running in a Linux pod:
You can go ahead and fill in the form, and that inserts a row into the database. The SQL Server pod has a service with a node port too (32266 in my case), so you can connect a client like Sqlectron directly to the containerized database (credentials are sa / DockerCon!!!). You'll see the data you saved:
Next Steps
This is pretty cool. The setup is still a bit funky (and my scripts come with no guarantees :), but once you have a functioning cluster you can deploy hybrid apps using the same YAMLs you'll use in other clusters.
I'll be adding more hybrid apps to the GitHub repo, so stay tuned to @EltonStoneman on Twitter.
Docker
Docker Desktop 4.31: Air-Gapped Containers, Accelerated Builds, and Beta Releases of Docker Desktop on Windows on Arm, Compose File Viewer, and GitHub Actions
In this post:
- Air-gapped containers: Ensuring security and compliance
- Accelerating Builds in Docker Desktop with Docker Build Cloud
- Docker Desktop on Windows on Arm (Beta)
- Compose File Viewer (Beta)
- Deep Dive into GitHub Actions Docker Builds with Docker Desktop (Beta)
Docker Desktop’s latest release continues to empower development teams of every size, providing a secure hybrid development launchpad that supports productively building, sharing, and running innovative applications anywhere.
Highlights from the Docker Desktop 4.31 release include:
- Air-gapped containers help secure developer environments and apps to ensure peace of mind.
- Accelerating Builds in Docker Desktop with Docker Build Cloud helps developers build rapidly to increase productivity and ROI.
- Docker Desktop on Windows on Arm (WoA) Beta continues our commitment to supporting the Microsoft Developer ecosystem by leveraging the newest and most advanced development environments.
- Compose File Viewer (Beta) lets you see your Compose configuration with contextual docs.
- Deep Dive into GitHub Actions Docker Builds with Docker Desktop (Beta) streamlines access to detailed GitHub Actions build summaries, including performance metrics and error reports, directly within the Docker Desktop UI.

Air-gapped containers: Ensuring security and compliance
For our business users, we introduce support for air-gapped containers. This feature allows admins to configure Docker Desktop to restrict containers from accessing the external network (internet) while enabling access to the internal network (private network). Docker Desktop can apply a custom set of proxy rules to network traffic from containers. The proxy can be configured to allow network connections, reject network connections, and tunnel through an HTTP or SOCKS proxy (Figure 1). This enhances security by allowing admins to choose which outgoing TCP ports the policy applies to and whether to forward a single HTTP or SOCKS proxy, or to implement policy per destination via a PAC file.

This functionality enables you to scale securely and is especially crucial for organizations with strict security requirements. Learn more about air-gapped containers on our Docker Docs.
Accelerating Builds in Docker Desktop with Docker Build Cloud
Did you know that in your Core Docker Subscription (Personal, Pro, Teams, Business) you have an included allocation of Docker Build Cloud minutes? Yes! This allocation of cloud compute time and shared cache lets you speed up your build times when you’re working with multi-container apps or large repos.
For organizations, your build minutes are shared across your team, so anyone allocated Docker Build Cloud minutes with their Docker Desktop Teams or Business subscription can leverage available minutes and purchase additional minutes if necessary. Docker Build Cloud works for both developers building locally and in CI/CD.
With Docker Desktop, you can use these minutes to accelerate your time to push and gain access to the Docker Build Cloud dashboard (build.docker.com) where you can view build history, manage users, and view your usage stats.
And now, from build.docker.com, you can quickly and easily create your team’s cloud builder using a one-click setup that connects your cloud builder to Docker Desktop. At the same time, you can choose to configure the Build Cloud builder as the default builder in Docker Desktop in about 30 seconds — check the Set the default builder radio button during the Connect via Docker Desktop setup (Figure 2).

Docker Desktop on Windows on Arm
At Microsoft Build, we were thrilled to announce that Docker Desktop is available on Windows on Arm (WoA) as a beta release. This version will be available behind authentication and is aimed at users with Arm-based Windows devices. This feature ensures that developers using these devices can take full advantage of Docker’s capabilities.
To learn more about leveraging WoA to accelerate your development practices, watch the Microsoft Build Session Introducing the Next Generation of Windows on Arm with Ivette Carreras and Jamshed Damkewala. You can also learn about the other better-together opportunities between Microsoft and Docker by visiting our Microsoft Build Docker Page and reading our event highlights blog post.
Compose File Viewer (Beta)
With Compose File Viewer (Beta), developers can now see their Docker Compose configuration file in Docker Desktop, with relevant docs linked. This makes it easier to understand your Compose YAML at a glance, with proper syntax highlighting.
Check out this new File Viewer through the View Configuration option in the Compose command line or by viewing a Compose stack in the Containers tab, then clicking the View Configuration button.
Introducing enhanced CI visibility with GitHub Actions in Docker Desktop
We’re happy to announce the beta release of our new feature for inspecting GitHub Actions builds directly in Docker Desktop. This enhancement provides in-depth summaries of Docker builds, including performance metrics, cache utilization, and detailed error reports. You can download build results as a .dockerbuild archive and inspect them locally using Docker Desktop 4.31. Now you can access all the details about your CI build as if you had reproduced them locally.

Not familiar with the Builds View in Docker Desktop? It’s a feature we introduced last year to give you greater insight into your local Docker builds. Now, with the import functionality, you can explore the details of your remote builds from GitHub Actions just as thoroughly in a fraction of the time. This new capability aims to improve CI/CD efficiency and collaboration by offering greater visibility into your builds. Update to Docker Desktop 4.31 and configure your GitHub Actions with docker/build-push-action@v5 or docker/bake-action@v4 to get started.
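As a sketch, a workflow step using that action might look like this (repository and tag names are placeholders):

```yaml
# Illustrative GitHub Actions step - the build record it produces can be
# downloaded as a .dockerbuild archive and imported into Docker Desktop.
- name: Build image
  uses: docker/build-push-action@v5
  with:
    context: .
    tags: example/app:ci
    push: false
```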
Conclusion
With this latest release, we’re doubling down on our mission to support Docker Desktop users with the ability to accelerate innovation, enable security at scale, and enhance productivity.
Stay tuned for additional details and upcoming releases. Thank you for being part of our community as we continuously strive to empower development teams.
Learn more
- Authenticate and update to receive your subscription level’s newest Docker Desktop features.
- New to Docker? Create an account.
- Explore the new beta feature: GitHub Actions Docker Builds with Docker Desktop.
- Visit our Microsoft Build Docker Page to learn about our partnership in supporting Microsoft developers.
- Learn how Docker Build Cloud in Docker Desktop can accelerate builds.
- Secure Your Supply Chain with Docker Scout in Docker Desktop.
- Learn more about air-gapped containers.
- Subscribe to the Docker Newsletter.
Docker
Experimental Windows Containers Support for BuildKit Released in v0.13.0
We are excited to announce that the latest BuildKit release, v0.13.0, contains experimental Windows Containers support. BuildKit has been around for many years and has been the default build engine on Linux since Docker Engine 23.0.0.
BuildKit is a toolkit for converting source code to build artifacts (like container images) in an efficient, expressive, and repeatable manner. BuildKit introduced the following benefits as compared with the previous Docker Builder:
- Parallelize building independent build stages and skip any unused stages.
- Incrementally transfer only the changed files in your build context between builds, and skip the transfer of unused files altogether.
- Use Dockerfile frontend implementations with many new features.
- Avoid side effects with the rest of the API (intermediate images and containers).
- Prioritize your build cache for automatic pruning.
Since 2018, Windows Container customers have been asking for Windows support for BuildKit, as seen in the BuildKit repo and Windows Containers repo, with hundreds of reactions and comments. We have listened to our users and focused resources in the past year to light up Windows Containers support on BuildKit.
Until now, we only shipped the Buildx client on Windows for building Linux images and some very limited Windows images using cross-compilation. Today, we are introducing experimental support for Windows Containers in BuildKit, with the aim of making this available soon in your standard Docker Build.

What’s next?
In the upcoming months, we will work toward further improvements, including:
- General Availability (GA) ready: Improving release materials, including guides and documentation.
- Integration with Docker Engine: So you can just run docker build.
- OCI worker support: On Linux, there is an option to run BuildKit with only runc using the OCI worker. Currently, only the containerd worker is supported for Windows.
- Container driver: Add support for running in the container driver.
- Image outputs: Some image outputs supported by Linux may not work on Windows and need to be tested and assessed. These include exporting an image to multiple registries, checking if keys for image output are supported, and testing multi-platform image-building support.
- Building other artifacts: BuildKit can be used to build other artifacts beyond container images. Work needs to be done in this area to cross-check whether other artifacts, such as binaries, libraries, and documentation, are also supported on Windows as it is on Linux.
- Running buildkitd doesn’t require Admin: Currently, running buildkitd on Windows requires admin privileges. We will be looking into running buildkitd on low privileges, aka “rootless”.
- Export cache: Investigations need to be done to confirm whether specific cache exporters (inline, registry, local, gha [GitHub Actions], s3, azblob) are also supported on Windows.
- Linux parity: Identifying, accessing, and closing the feature parity gap between Windows and Linux.
Walkthrough — Build a basic “Hello World” image with BuildKit and Windows Containers
Let’s walk through the process of setting up BuildKit, including the necessary dependencies, and show how to build a basic Windows image. For feedback and issues, file a ticket at Issues · moby/buildkit (github.com) tagged with area/windows.
The platform requirements are listed below. In our scenario, we will be running a nanoserver:ltsc2022 base image with AMD64.
- Architecture: AMD64, Arm64 (binaries available but not officially tested yet).
- Supported operating systems: Windows Server 2019, Windows Server 2022, Windows 11.
- Base images: servercore:ltsc2019, servercore:ltsc2022, nanoserver:ltsc2022. See the compatibility map.
The workflow will cover the following steps:
- Enable Windows Containers.
- Install containerd.
- Install BuildKit.
- Build a simple “Hello World” image.
1. Enable Windows Containers
Start a PowerShell terminal in admin privilege mode. Run the following command to ensure the Containers feature is enabled:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V, Containers -All
If you see RestartNeeded as True on your setup, restart your machine and reopen an Administrator PowerShell terminal (Figure 1). Otherwise, continue to the next step.

2. Install containerd
Next, we need to install containerd, which is used as the container runtime for managing containers and images.
Note: We currently only support the containerd worker. In the future, we plan to add support for the OCI worker, which uses runc and will therefore remove this dependency.
Run the following script to install the latest containerd release. If you have containerd already installed, skip the script below and run Start-Service containerd to start the containerd service.
Note: containerd v1.7.7+ is required.
# If containerd previously installed run: Stop-Service containerd
# Download and extract desired containerd Windows binaries
$Version="1.7.13" # update to your preferred version
curl.exe -L https://github.com/containerd/containerd/releases/download/v$Version/containerd-$Version-windows-amd64.tar.gz -o containerd-windows-amd64.tar.gz
tar.exe xvf .\containerd-windows-amd64.tar.gz
# Copy the binaries (containerd.exe, ctr.exe) into Program Files
Copy-Item -Path .\bin\* -Destination (New-Item -Type Directory $Env:ProgramFiles\containerd -Force) -Recurse -Force
# Add the binaries to $Env:Path
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + [IO.Path]::PathSeparator + "$Env:ProgramFiles\containerd"
[Environment]::SetEnvironmentVariable("Path", $Path, "Machine")
# Reload the path, so you don't have to open a new PS terminal later if needed
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
# Generate the default configuration
containerd.exe config default | Out-File $Env:ProgramFiles\containerd\config.toml -Encoding ascii
# Review the configuration. Depending on setup you may want to adjust:
# - the sandbox_image (Kubernetes pause image)
# - cni bin_dir and conf_dir locations
Get-Content $Env:ProgramFiles\containerd\config.toml
# Register and start the service
containerd.exe --register-service
Start-Service containerd
3. Install BuildKit
Note: Ensure you have updated to the latest version of Docker Desktop.
Run the following script to download and extract the latest BuildKit release.
$version = "v0.13.0" # specify the release version, v0.13+
$arch = "amd64" # arm64 binary available too
curl.exe -LO https://github.com/moby/buildkit/releases/download/$version/buildkit-$version.windows-$arch.tar.gz

# there could be another .\bin directory left over from the containerd
# instructions above; move it out of the way first
mv bin bin2

tar.exe xvf .\buildkit-$version.windows-$arch.tar.gz
## x bin/
## x bin/buildctl.exe
## x bin/buildkitd.exe
Next, run the following commands to add the BuildKit binaries to your Program Files directory, then add them to the PATH so they can be called directly.
# after the binaries are extracted in the bin directory,
# move them to an appropriate path in your $Env:PATH directories, or:
Copy-Item -Path ".\bin" -Destination "$Env:ProgramFiles\buildkit" -Recurse -Force

# add the buildkitd.exe and buildctl.exe binaries to the $Env:PATH
$Path = [Environment]::GetEnvironmentVariable("PATH", "Machine") + [IO.Path]::PathSeparator + "$Env:ProgramFiles\buildkit"
[Environment]::SetEnvironmentVariable("Path", $Path, "Machine")
$Env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User")
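One caveat: the snippet above appends to the PATH unconditionally, so re-running it adds duplicate entries. The append-if-missing version of the same idea can be sketched in POSIX shell; the directory here is a placeholder, and on Windows the equivalent guard would test for $Env:ProgramFiles\buildkit:

```shell
# Append a directory to PATH only if it's not already present (sketch).
# "/opt/buildkit/bin" is a hypothetical install location.
dir="/opt/buildkit/bin"
case ":$PATH:" in
  *":$dir:"*) echo "already on PATH" ;;
  *) PATH="$PATH:$dir"; echo "added $dir" ;;
esac
```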
Run buildkitd.exe. You should expect to see something as shown in Figure 2:

Now we can set up buildx (the BuildKit client) to use our BuildKit instance. Here we will create a builder that points to the BuildKit instance we just started, by running:
docker buildx create --name buildkit-exp --use --driver=remote npipe:////./pipe/buildkitd
Here we are creating a new builder and pointing it at our BuildKit instance, which listens on the named pipe npipe:////./pipe/buildkitd.
Notice that we also name the builder; here we call it buildkit-exp, but you can name it whatever you want. Just remember to add --use to set it as the current builder.
Let’s test our connection by running docker buildx inspect (Figure 3):

All good!
You can also list and manage your builders. Run docker buildx ls (Figure 4).

docker buildx ls returns a list of all builders and nodes. Here we can see our new builder added to the list.
4. Build “Hello World” image
We will be building a simple “hello world” image, as shown in the following Dockerfile.
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
COPY hello.txt C:
CMD ["cmd", "/C", "type C:\\hello.txt"]
Run the following commands to create a sample_dockerfile directory and change into it.
mkdir sample_dockerfile
cd sample_dockerfile
Run the following script to add the Dockerfile shown above and hello.txt to the sample_dockerfile directory.
Set-Content Dockerfile @"
FROM mcr.microsoft.com/windows/nanoserver:ltsc2022
USER ContainerAdministrator
COPY hello.txt C:/
RUN echo "Goodbye!" >> hello.txt
CMD ["cmd", "/C", "type C:\\hello.txt"]
"@

Set-Content hello.txt @"
Hello from buildkit!
This message shows that your installation appears to be working correctly.
"@
Now we can use buildx to build our image and push it to the registry (see Figure 5):
docker buildx build --builder buildkit-exp --push -t <your_username>/hello-buildkit .

If you are using Docker Hub as your registry, run docker login before running buildx build (Figure 6).

Congratulations! You can now run containers with standard docker run:
docker run <HUB ACCOUNT NAME>/hello-buildkit
Get started with BuildKit
We encourage you to test out the experimental Windows BuildKit support released in v0.13.0. To get started, follow the documentation or this blog, which will walk you through building a simple Windows image with BuildKit. File feedback and issues at Issues · moby/buildkit (github.com), tagged with area/windows.
Learn more
- Subscribe to the Docker Newsletter.
- Get the latest release of Docker Desktop.
- Vote on what’s next! Check out our public roadmap.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
Thank you
A big thanks to @gabriel-samfira, @TBBle, @tonistiigi, @AkihiroSuda, @crazy-max, @jedevc, @thaJeztah, @profnandaa, @iankingori, and many other key community members who have contributed to enabling Windows Containers support on BuildKit. We also thank Windows Container developers who continue to provide valuable feedback and insights.
-
Docker
Announcing Docker Desktop Support for Windows on Arm: New AI Innovation Opportunities
Docker Desktop now supports running on Windows on Arm (WoA) devices. This exciting development was unveiled during Microsoft’s “Introducing the Next Generation of Windows on Arm” session at Microsoft Build. Docker CTO, Justin Cormack, highlighted how this strategic move will empower developers with even more rapid development capabilities, leveraging Docker Desktop on Arm-powered Windows devices.

The Windows on Arm platform is redefining performance and user experience for applications. With this integration, Docker Desktop extends its reach to a new wave of hardware architectures, broadening the horizons for containerized application development.

Docker Desktop support for Windows on Arm
Read on to learn why Docker Desktop support for Windows on Arm is a game changer for developers and organizations.
Broader accessibility
By supporting Arm devices, Docker Desktop becomes accessible to a wider audience, including users of popular Arm-based devices like the Microsoft devices. This inclusivity fosters a larger, more diverse Docker community, enabling more developers to harness the power of containerization on their preferred devices.
Enhanced developer experience
Developers can seamlessly work on the newest Windows on Arm devices, streamlining the development process and boosting productivity. Docker Desktop’s consistent, cross-platform experience ensures that development workflows remain smooth and efficient, regardless of the underlying hardware architecture.
Future-proofing development
As the tech industry gradually shifts toward Arm architecture for its efficiency and lower power consumption, Docker Desktop’s support for WoA devices ensures we remain at the forefront of innovation. This move future-proofs Docker Desktop, keeping it relevant and competitive as this transition accelerates.
Innovation and experimentation
With Docker Desktop on a new architecture, developers and organizations have more opportunities to innovate and experiment. Whether designing applications for traditional x64 or the emerging Arm ecosystems, Docker Desktop offers a versatile platform for creative exploration.
Market expansion
Furthering compatibility in the Windows Arm space opens new markets and opportunities for Docker, including new relationships with device manufacturers and increased adoption in sectors prioritizing energy efficiency and portability while supporting Docker’s users and customers in leveraging the dev environments that support their goals.
Accelerating developer innovation with Microsoft’s investment in WoA dev tooling
Windows on Arm is arguably as successful as it has ever been. Today, multiple Arm-powered Windows laptops and tablets are available, capable of running nearly the entire range of Windows apps thanks to x86-to-Arm code translation. While Windows on Arm still represents a small fraction of the entire Windows ecosystem, the development of native Arm apps provides a wealth of fresh opportunities for AI innovation.
Microsoft’s investments align with Docker’s strategic goals of cross-platform compatibility and user-centric development, ensuring Docker remains at the forefront of containerization technologies in a diversifying hardware landscape.
Expand your development landscape with Docker Desktop on Windows Arm devices. Update to Docker Desktop 4.31 or consider upgrading to Pro or Business subscriptions to unlock the full potential of cross-platform containerization. Embrace the future of development with Docker, where innovation, efficiency, and cross-platform compatibility drive progress.
Learn more
- Watch the Docker Breakout Session Optimizing the Microsoft developer experience with Docker to learn more about Docker and Microsoft better together opportunities.
- Authenticate and update to receive the newest Docker Desktop features per your subscription level.
- New to Docker? Create an account.
- Learn about Docker Build Cloud and how you can leverage cloud resources directly from Docker Desktop.
- Subscribe to the Docker Newsletter.
-
Docker
Navigating Proxy Servers with Ease: New Advancements in Docker Desktop 4.30
Within the ecosystem of corporate networks, proxy servers stand as guardians, orchestrating the flow of information with a watchful eye toward security. These sentinels, while adding layers to navigate, play a crucial role in safeguarding an organization’s digital boundaries and keeping its network denizens — developers and admins alike — secure from external threats.
Recognizing proxy servers’ critical position, Docker Desktop 4.30 offers new enhancements, especially on the Windows front, to ensure seamless integration and interaction within these secured environments.

Traditional approach
The realm of proxy servers is intricate, a testament to their importance in modern corporate infrastructure. They’re not just barriers but sophisticated filters and conduits that enhance security, optimize network performance, and ensure efficient internet traffic management. In this light, the dance of authentication — while complex — is necessary to maintain this secure environment, ensuring that only verified users and applications gain access.
Traditionally, Docker Desktop approached corporate networks with a single option: basic authentication. Although functional, this approach often felt like navigating with an outdated map. It was a method that, while simple, sometimes led to moments of vulnerability and the occasional hiccup in access for those venturing into more secure or differently configured spaces within the network.
This approach could also create roadblocks for users and admins, such as:
- Repeated login prompts: A constant buzzkill.
- Security faux pas: Your credentials, base64 encoded, might as well be on a billboard.
- Access denied: Use a different authentication method? Docker Desktop is out of the loop.
- Workflow whiplash: Nothing like a login prompt to break your coding stride.
- Performance hiccups: Waiting on auth can slow down your Docker development endeavors.
Seamless interaction
Enter Docker Desktop 4.30, where the roadblocks are removed. Embracing the advanced authentication protocols of Kerberos and NTLM, Docker Desktop now ensures a more secure, seamless interaction with corporate proxies while creating a streamlined and frictionless experience.
This upgrade is designed to help you easily navigate the complexities of proxy authentication, providing a more intuitive and unobtrusive experience that both developers and admins can appreciate:
- Invisible authentication: Docker Desktop handles the proxy handshake behind the scenes.
- No more interruptions: Focus on your code, not on login prompts.
- Simplicity: No extra steps compared to basic auth.
- Performance perks: Less time waiting, more time doing.
A new workflow with Kerberos authentication scheme is shown in Figure 1:

A new workflow with NTLM auth scheme is shown in Figure 2:

Integrating Docker Desktop into environments guarded by NTLM or Kerberos proxies no longer feels like a challenge but an opportunity.
With Docker Desktop 4.30, we’re committed to facilitating this transition, prioritizing secure, efficient workflows catering to developers and admins who orchestrate these digital environments. Our focus is on bridging gaps and ensuring Docker Desktop aligns with today’s corporate networks’ security and operational standards.
FAQ
- Who benefits? Both Windows-based developers and admins.
- Continued basic auth support? Yes, providing flexibility while encouraging a shift to more secure protocols.
- How to get started? Upgrade to Docker Desktop 4.30 for Windows.
- Impact on internal networking? Absolutely none. It’s smooth sailing for container networking.
- Validity of authentication? Enjoy 10 hours of secure access with Kerberos, automatically renewed with system logins.
Docker Desktop is more than just a tool — it’s a bridge to a more streamlined, secure, and productive coding environment, respecting the intricate dance with proxy servers and ensuring that everyone involved, from developers to admins, moves in harmony with the secure protocols of their networks. Welcome to a smoother and more secure journey with Docker Desktop.
Learn more
- Update to Docker Desktop 4.30.
- Read Docker Desktop 4.30: Proxy Support with SOCKS5, NTLM and Kerberos, ECI for Build Commands, Build View Features, and Docker Desktop on RHEL Beta.
- Create a Docker Account and choose Business subscription.
- Authenticate using your corporate credentials.
- Subscribe to the Docker Newsletter.
- Vote on what’s next! Check out our public roadmap.
- New to Docker? Get started.
-
Collabnix
- Getting Started with Docker Desktop on Windows using WSL 2Docker Desktop and WSL are two popular tools for developing and running containerized applications on Windows. Docker Desktop is a Docker client that provides a graphical user interface for managing Docker containers. WSL is a Windows feature that allows you to run Linux distributions on Windows. WSL Vs WSL 2 The main difference between WSL […]
Getting Started with Docker Desktop on Windows using WSL 2
-
Elton's Blog
Docker on Windows: Second Edition - Fully Updated for Windows Server 2019

The Second Edition of my book Docker on Windows is out now. Every code sample and exercise has been fully rewritten to work on Windows Server 2019, and Windows 10 from update 1809.
Get Docker on Windows: Second Edition now on Amazon
If you're not into books, the source code and Dockerfiles are all available on GitHub: sixeyed/docker-on-windows, with some READMEs which are variably helpful.
Or if you prefer something more interactive and hands-on, check out my Docker on Windows Workshop.
Docker Containers on Windows Server 2019
There are at least six things you can do with Docker on Windows Server 2019 that you couldn't do on Windows Server 2016. The base images are much smaller, ports publish on localhost, and volume mounts work logically.
You should be using Windows Server 2019 for Docker
(Unless you're already invested in Windows Server 2016 containers, which are still supported by Docker and Microsoft).
Windows Server 2019 is also the minimum version if you want to run Windows containers in a Kubernetes cluster.
Updated Content
The second edition of Docker on Windows takes you on the same journey as the previous edition, starting with the 101 of Windows containers, through packaging .NET Core and .NET Framework apps with Docker, to transforming monolithic apps into modern distributed architectures. And it takes in security, production readiness and CI/CD on the way.
Some new capabilities are unlocked in the latest release of Windows containers, so there's some great new content to take advantage of that:
- using Traefik as a reverse proxy to break up application front ends (chapter 5 and chapter 6)
- running Jenkins in a container to power a CI/CD pipeline where all the build, test and publish steps run in Docker containers (chapter 10)
- using config objects and secrets in Docker Swarm for app configuration (chapter 7)
- understanding the secure software supply chain with Docker (chapter 9)
- instrumenting .NET apps in Windows containers with Prometheus and Grafana (chapter 11)
The last one is especially important. It helps you understand how to bring cloud-native monitoring approaches to .NET apps, with an architecture like this:
If you want to learn more about observability in modern applications, check out my Pluralsight course Monitoring Containerized Application Health with Docker.
The Evolution of Windows Containers
It's great to see how much attention Windows containers are getting from Microsoft and Docker. The next big thing is running Windows containers in Kubernetes, which is supported now and available in preview in AKS.
Kubernetes is a whole different learning curve, but it will become increasingly important as more providers support Windows nodes in their Kubernetes offerings. You'll be able to capture your whole application definition in a set of Kube manifests and deploy the same app without any changes on any platform from Docker Enterprise on-prem, to AKS or any other cloud service.
To get there you need to master Docker first, and the latest edition of Docker on Windows helps get you there.
-
Elton's Blog
Getting Started with Kubernetes on Windows

Kubernetes now supports Windows machines as worker nodes. You can spin up a hybrid cluster and have Windows workloads running in Windows pods, talking to Linux workloads running in Linux pods.
TL;DR - I've scripted all the setup steps to create a three-node hybrid cluster, you'll find them with instructions at sixeyed/k8s-win
Now you can take older .NET Framework apps and run them in Kubernetes, which is going to help you move them to the cloud and modernize the architecture. You start by running your old monolithic app in a Windows container, then you gradually break features out and run them in .NET Core on Linux containers.
Organizations have been taking that approach with Docker Swarm for a few years now. I cover it in my book Docker on Windows and in my Docker Windows Workshop. It's a very successful way to do migrations - breaking up monoliths to get the benefits of cloud-native architecture, without a full-on rewrite project.
Now you can do those migrations with Kubernetes. That opens up some interesting new patterns, and the option of running containerized Windows workloads in a managed Kubernetes service in the cloud.
Cautionary Notes
Windows support in Kubernetes is still pretty new. The feature went GA in Kubernetes 1.14, and the current release is only 1.15. There are a few things you need to be aware of:
- cloud support is in early stages. You can spin up a hybrid Windows/Linux Kubernetes cluster in AKS, but right now it's in preview.
- core components are in beta. Pod networking is a separate component in Kubernetes, and the main options, Calico and Flannel, only have beta support for Windows nodes.
- Windows Server 2019 is the minimum version which supports Kubernetes.
- the developer experience is not optimal, especially if you're used to using Docker Desktop. You can run Windows containers natively on Windows 10, and even run a single-node Docker Swarm on your laptop to do stack deployments. Kubernetes needs a Linux master node, so your dev environment is going to be multiple VMs.
- Kubernetes is complicated. It has a wider feature set than Docker Swarm, but the cost of all the features is complexity. Application manifests in Kubernetes are about 4X the size of equivalent Docker Compose files, and there are way more abstractions between the entrypoint to your app and the container which ultimately does the work.
If you want to get stuck into Kubernetes on Windows, you need to bear this all in mind and be aware that you're at the front-end right now. The safer, simpler, proven alternative is Docker Swarm - but if you want to see what Kubernetes on Windows can do, now's the time to get started.
Kubernetes on Windows: Cluster Options
Kubernetes has a master-worker architecture for the cluster. The control plane runs on the master, and right now those components are Linux-only. You can't have an all-Windows Kubernetes cluster. Your infrastructure setup will be one or more Linux masters, one or more Windows workers, and one or more Linux workers:
For a development environment you can get away with one Linux master and one Windows worker, running any Linux workloads on the master, but an additional Linux worker is preferred.
You can spin up a managed Kubernetes cluster in the cloud. Azure and AWS both offer Windows nodes in preview for their Kubernetes services:
Kubernetes has a pluggable architecture for core components like networking and DNS. The cloud services take care of all that for you, but if you want to get deeper and check out the setup for yourself, you can build a local hybrid cluster with a few VMs.
Tasks for setting up a local cluster
There's already pretty good documentation on how to set up a local Kubernetes cluster with Windows nodes, but there are a lot of manual steps. This post walks through the setup using scripts which automate as much as possible. The original sources are:
- Guide for adding Windows Nodes in Kubernetes - from the Kubernetes docs
- Kubernetes on Windows - from the Microsoft docs
If you want to follow along and use my scripts you'll need to have three VMs setup. The scripts are going to install Docker and the Kubernetes components, and then:
- initialise the Kubernetes master with kubeadm
- install pod networking, using Flannel
- add the Windows worker node
- add the Linux worker node
When that's done you can administer the cluster using kubectl and deploy applications which are all-Windows, all-Linux, or a mixture.
There are still a few manual steps, but the scripts take away most of the pain.
Provision VMs
You'll want three VMs in the same virtual network. My local cluster is for development and testing, so I'm not using any firewalls and all ports are open between the VMs.
I set up the following VMs:
- k8s-master - which will become the master. Running Ubuntu Server 18.04 with nothing installed except the OpenSSH server;
- k8s-worker - which will become the Linux worker. Set up in the same way as the master, with Ubuntu 18.04 and OpenSSH;
- k8s-win-worker - which will be the Windows worker. Set up with Windows Server 2019 Core (the non-UI edition).
I'm using Parallels on the Mac for my VMs, and the IP addresses are all in the 10.211.55.* range.
The scripts assign two network address ranges for Kubernetes: 10.244.0.0/16 and 10.96.0.0/12. You'll need to use a different range for your VM network, or edit the scripts.
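A quick way to sanity-check that requirement: the service CIDR 10.96.0.0/12 covers second octets 96 through 111, and the pod CIDR 10.244.0.0/16 covers exactly 10.244.x.x. For a 10.x.y.* VM network, the overlap check can be sketched as follows (211 is the example octet from this post's 10.211.55.* lab network):

```shell
# Check whether a VM network of the form 10.<octet>.x.x collides with
# the cluster ranges used by the scripts (sketch).
vm_octet=211
if [ "$vm_octet" -ge 96 ] && [ "$vm_octet" -le 111 ]; then
  echo "overlaps the service CIDR 10.96.0.0/12"
elif [ "$vm_octet" -eq 244 ]; then
  echo "overlaps the pod CIDR 10.244.0.0/16"
else
  echo "no overlap with the default script ranges"
fi
```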
Initialise the Linux Master
Kubernetes installation has come a long way since the days of Kubernetes the Hard Way - the kubeadm tool does most of the hard work.
On the master node you're going to install Docker and kubeadm, along with the kubelet and kubectl, using this setup script, running as administrator (that's sudo su on Ubuntu):
sudo su
curl -fsSL https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/ub-1804-setup.sh | sh
If you're not familiar with the tools:
- kubeadm is used to administer cluster nodes,
- kubelet is the service which connects nodes, and
- kubectl is for operating the cluster.
The master setup script initialises the cluster and installs the pod network using Flannel. There's a bunch of this that needs root too:
sudo su
curl -fsSL https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/ub-1804-master.sh | sh
That gives you a Kubernetes master node. The final thing is to configure kubectl for your local user, so run this configuration script as your normal account (it will ask for your password when it does some sudo):
curl -fsSL https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/ub-1804-config.sh | sh
The output from that script is the Kubernetes config file. Everything you need to manage the cluster is in that file - including certificates for secure communication using kubectl.
You should copy the config block to the clipboard on your dev machine - you'll need it later to join the worker nodes.
Treat that config file carefully, it has all the connection information anyone needs to control your cluster.
You can verify your cluster nodes now with kubectl get nodes
:
elton@k8s-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 109m v1.15.1
Add a Windows Worker Node
There's a bunch of additional setup tasks you need on the Windows node. I'd recommend starting with the setup I blogged about in Getting Started with Docker on Windows Server 2019 - that tells you where to get the trial version download, and how to configure remote access and Windows Updates.
Don't follow the Docker installation steps from that post though, you'll be using scripts for that.
The rest is scripted out from the steps which are described in the Microsoft docs. There are a couple of steps because the installs need a restart.
First run the Windows setup script, which installs Docker and ends by restarting your VM:
iwr -outf win-2019-setup.ps1 https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/win-2019-setup.ps1
./win-2019-setup.ps1
When your VM restarts, connect again and copy your Kubernetes config into a file on the VM:
mkdir C:\k
notepad C:\k\config
Now you can paste in the configuration file you copied from the Linux master and save it - make sure the filename is config when you save it; don't let Notepad save it as config.txt.
Windows Server Core does have some GUI functionality. Notepad and Task Manager are useful ones :)
Now you're ready to download the Kubernetes components, join the node to the cluster and start Windows Services for all the Kube pieces. That's done in the Windows worker script. You need to pass a parameter to this one, which is the IP address of your Windows VM (the machine you're running this command on - use ipconfig to find it):
iwr -outf win-2019-worker.ps1 https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/win-2019-worker.ps1
./win-2019-worker.ps1 -ManagementIP <YOUR_WINDOWS_IP_GOES_HERE>
You'll see various "START" lines in the output there. If all goes well you should be able to run kubectl get nodes
on the master and see both nodes ready:
elton@k8s-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 5h23m v1.15.1
k8s-win-worker Ready <none> 75m v1.15.1
You can leave it there and get working, but Kubernetes doesn't let you schedule user workloads on the master by default. You can specify that it's OK to run Linux pods on the master in your application YAML files, but it's better to leave the master alone and add a second Linux node as a worker.
Add a Linux Worker Node
You're going to start in the same way as the Linux master, installing Docker and the Kubernetes components using the setup script.
SSH into the k8s-worker
node and run:
sudo su
curl -fsSL https://raw.githubusercontent.com/sixeyed/k8s-win/master/setup/ub-1804-setup.sh | sh
That gives you all the pieces, and you can use kubeadm to join the cluster. You'll need a token for that, which you can get from the join command on the master, so hop back to that SSH session on k8s-master and run:
kubeadm token create --print-join-command
The output from that is exactly what you need to run on the Linux worker node to join the cluster. Your master IP address and token will be unique to the cluster, but the command you want is something like:
sudo kubeadm join 10.211.55.27:6443 --token 28bj3n.l91uy8dskdmxznbn --discovery-token-ca-cert-hash sha256:ff571ad198ae0...
Those tokens are short-lived (24-hour TTL), so you'll need to run the token create command on the master again if your token expires when you add a new node.
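The TTL arithmetic is worth spelling out, because a join command saved yesterday can silently stop working. A sketch with illustrative timestamps:

```shell
# Sketch: kubeadm tokens default to a 24-hour TTL, so check the age of a
# cached join command before reusing it (epoch values here are examples).
ttl_seconds=$((24 * 3600))
created_at=0                    # epoch seconds when the token was created
now=$((23 * 3600))              # 23 hours later: still inside the TTL
if [ "$now" -lt $((created_at + ttl_seconds)) ]; then
  echo "token still valid - reuse the join command"
else
  echo "token expired - run 'kubeadm token create --print-join-command' again"
fi
```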
And that's it. Now you can list the nodes on the master and you'll see a functioning dev cluster:
elton@k8s-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 5h41m v1.15.1
k8s-win-worker Ready <none> 92m v1.15.1
k8s-worker Ready <none> 34s v1.15.1
You can copy out the Kubernetes config into your local .kube folder on your laptop if you want to manage the cluster directly, rather than logging into the master VM.
Run a Hybrid .NET App
There's a very simple ASP.NET web app I use in my Docker on Windows workshop which you can now run as a distributed app in containers on Kubernetes. There are Kube specs for that app in sixeyed/k8s-win to run SQL Server in a Linux pod and the web app on a Windows pod.
Head back to the master node, or use your laptop if you've set up the Kube config. Clone the repo to get all the YAML files:
git clone https://github.com/sixeyed/k8s-win.git
Now switch to the dwwx directory and deploy all the spec files in the v1 folder:
cd k8s-win/dwwx
kubectl apply -f v1
You'll see output telling you the services and deployments have been created. The images that get used in the pods are quite big, so it will take a few minutes to pull them. When it's done you'll see two pods running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
signup-db-6f95f88795-s5vfv 1/1 Running 0 9s
signup-web-785cccf48-8zfx2 1/1 Running 0 9s
List the services and you'll see the ports where the web application (and SQL Server) are listening:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h18m
signup-db NodePort 10.96.65.255 <none> 1433:32266/TCP 19m
signup-web NodePort 10.103.241.188 <none> 8020:31872/TCP 19m
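In that listing the PORT(S) column reads <service-port>:<node-port>, and the node port is the one exposed on every node's IP. Extracting it and building the browse URL can be sketched as follows (10.211.55.27 is this post's example master address):

```shell
# Parse "8020:31872/TCP" into its node port, then build the URL to browse to.
ports="8020:31872/TCP"
node_port="${ports#*:}"        # strip up to the colon -> "31872/TCP"
node_port="${node_port%/*}"    # strip the protocol    -> "31872"
echo "http://10.211.55.27:${node_port}/app"
# -> http://10.211.55.27:31872/app
```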
It's the signup-web service you're interested in - in my case the node port is 31872. So now you can browse to the Kubernetes master node's IP address, on that node port, at the /app endpoint, and you'll see this:
It's a basic .NET demo app which has a sign-up form for a fake newsletter (currently running on .NET 4.7, but it originally started life as a .NET 2.0 app). Click on Sign Up and you can go and complete the form. The dropdowns you see are populated from reference data in the database, which means the web app - running in a Windows pod - is connected to the database - running in a Linux pod:
You can go ahead and fill in the form, and that inserts a row into the database. The SQL Server pod has a service with a node port too (32266 in my case), so you can connect a client like Sqlectron directly to the containerized database (credentials are sa / DockerCon!!!). You'll see the data you saved:
Next Steps
This is pretty cool. The setup is still a bit funky (and my scripts come with no guarantees :), but once you have a functioning cluster you can deploy hybrid apps using the same YAMLs you'll use in other clusters.
I'll be adding more hybrid apps to the GitHub repo, so stay tuned to @EltonStoneman on Twitter.