GitHub MCP Server, Docker and Claude Desktop

GitHub Actions is a fantastic workflow engine. Combine it with multi-stage Docker builds and you have a CI process defined in a few lines of YAML, which lives inside your Git repo.
I covered this in an episode of my container show - ECS-C2: Continuous Deployment with Docker and GitHub on YouTube
You can use GitHub's own servers (in Azure) to run your workflows - they call them runners and they have Linux and Windows options, with a bunch of software preinstalled (including Docker). There's an allocation of free minutes with your account which means your whole CI (and CD) process can be zero cost.
The downside of using GitHub's runners is that every job starts with a fresh environment. That means no Docker build cache and no pre-pulled images (apart from these Linux base images on the Ubuntu runner and these on Windows). If your Dockerfiles are heavily optimized to use the cache, you'll suddenly lose all that benefit because every run starts with an empty cache.
You have quite a few options here. Caching Docker builds in GitHub Actions: Which approach is the fastest? 🤔 A research by Thai Pangsakulyanont gives you an excellent overview:
None of those will work if your base images are huge.
The GitHub Actions cache is only good for 5GB so that's out. Pulling from remote registries will take too long. Image layers are heavily compressed, and when Docker pulls an image it extracts the archive - so gigabytes of pulls will take network transfer time and lots of CPU time (GitHub's hosted runners only have 2 cores).
This blog walks through the alternative approach, using your own infrastructure to run the build - a self-hosted runner. That's your own VM which you'll reuse for every build. You can pre-pull whatever SDK and runtime images you need and they'll always be there, and you get the Docker build cache optimizations without any funky setup.
Self-hosted runners are particularly useful for Windows apps, but the approach is the same for Linux. I dug into this when I was building out a Dockerized CI process for a client, and every build was taking 45 minutes...
This is all surprisingly easy. You don't need any special ports open in your VM or a fixed IP address. The GitHub docs to create a self-hosted runner explain it all nicely; the approach is basically:
In the Settings...Actions section of your repo on GitHub you'll find the option to add a runner. GitHub supports cross-platform runners, so you can deploy to Windows or macOS on Intel, and Linux on Intel or Arm:
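The registration itself is just a couple of commands on the VM. Here's a sketch for a Linux runner - the version number, URL, and token below are placeholders, so copy the exact commands from your repo's "Add runner" page:

```shell
# Download and unpack the runner agent (version is a placeholder -
# use the one shown on the "Add runner" page)
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.300.2/actions-runner-linux-x64-2.300.2.tar.gz
tar xzf actions-runner-linux-x64.tar.gz

# Register against your repo with a short-lived token from the same page,
# and add a label you can target with runs-on in your workflow
./config.sh --url https://github.com/<org>/<repo> --token <REGISTRATION_TOKEN> --labels docker

# Run interactively to test, then install as a service so the runner
# comes back automatically when the VM starts
./run.sh
# sudo ./svc.sh install && sudo ./svc.sh start
```

Installing the agent as a service is what makes the start/stop pattern below work: when the workflow boots the VM, the runner daemon comes online without any manual steps.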
That's all straightforward, but you don't want a VM running 24x7 to provide a CI service you'll only use when code gets pushed, so here's the good part: you'll start and stop your VM as part of the GitHub workflow.
My self-hosted runner is an Azure VM. In Azure you only pay for the compute when your VM is running, and you can easily start and stop VMs with az, the Azure command line:
# start the VM:
az vm start -g ci-resource-group -n runner-vm

# deallocate the VM - deallocation means the VM stops and we're not charged for compute:
az vm deallocate -g ci-resource-group -n runner-vm
It's easy enough to add those start and stop steps in your workflow. You can map dependencies so the build step won't happen until the runner has been started. So your GitHub action will have three jobs: one to start the runner VM, one to run the build on it, and one to deallocate the VM when the build is done.
You'll need to create a Service Principal and save the credentials as a GitHub secret so you can log in with the Azure Login action.
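Creating the Service Principal is a one-off az command. This is a sketch - the name is hypothetical and the subscription ID is a placeholder - but the --sdk-auth JSON output is the format the azure/login action expects:

```shell
# Create a Service Principal scoped to just the CI resource group,
# so the workflow can only manage VMs in that group
az ad sp create-for-rbac \
  --name "gha-runner-sp" \
  --role contributor \
  --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/ci-rg \
  --sdk-auth

# Paste the JSON this prints into a repo secret named AZURE_CREDENTIALS:
# Settings > Secrets and variables > Actions > New repository secret
```

Scoping the principal to the resource group rather than the whole subscription limits the blast radius if the secret ever leaks.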
The full workflow looks something like this:
name: optimized Docker build

on:
  push:
    paths:
      - "docker/**"
      - "src/**"
      - ".github/workflows/build.yaml"
  schedule:
    - cron: "0 5 * * *"
  workflow_dispatch:

jobs:
  start-runner:
    runs-on: ubuntu-18.04
    steps:
      - name: Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Start self-hosted runner
        run: |
          az vm start -g ci-rg -n ci-runner

  build:
    runs-on: [self-hosted, docker]
    needs: start-runner
    steps:
      - uses: actions/checkout@master
      - name: Build images
        working-directory: docker/base
        run: |
          docker-compose build --pull

  stop-runner:
    runs-on: ubuntu-18.04
    needs: build
    steps:
      - name: Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Deallocate self-hosted runner
        run: |
          az vm deallocate -g ci-rg -n ci-runner --no-wait
Here are the notable points:
an on-push trigger with path filters, so the workflow will run when a push has a change to source code, or the Docker artifacts or the workflow definition
a scheduled trigger so the build runs every day. You should definitely do this with Dockerized builds. SDK and runtime image updates could fail your build, and you want to know that ASAP
the build job won't be queued until the start-runner job has finished. It will stay queued until your runner comes online - even if it takes a minute or so for the runner daemon to start. As soon as the runner starts, the build step runs.
This build was for a Windows app that uses the graphics subsystem so it needs the full Windows Docker image. That's a big one, so the jobs were taking 45-60 minutes to run every time - no performance advantage from all my best-practice Dockerfile optimization.
With the self-hosted runner, repeat builds take 9-10 minutes. Starting the VM takes 1-2 minutes, and the build stage takes around 5 minutes. If we run 10 builds a day, we'll only be billed for 1 hour of VM compute time.
Your mileage may vary.
At this point, every developer has probably heard about GitHub Copilot. Copilot has quickly become an indispensable tool for many developers, helping novice to seasoned developers become more productive by improving overall efficiency and expediting learning.
Today, we are thrilled to announce that we are joining GitHub’s Partner Program and have shipped an experience as part of their limited public beta.
At Docker, we want to make it easy for anyone to reap the benefits of containers without all the overhead of getting started. We aim to meet developers wherever they are, whether that's their favorite editor, their terminal, Docker Desktop, or now, even GitHub.
In short, the Docker extension for GitHub Copilot (@docker) is an integration that extends GitHub Copilot’s technology to assist developers in working with Docker.
This initial scope for the Docker extension aims to take any developer end-to-end, from learning about containerization to validating and using generated Docker assets for inner loop workflows (Figure 1). Here’s a quick overview of what’s possible today:
Generate Dockerfiles, docker-compose.yml, and .dockerignore files tailored to your project's languages and file structure: "@docker How would I use Docker to containerize this project?" From there, you can quickly jump into an editor, like Codespaces, VS Code, or JetBrains IDEs, and start building your app using containers. The Docker Copilot extension currently supports Node, Python, and Java-based projects (single-language or multi-root/multi-language projects).
The Docker extension for GitHub Copilot is currently in a limited public beta and is accessible by invitation only. The Docker extension was developed through the GitHub Copilot Partner Program, which invites industry leaders to integrate their tools and services into GitHub Copilot to enrich the ecosystem and provide developers with even more powerful, context-aware tools to accelerate their projects.
Developers invited to the limited public beta can install the Docker extension on the GitHub Marketplace as an application in their organization and invoke @docker from any context where GitHub Copilot is available (for example, on GitHub or in your favorite editor).
During the limited public beta, we’ll be working on adding capabilities to help you get the most out of your Docker subscription. Look for deeper integrations that help you debug your running containers with Docker Debug, fix detected CVEs with Docker Scout, speed up your build with Docker Build Cloud, learn about Docker through our documentation, and more coming soon!
We’re excited to continue expanding on @docker during the limited public beta. We would love to hear if you’re using the Docker extension in your organization or are interested in using it once it becomes publicly available.
If you have a feature request or any issues, we invite you to file an issue on the Docker extension for GitHub Copilot tracker. Your feedback will help us shape the future of Docker tooling.
Thank you for your interest and support. We’re excited to see what you build with GitHub and @docker!
Last updated July 2023
As part of my Automate Your Deployments on Kubernetes with GitHub Actions and Argo CD GitOps course, we spend a week learning GitHub Actions (GHA) for the typical software lifecycle of build, test, and deploy.
Beyond the slides and Zoom workshops in that course, here's an organized list of resources I provide for reference and further learning.
I've never needed this, but if you need to interact with the runner shell to debug your job, this is by far the most popular Action to enable an SSH server on it.
Native SSH debugging is coming later in 2023 to GitHub Enterprise.
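The action usually meant here is mxschmitt/action-tmate. A minimal sketch of a debug-on-demand setup looks something like this (the input name is hypothetical; the if condition keeps the SSH session out of normal runs):

```yaml
# Manually-triggered workflow that only opens a tmate SSH session
# when debug_enabled is ticked in the run dialog
on:
  workflow_dispatch:
    inputs:
      debug_enabled:
        type: boolean
        default: false

jobs:
  debug:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup tmate session
        if: ${{ inputs.debug_enabled }}
        uses: mxschmitt/action-tmate@v3
```

The tmate step prints an SSH connection string in the job log; the job then blocks until you disconnect, so gate it carefully or it will burn runner minutes.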
Once you start using reusable workflows (which I always use for most things in a software project), some specific errors and issues may crop up.
- The called workflow must declare the workflow_call event, or it won't be callable.
- If the called workflow lives in another private repo in your organization, go to Settings > Actions > General > Access in that repo and enable "Accessible from repositories in the organization".

awesome-runners - "A curated list of awesome self-hosted GitHub Action runners in a large comparison matrix"
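To make the reusable-workflow points concrete, here's a minimal sketch of a called workflow and its caller - file names, repo path, and the input are all hypothetical:

```yaml
# .github/workflows/reusable-build.yaml - the called workflow.
# It must declare the workflow_call trigger to be callable at all.
on:
  workflow_call:
    inputs:
      image-name:
        required: true
        type: string

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - run: docker build -t ${{ inputs.image-name }} .

# A caller in another workflow references it by repo path and ref:
# jobs:
#   call-build:
#     uses: my-org/my-repo/.github/workflows/reusable-build.yaml@main
#     with:
#       image-name: my-app
```

Note the caller uses `uses:` at the job level, not the step level - that's the mistake behind many "workflow not found" errors.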
What do we need to do to enable everyone to self-serve themselves when infrastructure is concerned? How can we create, operate, and destroy Kubernetes clusters without the need to involve anyone else? How can we customize the experience to fulfill the needs of our users? How can we provide a good user experience through services? How can we ensure that we are following best practices? In this video, we will explore how to create an Internal Developer Platform (IDP) for Infrastructure by combining Argo CD, Crossplane, Port, and GitHub Actions.
▬▬▬▬▬▬ Additional Info
▬▬▬▬▬▬
Gist with the commands: https://gist.github.com/vfarcic/f120100b5a00167c5f2c2778082cf4a0
DevOps MUST Build Internal Developer Platform (IDP): https://youtu.be/j5i00z3QXyU
How To Create A Complete Internal Developer Platform (IDP)?: https://youtu.be/Rg98GoEHBd4
How To Shift Left Infrastructure Management Using Crossplane Compositions: https://youtu.be/AtbS1u2j7po
▬▬▬▬▬▬ Sponsorships
▬▬▬▬▬▬
If you are interested in sponsoring this channel, please use https://calendly.com/vfarcic/meet to book a timeslot that suits you and we'll go over the details. Or feel free to contact me over Twitter or LinkedIn (see below)
▬▬▬▬▬▬ Livestreams & podcasts
▬▬▬▬▬▬
Podcast: https://www.devopsparadox.com/
Live streams: https://www.youtube.com/c/DevOpsParadox
▬▬▬▬▬▬ Contact me
▬▬▬▬▬▬
Follow me on Twitter: https://twitter.com/vfarcic
Follow me on LinkedIn: https://www.linkedin.com/in/viktorfarcic/
Most of us switched to GitOps (Argo CD, Flux, etc.) to manage applications in Kubernetes clusters. Why should we limit ourselves only to those? If GitOps is the way to go, why not use it to, for example, manage GitHub repositories, branches, files, secrets, and other GitHub “stuff”?
This video explores how to do just that with Argo CD and Crossplane.
▬▬▬▬▬▬ Additional Info
▬▬▬▬▬▬
Gist with the commands: https://gist.github.com/vfarcic/c2927af7318bbdccdec62bf6577e0840
How To Create A Complete Internal Developer Platform (IDP)?: https://youtu.be/Rg98GoEHBd4
It’s time to build an internal developer platform (IDP) with Crossplane, Argo CD, SchemaHero, External Secrets Operator (ESO), GitHub Actions, Port, and a few others.
▬▬▬▬▬▬ Additional Info
▬▬▬▬▬▬
Gist with the commands: https://gist.github.com/vfarcic/78c1d2a87baf31512b87a2254194b11c
DevOps MUST Build Internal Developer Platform (IDP): https://youtu.be/j5i00z3QXyU
How To Create A “Proper” CLI With Shell And Charm Gum: https://youtu.be/U8zCHA-9VLA
Crossplane – GitOps-based Infrastructure as Code through Kubernetes API: https://youtu.be/n8KjVmuHm7A
How To Shift Left Infrastructure Management Using Crossplane Compositions: https://youtu.be/AtbS1u2j7po
Argo CD – Applying GitOps Principles To Manage A Production Environment In Kubernetes: https://youtu.be/vpWQeoaiRM4
How To Apply GitOps To Everything – Combining Argo CD And Crossplane: https://youtu.be/yrj4lmScKHQ
SchemaHero – Database Schema Migrations Inside Kubernetes: https://youtu.be/SofQxb4CDQQ
Manage Kubernetes Secrets With External Secrets Operator (ESO): https://youtu.be/SyRZe5YVCVk
Github Actions Review And Tutorial: https://youtu.be/eZcAvTb0rbA
GitHub CLI (gh) – How to manage repositories more efficiently: https://youtu.be/BII6ZY2Rnlc
How To Build A UI For An Internal Developer Platform (IDP) With Port?: https://youtu.be/ro-h7tsp0qI