
Portainer + Talos Linux

Portainer now manages Talos Linux and Kubernetes

🍿 YouTube Video


At KubeCon London, I took a workshop on using Portainer to deploy and manage Talos Linux and Kubernetes together. Portainer now manages Talos Linux, and I got to spend some time with these two tools as a one-stop shop: deploying the unique Talos OS (from Sidero Labs) and setting up a Kubernetes cluster with just a few clicks inside Portainer.

The Portainer Workshop: https://academy.portainer.io/yachtops/

Talos Linux: https://www.talos.dev/

Get CNDO Weekly

Cloud Native DevOps education. Bestselling courses, live streams, and podcasts on DevOps, platform engineering, and containers, from a Docker Captain and Cloud Native Ambassador.

Email sent! Check your inbox to complete your signup.

No spam. Unsubscribe anytime.

👀 In case you missed a newsletter, read them at bret.news


On the stream: Heroku AI and more

🔴 Heroku AI. Thursday, May 22nd, 1pm US Eastern (UTC -4)


Heroku just launched their AI inference hosting and MCP tooling support. Oh, and they now run on Kubernetes. We're going to demo all of it.
(click the below video and hit the "🔔 notify me" so you won't miss it)

Note that most/all of these streams should make it into edited podcast episodes if you miss the stream.

🔴 Keycloak. Thursday, May 29th

"Open Source Identity and Access Management"

🔴 Solo.io. Thursday, June 5th

"AI gateway proxies"

🔴 Daytona. Thursday, June 12th

"Infrastructure for AI development"

🔴 Cloud Native Buildpacks. Thursday, June 26th

"Dockerfiles vs. Buildpacks"


Docker AI, Model Runner, and MCP Toolkit


I've been spending an increasing amount of time using Docker's various AI features lately. Here's the easy guide to each and how to get started:

Note: these tools are changing fast, and I expect them to get better over time. This is written in May 2025, and if I come back to update it, I'll indicate changes.

Three AI features launched in early 2025

Ask Gordon: Docker-Desktop-specific basic AI chat for helping with Docker tools and commands.

Docker Model Runner: Runs a set of popular, free LLMs from Docker's AI model catalog for use with your local (and eventually server) apps.

Docker MCP Toolkit: A Docker Desktop extension + MCP Server tool catalog that lets you run MCP-enabled tools for giving LLMs new abilities.

AI Feature #1: Ask Gordon AI chatbot

TL;DR: Gordon is a free AI to help you with Docker tasks, but if you're paying for other AI chat or IDE tools, you'll likely skip Gordon AI for what's built into those.

At its core, this is a Docker-focused ChatGPT, built into the Docker Desktop GUI and CLI. It gives results on par with OpenAI's latest models, and also adds documentation links to its answers. Think of it as a more Docker-focused, up-to-date chatbot than what you'd likely get with the big foundation models. It continues to see improvement and has several advantages over general AI chatbots. The question is, will you use it in addition to other LLMs you're already using?

Pros:

  • It can read/write files on disk and execute Docker commands, if you let it.
  • It can run from the Docker GUI or the docker ai CLI.
  • It now has MCP Toolkit access, so it can become aware of your local Docker and Kubernetes resources, and even control any MCP tools you add as Docker Desktop extensions. I plan to create a video on how this works. Stay tuned.

Cons:

  • Doesn't store history of chats or allow you to have more than one chat thread.
  • Slow. I believe it's using an OpenAI model on the backend, but it feels slower than the default ChatGPT models.
  • Doesn't answer questions outside Docker or dev stuff.

In short, it's a great free AI to help you with Docker tasks, but if you're paying for ChatGPT/Claude/Windsurf/Cursor, you'll likely skip Gordon AI for what's built into your dev tools. The MCP feature to see container resources and add abilities on-the-fly to 3rd party tools is great, if you're not using MCP in your IDE already.

A note on the name Gordon...

Why is the AI called Gordon? Well, Gordon is the name of Docker's real-life turtle mascot (with her own Twitter account), which sadly died a few years ago (great story and pics from her human family). She'll live on as Docker's AI and mascot.


Images copyright of Docker, Inc.


AI Feature #2: Docker Model Runner (DMR)


A new docker model command that lets you pull popular LLMs and run them locally, for access from your local containers or the host OS.
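A minimal sketch of trying DMR from the terminal. Assumptions flagged up front: ai/smollm2 is just one example model from Docker's catalog, and the model-runner.docker.internal endpoint is the container-facing API Docker documents; the commands are guarded so they only run where the model CLI exists.

```shell
# Minimal DMR sketch; guarded so it only runs where the docker model plugin exists.
if command -v docker >/dev/null 2>&1 && docker model list >/dev/null 2>&1; then
  docker model pull ai/smollm2            # fetch a small model from Docker's catalog
  docker model run ai/smollm2 "Say hi"    # one-shot prompt; omit the prompt for interactive chat
fi
# DMR also exposes an OpenAI-compatible API; from inside a container it's at:
DMR_BASE="http://model-runner.docker.internal/engines/v1"
echo "$DMR_BASE/chat/completions"
```

Because the API is OpenAI-compatible, existing SDKs and chat UIs can point at that base URL instead of OpenAI's.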

Below is the 3-minute version explaining how it works:

Below is a longer version explaining the architecture, sub-commands, and how to run a ChatGPT clone locally with my example at https://bret.show/dmr-quickstart

🎧 Podcast audio-only version of DMR

Ep 180: Docker Model Runner

A podcast episode I recorded explaining DMR, when you would use it, and how it works.

AI Feature #3: MCP Toolkit


This is the most recent release (I don't have videos yet) and could end up being the biggest deal. It's also the most confusing if you're not fully versed in how AI has evolved in the last 7 months. A few things you need to know before digging into this feature:

Agentic AI

In 2024, the term "Agentic AI" became the way we describe an LLM not just writing text, but actually performing work on our behalf. Running commands in the shell was the first "tool" many of us saw an LLM use, and suddenly, just months later, nearly every code editor and chatbot has access to hundreds of "tools."

Model Context Protocol

Second, the MCP (Model Context Protocol) was released by Anthropic as their idea for how models (LLMs) could access tools and data. It allows a model to understand the functions that tools could perform, and, most importantly, execute those functions (or retrieve the data in a human-readable way). I hate to call MCP a standard yet, but its popularity took off so fast as the only universal way to get models and tools working together, it's the de facto standard just 6 months later. Every tool out there, including AWS's APIs, GitHub's APIs, Kubernetes, IDEs, and many CLIs, already have MCPs. Think of MCP as a proxy between the LLM and the tool/API you want it to control. It allows the LLM to interact with any external tool/API in a common way.


In practice, the idea of Agentic AI and the invention of MCP mean we can all of a sudden ask a chatbot to "use curl 20 times to check the response of docker.com, then average those responses, then compare that to the storage averages from my SQL table, and then store the resulting difference in a Notion page." The LLM, assuming it has access to MCP servers for curl, a SQL db, and Notion's API, will be able to perform that work in a series of steps based on the single prompt we typed.

In a world where, just a few months ago, we would have had to write a program to do that for us, or at best spend hours in Zapier getting the right workflow to fire... having it all replaced by a 10-second prompt written on the fly is hard to believe.

But MCP is here, and we're about to be hit by a flood of real-world use cases (and products to buy) that use LLMs + MCP to solve a near-unimaginable amount of workflow scenarios.

And I'm SOOOO here for it. Expect me to share A LOT about this huge shift in capability for devs, CI, CD, operations, SRE, platform building, and more.

Gordon AI + MCP Toolkit

How to get started using an LLM with MCP tool calling

To get all these AI things working together, you need something with access to an LLM. Gordon is an "MCP Client" that lets you chat with a model in the Docker Desktop Dashboard, so we'll use it. Then we need to give it access to the MCP tools so it can do more than just answer questions:

  1. Add the MCP Toolkit Extension to the Dashboard. This lets us run MCP-enabled tools as containers behind an MCP Server proxy that Docker Desktop runs for us.
  2. Open the extension and enable some of your favorite tools. Notice how you can click the little blue box to see a list of tools (capabilities) that the MCP Server supports. These are the actions you can tell an LLM to take. I'll add the GitHub one after giving it a PAT to access the GitHub API.
  3. Now we need to connect those MCP Servers to an "MCP Client", which is something with an LLM, like Cursor or Claude Desktop; the easiest option is to enable Gordon AI. Gordon is now an MCP Client and comes out of the box knowing how to control Docker, but it can also have other tools in its toolbox. Click the little tool hammer in the Gordon chat box and turn on "MCP Catalog", which gives it all the tools you've enabled in the MCP Toolkit extension.
  4. Now you can ask Gordon AI to do something with GitHub.

List all the open PRs for my repo bretfisher/docker-mastery-for-nodejs


please merge PR #356 in repo bretfisher/docker-mastery-for-nodejs


This reminds me of the "ChatOps" hype a decade ago, only now we have way more flexibility and reach. I can't wait to spend more time reducing toil with MCP tools.

That's it for this week!

👀 In case you missed the last newsletter

Read it here.


Docker Bake and the Travel Backlog


It's been MONTHS since the last email. Oops! Several drafts never made it out, and now I've got a backlog. You'll be seeing more of me in your inbox. Hopefully, that's a good thing.

My April of Travel Selfies

I spent most of April traveling to London for KubeCon (recap on that coming soon), then a "business retreat" with my fellow small-business owners (always enlightening and encouraging), and finally back home for the summer!

🚀 Other Big Things

I've got things in the works that aren't announced yet, such as the idea for a new AI DevOps podcast, new self-hosted courses, and lots more YouTube tutorial-style videos. I'll be breaking all that down in upcoming newsletters.


👨‍🍳 Docker Bake is ready to replace Docker Build

YouTube Video:

Audio Podcast:

Ep 178: Docker Build the best way with Docker Bake

The Docker Bake build tool reached general availability, and I'm excited about what this means for creating reproducible builds and automation that can run anywhere. I break down some of the features and benefits, and walk through some examples.
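If you haven't seen Bake yet, build targets live in a docker-bake.hcl file. Here's a minimal sketch (the image name, tags, and platforms are hypothetical):

```hcl
group "default" {
  targets = ["app"]
}

target "app" {
  context    = "."
  dockerfile = "Dockerfile"
  tags       = ["username/my-app:latest"]
  platforms  = ["linux/amd64", "linux/arm64"]
}
```

Running `docker buildx bake` builds everything in the default group, and `docker buildx bake --print` shows the fully resolved configuration without building anything.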

🎧 Podcast Release on What's Coming in 2025

Ep 179: What's Coming in 2025?

This episode is about what I'm seeing and doing right now, and my plans for the rest of the year. There are three parts. First, I talk about what's about to happen for me over the next few weeks as I head to London for KubeCon (since this newsletter is releasing the week of KubeCon, feel free to fast-forward through this). Then, what I'm planning to change in this podcast, as well as my other content on YouTube, for the rest of the year. And lastly, I talk about some industry trends I'm seeing that will, I think, force me to change the format of this show. I recorded the episode on March 22, 2025.


OCI Artifacts. The story so far

👨‍💻 OCI Registries for Everything?


Docker didn’t just invent the modern container; they also created the artifact store for container images, which they called the registry. The registry code was quickly open-sourced as "distribution" in 2014 and eventually standardized as the OCI Distribution Specification in 2018, alongside the OCI image and runtime specifications.

GitHub - distribution/distribution: The toolkit to pack, ship, store, and deliver container content

The Original Image Registry

We all know this as various casual terms like "Docker Registry", "OCI Registry", or just an "Image Registry." They tend to be "set and forget" file storage systems that are the bridge between source code, CI, and your servers that run containers.

You may not have thought much about the registry you’re using, but that's about to change (if it hasn't already for you).

OCI-compatible registries (in all their forms) are everywhere, and many organizations use multiple registries in the cloud, on-prem, and in their CI. No big deal; OCI registries are just a new type of artifact store for a new kind of artifact. Many of us already have one or more artifact stores in our organization. But now we have an N+1 problem where we're expected to run an artifact store for every package format we want to support.

Doh! We just broke a rule of mine: "Don't install a new solution similar to the old solution without a plan to phase out the old." - Says Me

Many of us have more of these "file and metadata" storage systems floating around. The core problem is that we were building and using artifact-specific storage systems. Sure, a few proprietary solutions run many different unrelated storage APIs on a single tool for all the various package and artifact types, but that's not ideal for us or the industry, and that's also not the "CNCF way." We need a single artifact storage standard flexible enough to be used by most of the tools we have today with artifact needs.

The OCI Distribution standard and our OCI registry products are likely a reasonable solution for more than just container images. Many tools are also starting to use OCI registries to store their non-image artifacts.

Helm was the tip of the iceberg

Likely, your first use of an OCI registry for something other than a container image was storing Helm charts in an OCI registry repo. Helm v1 and v2 had an old proprietary way of sticking charts in a web server; then, starting in Helm v3.5, we were given a new storage option of using a registry via Helm's new OCI artifact support.
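The basic Helm OCI workflow looks like this, as a sketch (the registry and chart names are hypothetical, and the commands are guarded so they only run if you have a helm binary and a chart directory handy):

```shell
CHART_REPO="oci://registry.example.com/charts"   # hypothetical registry
if command -v helm >/dev/null 2>&1 && [ -d ./mychart ]; then
  helm package ./mychart                         # produces mychart-0.1.0.tgz
  helm push mychart-0.1.0.tgz "$CHART_REPO"      # store the chart as an OCI artifact
  helm install myrelease "$CHART_REPO/mychart" --version 0.1.0
fi
```

Note there's no chart repo index to host anymore; the registry's tags and digests do that job.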

Many other tools also support this idea, including security tools storing SBOMs, image-signing, Kubernetes security profiles, OPA policies, and even non-container-related artifacts like WASM modules and Homebrew packages.

👉 How artifacts are changing the OCI registry

I rarely work with a team with only one container registry, much less a single artifact storage solution.

Well, I think we're on the cusp of the OCI (Docker) Registry becoming the one artifact and package storage system to rule them all... eventually.

As a refresher, the OCI registry you know and love is full of two main data objects: manifests (metadata) and layer blobs (for a container image, those are gzip tarballs). We've also got tags in the API that point to a manifest (which may itself point to another set of manifests if you're building multi-arch images), which then points to one or more layers. Here's a "full-featured" example of the data relationships from Brandon Mitchell's OCI Distribution 1.1 RC talk on how registries relate data objects today.

OCI Artifacts. The story so far
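You can explore these relationships yourself with buildx's imagetools. A sketch, assuming Docker with buildx and network access to a registry (the image name is just a common example):

```shell
# Inspect a multi-arch image: shows the index, per-platform manifests, and layers.
IMAGE="alpine:latest"
if command -v docker >/dev/null 2>&1 && docker buildx version >/dev/null 2>&1; then
  docker buildx imagetools inspect "$IMAGE"        # human-readable manifest tree
  docker buildx imagetools inspect "$IMAGE" --raw  # the raw index JSON itself
fi
```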

The beginning of the OCI artifact sprawl

Helm was one of the first teams and CLIs to use registries to store non-container image data at a large scale. Helm now has OCI registry support by default in recent versions. It took advantage of the fact that the OCI distribution standard allowed for different media types in the image layers, even if it wasn't a true container image layer. Here are the Helm docs on how to use your existing registries to store charts.


As more people wanted to put more things in their registry besides containers, we started to see conference talks and official proposals about various tools moving to support "OCI artifact" storage rather than their traditional storage.

There are two prominent use cases driving this evolution of container registries. 👉 The first is storing and connecting data related to a specific image (SBOM, CVE scan, image signing). 👉 The second is data objects semi-related (or completely unrelated) to containers (Helm, Tekton, Homebrew) that want to take advantage of the ubiquitous and content-addressable nature of OCI registries.

First use case: Image-manifest-adjacent artifacts

From SBOMs to image signing, there's a growing list of things directly related to a specific container image manifest. Driven primarily by the software supply chain security movement, many artifacts now relate to an image in a registry and need to be stored somewhere for reference. It turns out OCI registries are good places to store those too.

That's already happened, even before the OCI Distribution 1.1 spec was released in 2024. Although it's not ideal, some tools create the relationship between the image and the adjacent artifacts using various tricks like specially crafted image tags to connect the dots. Hopefully, all those tools are moving to the new OCI Artifact spec in OCI 1.1.

Docker's own image builder provides provenance attestations adjacent to the image it builds and will push those to the registry along with the image.

But, once we have all these signatures, scan reports, and SBOMs stored in the registry, how can we (more cleanly) find them with an official API to connect all the manifest pieces?

The solution to that problem is called the Referrers API, and it shipped with OCI Distribution 1.1 in 2024.
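A sketch of what calling the Referrers API looks like (the registry, repo, and digest here are hypothetical stand-ins). The response is an OCI image index whose manifests list every artifact that names this image as its subject, and you can filter by artifact type:

```shell
REGISTRY="registry.example.com"   # hypothetical registry
REPO="myteam/myapp"
DIGEST="sha256:3b7ebf540cd60769c993131195e796e715ff4abc37bd9a467603759264360664"
URL="https://$REGISTRY/v2/$REPO/referrers/$DIGEST"
echo "$URL"
if command -v curl >/dev/null 2>&1; then
  # e.g. append "?artifactType=application/spdx+json" to only list SBOMs
  curl -fsS "$URL" || true   # hypothetical host, so this fails outside the example
fi
```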

This had been in the works for a while, and I'll let OCI maintainer (and Docker Captain) Brandon Mitchell explain it better from a talk about the challenge of connecting all these new artifact types to an OCI image:

Modifying the Immutable: Attaching Artifacts to OCI Images - Brandon Mitchell, BoxBoat

2023 breakdown of the future Referrers API technicals. 44 minutes

Second use case: General artifact support

As we started deploying containers en masse, we needed other files and objects semi-related to those containers. Sure, we could always use S3 or Git, but you may not have access to those from production container clusters. Helm, Tekton, seccomp/selinux/apparmor, OPA/Gatekeeper, Flux, Wasm, Compose, and Pulumi all fall into this category, and are in various levels of supporting OCI registries as a distribution model.

Then there's the wild-west category of artifacts: "anything that needs common HTTP storage for distribution." Packaged artifacts often need the kinds of guarantees that OCI registries make, such as SHA-hashing everything and making data universally content-addressable. The Homebrew package manager is a great example: it switched to the GitHub Container Registry in 2021 to serve over 50 million packages a month. Here's an example of returning some metadata about a Homebrew package image. It's not perfect (notice the mediaType says it's an image), but it clearly works.

{
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:3b7ebf540cd60769c993131195e796e715ff4abc37bd9a467603759264360664",
      "size": 1977,
      "platform": {
        "architecture": "amd64",
        "os": "darwin",
        "os.version": "macOS 13.0"
      },
      "annotations": {
        "org.opencontainers.image.ref.name": "3.40.1.ventura",
        "sh.brew.bottle.digest": "d3092d3c942b50278f82451449d2adc3d1dc1bd724e206ae49dd0def6eb6386d",
        "sh.brew.tab": "{\"homebrew_version\":\"3.6.16-97-ge76c55e\",\"changed_files\":[\"lib/pkgconfig/sqlite3.pc\"],\"source_modified_time\":1672237605,\"compiler\":\"clang\",\"runtime_dependencies\":[{\"full_name\":\"readline\",\"version\":\"8.2.1\",\"declared_directly\":true}],\"arch\":\"x86_64\",\"built_on\":{\"os\":\"Macintosh\",\"os_version\":\"macOS 13.0\",\"cpu_family\":\"penryn\",\"xcode\":\"14.1\",\"clt\":\"14.1.0.0.1.1666437224\",\"preferred_perl\":\"5.30\"}}"
      }
    }
  ]
}

Part of the Homebrew OCI manifest JSON for sqlite

Unlike my "first use case" above, these generic artifact types will likely not need the Referrers API to indicate a relationship to a container image directly, though they may find some internal benefit by using the Referrers API for other manifest-to-manifest relationships. That's yet to be seen.

But what these tools do need is a clear path for how to officially store their various artifact types in a registry. We didn't really have that before, and it always felt a bit hacky to overload the existing registry metadata objects of image and layers to store non-image and non-layer data.

As a workaround, years ago the ORAS project was created and eventually accepted by the CNCF. "OCI Registry As Storage" is both a CLI and a Go library that lets you push/pull any data type you want into an existing OCI registry. It's quite popular and is used in many other tools and cloud registries to store artifacts.
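A sketch of the ORAS CLI pushing an arbitrary file as an artifact (the registry name and artifact type are hypothetical, and the commands are guarded so they only run where oras is installed):

```shell
ARTIFACT_TYPE="application/vnd.example.hello+txt"   # hypothetical custom type
if command -v oras >/dev/null 2>&1; then
  echo "hello artifact" > hello.txt
  oras push registry.example.com/myrepo/hello:v1 \
    --artifact-type "$ARTIFACT_TYPE" hello.txt:text/plain
  oras pull registry.example.com/myrepo/hello:v1    # fetches hello.txt back
fi
```

The `file:mediatype` syntax is how ORAS declares what each blob is, so a registry can store (and later serve) things that were never container layers.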

This idea of overloading the existing OCI Distribution 1.0 spec for "general artifact storage" had side effects. Mainly, support beyond container images doesn't work everywhere because some registries don't support various changes to the manifest. Also, many registry UIs don't know how to handle displaying these data types, often resulting in weird-looking image tags displayed, seeing unknown/unknown types, or the artifacts not being displayed in a UI at all.

The ORAS, CNCF, and OCI teams have a vision though. And this talk by Docker's Steve Lasker (who worked at Azure before Docker) is a great story of all the needs we have for artifacts and what they are doing about it. Note that some of the stuff about new features near the end of this 2022 talk is outdated in the implementation details of the OCI Distribution 1.1 release, but it's still good to see the examples.

Distributing Supply Chain Artifacts with OCI & ORAS Artifacts

2022 KubeCon EU overview, vision, and walkthrough of OCI artifacts. 40 minutes

In the OCI Distribution 1.1 release of the image and distribution specs, they added a few additional fields to the manifest to improve on the drawbacks of the previous generic artifact implementation, including an artifactType field that lets tool creators define their own type, which should be supported by any registry that updates to OCI Distribution 1.1.
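As a sketch, a hypothetical SBOM stored as an OCI 1.1 artifact manifest might look like this (the artifactType value is made up, and digests are elided except for the spec's well-known empty-config descriptor, which really is the digest of `{}`):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "artifactType": "application/vnd.example.sbom.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.empty.v1+json",
    "digest": "sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a",
    "size": 2
  },
  "layers": [
    { "mediaType": "application/spdx+json", "digest": "sha256:...", "size": 12345 }
  ],
  "subject": {
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "digest": "sha256:...",
    "size": 1977
  }
}
```

The subject field is what the Referrers API indexes: it points back at the image manifest this SBOM describes.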

How can you take advantage of OCI artifacts now and be ready for future OCI changes?

👉 Ensure you’re ready for OCI's future

So, what should we registry users do with the changes in OCI specs? In short, nothing. All your tools should work today and will just get better with 1.1. 😅

Remember that OCI is just a specification. You fine readers don’t use OCI directly; you use a registry or CLI that adheres to the OCI Distribution specification. For years, that has held steady at a ~1.0 release. Over the years of the OCI working toward this update, there have been many meetings, working groups, and PRs. Most tools implementing OCI specs have been following those plans and working to add planned changes in preparation for a General Availability release of the specs.

Since 1.1 is a minor release, and the significant changes are all done with backward compatibility in mind, I would not expect any existing tools to break if you’re using a 1.0 tool with a 1.1 registry or vice versa.

How can I use OCI artifacts today?

Remember the “two types” of artifacts I described in last week's newsletter?

The first type, “image-related artifacts” (SBOMs, provenance, scans, and signatures), has had an inconsistent implementation across OCI registries and will likely need an update before it works well everywhere with proper registry-aware references (the Referrers API and the artifactType metadata). These tools include Cosign, Notation, BuildKit, Trivy, Syft, Grype, etc., but the idea here is that we won’t need an additional tool like the ORAS CLI to upload these artifacts in a "1.1 spec-friendly way".

I hope these tools that already support container images will get updated to package their output into an OCI Distribution 1.1 artifact and co-locate them with the image they came from for easy finding.

But the 2nd type, the “unrelated artifacts” that don’t need special registry metadata to link two artifacts together… are the ones we can use freely today! Various registries have added UI support for specific artifact types, including Docker Hub identifying a few types by name and Harbor identifying types by their brand logo.

Here is a partial list of those "2nd type" tools

In my experience, most hosted registries allow these types, but not always. YMMV.

🐳 For the pure Docker fans out there, you can…

  1. Run Wasm Modules on an existing wasi/wasm runtime without needing a full container image.
  2. docker compose publish username/my-compose-app will store Docker Compose YAML as its own OCI artifact. Then, use docker compose -f oci://docker.io/username/my-compose-app:latest up to run that compose app without needing to clone a repo.
  3. Push and pull Docker Volumes with the Docker Desktop extension or my simple shell script.

⎈ For the Kubernetes clusters and workloads, you can…

  1. Store Helm Charts in an OCI registry directly from the helm CLI.
  2. Deploy Gatekeeper, Kyverno, and Kubewarden policies to Kubernetes via OCI registries.
  3. Use the Kubernetes Security Profiles Operator (SPO) to deploy seccomp, SELinux, and AppArmor profiles to your clusters through SPO’s built-in support for OCI artifacts.
  4. For GitOps (maybe we call it ArtifactOps in this case), you can use Flux to pull your desired state from registries rather than git. Read and watch through the progress the Flux team has made so far in implementing OCI artifact support. They even have a cheatsheet.
  5. For Tekton, you can use OCI artifacts for Bundles and Chains.
  6. Control Falco rules with falcoctl on Linux and Kubernetes by distributing them through OCI artifacts.

🌍 And then there’s everything else…

  1. I’ve mentioned Homebrew as an early adopter of OCI artifacts for all your tools, serving through GitHub Container Registry at a rate of over 500 terabytes monthly. Here’s a breakdown of how their packages look in the manifest metadata.
  2. Store Dev Container Features (devcontainer.json) as OCI artifacts.
  3. Use IBM's package manager to package and deploy software to z/OS.
  4. Maybe someday, rpm's via OCI artifact.
  5. Check out the list of tools using ORAS. Here's an incomplete list of tools conforming to the Distribution spec.

😒 Swarm's Future and 😎 Docker Mastery updates

🎧 Podcast Release

Ep 177: Is Swarm at EOL? (audio podcast) (YouTube video)

I've been a big fan of Swarm since it launched over a decade ago, and I've made multiple courses on it that people find useful. We recently got some news out of Mirantis that might be bad news. So, I talked about it on my live stream.

NOTE: Since this video was published, I've had multiple people from Docker and Mirantis reach out telling me that Swarm still has a future, so I'm getting more details and hope to make an update to this prediction above. Stay tuned!


👋 New Docker Mastery Videos on ENTRYPOINT & SHELL 🤩


We launched a new Section in Docker Mastery - my first major update in 2025.

If you're already a Docker Mastery student, jump into the new section now.

If you’d like to use my coupons to buy Docker Mastery or any of my courses, check out bret.courses

In this new Section, I dig deep into how ENTRYPOINT works in Dockerfiles, and how it works with both CMD and SHELL statements for making custom CLI tools and advanced container startup scripts. You’ll also learn how to change shells in Docker builds with SHELL, and how Shell and Exec forms work in everything.

This section comes with a quiz, a custom cheat sheet on “Buildtime vs. Runtime” and “Overwrite vs. Additive,” tons of resource links, and two assignments. I’ve also improved the production quality, with more visuals, diagrams, and highlights to help the new knowledge stick!

Here’s an example of one of the cheat sheets you can download in the course resources.

Describing the different types of Dockerfile statements. Buildtime, Runtime, Overwrite, and Additive.

👀 In case you missed the last newsletter

Read it here.


🥶 It's cold outside. Stay in and catch up on my releases

🔴 Thursday's live show: Udemy Course Q&A


This week is my course-focused ask-me-anything week. We'll focus on your cloud native DevOps and course questions: Containerization, orchestration, automation, infrastructure, and more.

Head to the YouTube page to click "Notify me" to get your reminder.


🎧 Podcast Release

Ep 175: Aikido: Is a Single DevSecOps Tool Possible? (audio version)

In this podcast, the Aikido security cofounders discuss implementing better security in my GitHub repos, container images, and infrastructure.

Willem Delbare and Roeland Delrue discuss Aikido's security tool consolidation platform designed specifically for smaller teams and solo DevOps practitioners.
We explore how Aikido addresses the growing challenges of software supply chain security by bringing together various security tools - from CVE scanning to cloud API analysis - under a single, manageable portal. Unlike enterprise-focused solutions, Aikido targets the needs of smaller teams and individual DevOps engineers who often juggle multiple responsibilities. During the episode, they demonstrate Aikido's capabilities using my sample GitHub organization, and show how teams can implement comprehensive security measures without managing multiple separate tools.

🎧 Podcast Release

Ep 176: Best of Cloud Native 2024 (audio version)

In this latest podcast, Nirmal and I reunite for our traditional annual Holiday Special episode of breaking down the most significant developments in cloud native from 2024 and share predictions for 2025.

We touch on infrastructure evolution, exploring Kubernetes fleet management challenges and emerging solutions for simplified tool stacks. We cover the essential cloud native trends of 2024, infrastructure automation breakthroughs, notable technical innovations, projects that aspire to join the CNCF, and predictions for cloud native in 2025.

🎒What's In My Bag?!

For my Member Community on YouTube, I published a video that's a breakdown of my backpack and all the recording tech I cram in it for KubeCon, DockerCon, etc. You can join my YouTube Membership here https://www.youtube.com/@BretFisher/join

👀 In case you missed the last newsletter

Read it here.


Our KubeCon Takeaways and Best of DevOps 2024

😍
Thanks to today's sponsor, Aikido!
Aikido's platform helps developers get security done with its superpower to remove false positives so you can find true vulnerabilities. It has new AI features that auto triage and even fix issues for you. Aikido is free for small teams or anyone wanting to simply explore. Check it out today at aikido.dev

🔴 This Week's Live Show: Best of Cloud Native DevOps 2024


Click the reminder button on this page to get a reminder on Thursday.

It's our 4th annual Holiday Special: Best of DevOps & Tech with my co-host Nirmal Mehta of AWS, also a Docker Captain. Tell us what your favorite cloud native app is. Join us live to ask questions on the best and worst of 2024.


📺 KubeCon Engineering Takeaways

30 minutes of us covering what we saw at KubeCon, and the trends this year.

🎧 Podcast Release: DevOps Cert Prep with AI

Ep 174: AI Cert Prep with the KodeKloud Team

In this latest podcast, Bret is joined by Mumshad Mannambeth and Vijin Palazhi of KodeKloud for Q&A on what we should be studying and certifying for in 2025.

This episode is chock-full of information. We talked about the CNCF Kubestronaut program, how GenAI has changed the cert prep game, and what tools and techniques we should use to prepare for next year!

You've probably seen Mumshad's courses. He's been another person like myself who, for almost a decade, has been making container courses on Docker, Kubernetes, and all the tooling. Now he's running a giant learning platform, and they're introducing AI into your learning and certification prep, courses, and skills labs. And we go through all of it.

We talk about all of the Linux Foundation certifications they cover. They've launched over 100 courses now on their platform and they cover a lot, if not all of the Linux certifications, especially around Kubernetes and the Cloud Native ecosystem. I'm a huge fan of that. I think this is great stuff for everyone, especially if you're early in your career and you're using certifications as a way to prove your expertise or you're like me, you've been around forever and you want to show that you're up to date. Here's a list of some of the topics we covered.

  • Community in Career Growth
  • Kubernetes Certifications: Kubestronaut
  • The Kubernetes Learning Path
  • Who is Kubestronaut For?
  • Maintaining Kubernetes Certification
  • Changes in Certification Requirements
  • KodeKloud Course Updates
  • Exploring BlueSky for Cloud Native Community
  • AI in Certification and Teaching Assistance
  • AI Tutor and Future of Learning
  • Replacing Q&A with AI?
  • Rapid Fire Q&A
  • Pro vs AI Subscription with KodeKloud
  • Starting with K8s
  • Certifications for Software Engineers
  • Course Updates and Future Plans
  • Developer Courses Plans?
  • No Labs for Azure DevOps Course
  • MLOps Courses
Be sure to check out the video version of this episode for demos.

Short on Shorts

Check out my latest Short on YouTube.

👀 In case you missed the last newsletter

Read it here.


✌️Q&A on KubeCon trends & new CNCF projects this Thursday

😍
Thanks to today's sponsor, Aikido!
Aikido's platform helps developers get security done with its superpower to remove false positives so you can find true vulnerabilities. It has new AI features that auto triage and even fix issues for you. Aikido is free for small teams or anyone wanting to simply explore. Check it out today at aikido.dev

🔴 Thursday Live show: Q&A with you!


Click the reminder button on this page to get a reminder on Thursday.

This will likely be my last Q&A for the year, and what a year it's been! I hope you'll join me to make the show lively and to get your questions answered. Topics might include KubeCon trends, latest CNCF projects, and what I'm working on next. See you on YouTube Thursday Dec 12 at 1:00pm US ET (UTC-5).


🎥 Podcast Release of Inspektor Gadget Stream

Check out the edited version of our Thursday Live show about Kubernetes Security and Troubleshooting Multitool with Inspektor Gadget

The new eBPF-focused multitool, Inspektor Gadget, aims to solve some serious problems with managing Linux kernel-level tools via Kubernetes. Each security, troubleshooting, or observability utility is packaged in an OCI image and deployed to Kubernetes (and now Linux directly) via the Inspektor Gadget CLI and framework. It sounds great, so we've invited the maintainers on the show to see what it's all about and get some demos.

Show Links
https://inspektor-gadget.io/
https://inspektor-gadget.io/docs/latest/
https://github.com/inspektor-gadget/inspektor-gadget

Chris Kühl
https://hachyderm.io/deck/@blixtra
https://www.linkedin.com/in/christopherk1/

Jose Blanquicet
https://x.com/jose_blanquicet
https://www.linkedin.com/in/joseblanquicet/

🔴 Originally from YouTube Livestream 277
🎧 Audio Podcast version

👀 In case you missed the last newsletter

Read it here.


🥵 DevSecOps, KubeCon Takeaways, and Personal AI

🗓️ What's new this week

🔴 Live Show Thursday: Centralized DevSecOps Console with Aikido


We've got another live show this Thursday (12/5). No nonsense. We'll be talking about consolidating your DevSecOps with Aikido Security. Does one central system make sense for you? Let's see. Join me, Roeland Delrue, and Willem Delbare to get your questions answered.

Centralized DevSecOps Console with Aikido (Stream 281)
The Aikido cofounders join me to implement better security in my GitHub repos, Actions automations, and infrastructure.

😎 Vibe from the Floor of KubeCon

I'm doing quick interview videos from KubeCon... more to come.


🎧 Podcast Releases

Ep 173: KubeCon Engineering Takeaways

(video version coming soon)

Nirmal and I recorded this special offline episode at KubeCon North America in Salt Lake City. We hung out at the AWS booth to break down the major trends and developments from the conference. 

The event drew a record-breaking 10,000 attendees, with roughly half being first-timers to the Cloud Native ecosystem. 

Starting with Cloud Native Rejekts and moving through the pre-conference events, we noticed Platform Engineering emerged as the dominant theme, with its dedicated conference track drawing standing-room-only crowds.

The main conference showcased a notable surge in new vendors, particularly in AI and security sectors. We dissect the key engineering trends, ongoing challenges in Cloud Native adoption, and insights gathered from various conferences including ArgoCon, BackstageCon, and Wasm Day. In our 40-minute discussion, we tried to capture the essence of what made this year's KubeCon significant.

Ep 172: Personal AI with Ken Collins

(video version coming soon)

We released a podcast last week from the show we did October 24th with our friend Ken Collins. We talk about using AI for more than coding, and if we can build an AI assistant that knows us.

We touch on a lot of tools and platforms. We're a bit all over the place on this one, from talking about AI features in our favorite note-taking apps like Notion, to my journey of making an open AI assistant with all of my Q&A from my courses (thousands of questions and answers), to coding agents and more. We've both been trying to find value in all of these AI tools for our day-to-day work, so check it out and see what you think.

👀 In case you missed the last newsletter

Read it here.


🦋 Cloud Native is on Bluesky!


It's a weird and fragmented time for social media in tech. I found my home with the cloud native and tech community on Twitter around the start of the Docker and Kubernetes projects (2014-ish), and it became my daily feed for learning and sharing with others, many of whom I've since met at conferences and become IRL friends with.

But then, over the last few years, many of us stopped being active on Twitter/X, and some left altogether. Some went to Mastodon and the ActivityPub protocol, which I still hope for, and some went to Meta's Threads. I never felt like there were enough cloud native people active on those to replace my heyday on Twitter.

I joined Bluesky, which started as a spin-off project inside Twitter, a year ago. They are creating a new "AT Protocol" (often called ATProto). Like all new social media startups/projects, Bluesky was quiet initially, and most people I knew just had a "first post" empty profile.

But over the last few months, our Cloud Native and Kubernetes families have become quite active on Bluesky. Some are proclaiming it's the return of the cloud native crew!

Seeing the influx of tech people coming over to Bluesky reminds me that we *all* made tech twitter what it was, not the other way around.

Kelsey Hightower (@kelseyhightower.com) 2024-10-23T03:31:48.493Z

We are so back. ☸️💙

Ian Coldwater 📦💥 (@lookitup.baby) 2024-11-09T17:41:56.612Z

Today feels like a good day to delete the Twitter app from my device.

Ashley Willis-McNamara (@ashleywillis.bsky.social) 2024-11-05T15:54:32.520Z

Massive props to people like @kelseyhightower.com who give up their hard-earned large followings on Twitter to help make alternative platforms like Bluesky viable. I will always have nothing but respect for those who take a stand.

John T. Bonaccorsi (@johnbon.dev) 2024-10-23T02:47:00.413Z

Bluesky has been on a tear of organic growth this year, with huge waves of signups.

Bluesky adds 700,000 new users in a week. The majority of new users are from the US, and the app is currently the number 2 free social networking app in the US App Store www.theverge.com/2024/11/11/2...

Tom Warren (@tomwarren.co.uk) 2024-11-11T22:45:53.986Z
Bluesky is back
Twitter’s natural heir is finally open to the public — and it has some big ideas for social networking
(Platformer, by Casey Newton)

Bluesky now has 13M+ users, the @atproto.com developer ecosystem continues to grow, and we’ve shipped features like DMs and video! We’re excited to announce that we’ve raised a $15M Series A to continue growing the community, investing in Trust and Safety, and supporting the dev ecosystem.

Bluesky (@bsky.app) 2024-10-24T16:33:56.586Z

GitHub has recently added the butterfly to our profile options.


The Bluesky Key Features

  • Web app is https://bsky.app with official apps in the iOS App Store and Google Play.
  • Feels like early Twitter.
  • No feed algorithm by default; you see what you follow, in chronological order.
  • You can create or follow custom feeds (algos?) that others made! (this is slick)
  • Your handle can be a domain name you control.
  • We recently got new features like DMs and short video uploads. It'll take some time before it has everything we're used to from other platforms, but hey, no ads or hidden feed manipulation!

Bluesky is built to make social more like the Web again, so we will never suppress links. Link to your writing, your art, your personal site — this is the social internet, designed from the ground up to be open and interoperable.

Jay 🦋 (@jay.bsky.team) 2024-11-12T05:16:58.930Z

Starter Packs (quickly following your favs)

One of the hardest parts of joining a new social platform is finding all the old accounts you used to follow. Bluesky has the fantastic feature of Starter Packs, which anyone can create. A Starter Pack is a list of accounts that you can follow all at once.

Here's a directory with hundreds of thousands of Starter Packs to search through.

Try it out by using these packs I follow to seed your feed with awesome people and projects:

  • Cloud Native Projects
  • CNCF Ambassadors
  • Docker Captains
  • Cloud Native
  • DevRel Starter Pack
  • GitHub Universe Speakers 2024
  • Black Women in Tech
  • OTeliers Extraordinaires
  • Prometheus
  • Platform Engineering Starter Pack
  • BlackSky Tech Pack

Custom Feeds

You don't have to stick with the default feed of "only show me people I follow." There are thousands of feeds programmed by others. You can find them on the feeds page at https://bsky.app/feeds and add them to your home page. I switch between custom feeds daily as I want to focus on certain topics/people groups.

There's a KubeCon feed!

Custom Lists

  • Feeds are a way to see posts from people you do and don't follow, with simple or complex filtering and sorting, based on the feed's algo.
  • Starter Packs are lists of accounts you can share and follow. Ideal for bulk-following.
  • Lists are something you create, but are more like simple personal feeds of just people you follow around a topic/theme.

The AT Protocol

Since you're likely a developer-type reading this, the good news is there are many things you can do with the AT Protocol and Bluesky itself to customize what you want your social to be.

  • fishttp/awesome-bluesky: a list of all known tools available for the Bluesky platform
  • notjuliet/awesome-bluesky: a list of tools and clients available for the Bluesky platform
  • beeman/awesome-atproto: a curated list of awesome ATProto resources
  • scrub-dev/bsky-index
  • bluesky-social/atproto: social networking technology created by Bluesky

You can help us make this better (tips)

Since we're not forced into a specific algo, you gotta do the work of following starter packs and a lot of people to give your feed that fresh feeling:

I *still* see people who only follow 150 people complain that this place isn't exciting enough. FOLLOW. MORE. PEOPLE. There's no algorithm spoon feeding you content here. You have to be a little bit active to find it. Just a little bit.

Mary Gillis (@marygillis.bsky.social) 2024-11-11T14:44:49.085Z

There are lots of guides and tools out there to help you "move" to Bluesky:

Bluesky Migrate
This page serves as a simple guide on how to migrate to Bluesky from X. All steps are optional, but you should really do the first two.

I recently made a decision that I wasn't going to sit by and "wait for Bluesky to be more active than X." Carlos is right, it takes action to create a community:

We need to be intentional if we want Bluesky to be the new hub for our Cloud Native community. This week, let's make it happen: 👍Like ✍️Comment 💪Repost 🤙Post 👋Hashtags #KubeCon #KubeConNA #CNCF ✌️Include media. Don't forget to edit ALT 👊Follow Repost this! Find me this week and tell me you did

Carlos Santana (@santana.dev) 2024-11-09T14:34:05.793Z
See you in the blue skies!

👀 In case you missed the last newsletter

Read it here.


K8s UIs, MLOps, ECS Fargate, open source VSCode AI, and more - CNDO #67


I've been heads down on course updates, video launching, and KubeCon planning (which starts this week in Salt Lake City, Utah!)

Here's a bunch of releases over the last few months, enjoy!

🗓️ New YouTube uploads

I'm planning on more of these "normal YouTube videos" in 2025 in addition to our live podcast and courses.

AWS asked me to reevaluate ECS and Lambda, so this is a short video on my thoughts.

🎙️Podcast on the state of Kubernetes UIs

DevOps and Docker Talk: Cloud Native Interviews and Tooling | State of Kubernetes UIs
Bret explores the spectrum of user interfaces and tools available for managing Kubernetes clusters as of Autumn 2024. This solo episode touches on both paid and open-source options, looking at thei…

🚂 We've re-edited our best 2024 streams

These are video versions of our podcast, highly edited, and they include tool demos. We pushed out a bunch of these over the last few months.

👀 In case you missed the last newsletter

Read it here.


Back in the Studio! - CNDO #66

🗓️ What's new this week

🔴 Live show: cloud native DevOps Q&A #276


We're back Thursday! 🕺 After some much-needed time off this summer, Nirmal and I will be live again to take your questions. We'll focus on your cloud native DevOps questions: Containerization, orchestration, automation, infrastructure, and more. We've missed you. Please join us and bring your questions. Thursday Sept 5th at 1:00pm US ET (UTC-4).


🎤 Podcast Releases

Despite my time off, my team released a couple of podcasts in August (they're awesome!).

🎧 Ep 167: Debug Containers with Mintoolkit

Bret is joined by DockerSlim (now mintoolkit) founder Kyle Quest, to show off how to slim down your existing images with various options.

Slimming options include distroless base images like Chainguard Images and Nix. We also look at using the new "mint debug" feature to exec into existing images and containers on Kubernetes, Docker, Podman, and containerd. Kyle joined us for a two-hour livestream to discuss mint's evolution.

Be sure to check out the live recording of the complete show from May 30, 2024 on YouTube (Stream 268). Includes demos.

★Topics★
Mint repository in GitHub

🎧 Ep 168: Traefik 3.0: What's New?

Bret and Nirmal were joined by Emile Vauge, CTO of Traefik Labs to talk all about Traefik 3.0.

We talk about what's new in Traefik 3, 2.x to 3.0 migrations, Kubernetes Gateway API, WebAssembly (Cloud Native Wasm), HTTP3, Tailscale, OpenTelemetry, and much more!

Check out the live recording of the complete show from June 6, 2024 on YouTube (Stream 269). Includes demos.

★Topics★
Traefik Website
Traefik Labs Community Forum
Traefik's YouTube Channel
Gateway API helper CLI
ingress2gateway migration tool

👀 In case you missed the last newsletter

Read it here.


Dockerfile Frontends: The build file upgrade we need


(Originally from my cloud native DevOps newsletter #36.)

Did you know the Dockerfile keeps getting new features in something known as Dockerfile frontends? Have you seen this first line in a Dockerfile example?

# syntax=docker/dockerfile:1
FROM something

The proper way to start a modern Dockerfile, with a frontend version.

That first commented line is optional but tells the BuildKit image builder in Docker to support new features that expand on the original Dockerfile specification from a decade ago.

💡
Because BuildKit is the most feature-rich and versatile container image builder, I am finally making # syntax=docker/dockerfile:1 the first line in every Dockerfile I make and teach.

The before times

Before BuildKit and Dockerfile frontends existed, if we wanted to take advantage of some new syntax in a Dockerfile, say, when COPY --chown or multi-stage builds were added, we'd have to ensure we had an updated version of Docker Engine to build Dockerfiles with that feature. That wasn't great, especially when it was updated on our local machine and "worked on my machine," but CI was building on an older Docker Engine version that failed the build. That happened to me a lot in 2016-2020.

Since 2018, BuildKit has externalized that Dockerfile parser into a "frontend" so that neither the Docker Engine version nor the BuildKit version decides what features you can use in your Dockerfile.

The builder of 2023

In 2023, BuildKit became the default image builder in all editions of Docker, and it's on by default in Docker's official GitHub Actions image builder. This is awesome because it means that as long as we have Docker Engine v23+ installed and add that first syntax line to our Dockerfiles, we will always build images with the latest Dockerfile 1.x features, regardless of our Docker Engine or BuildKit version. Here's a short history of Docker+BuildKit, how they relate, and the various ways to use BuildKit in the Docker CLI/Engine.

But why do we even need that syntax line?

If you've never put that syntax line in your Dockerfile, you may have been using newer features (like RUN --mount for ssh, cache, and secrets) without realizing BuildKit's fallback behavior when you don't specify a syntax.

A BuildKit install (usually included when you install Docker Engine or Docker Desktop) is bundled with the latest version of the Dockerfile frontend spec available when that BuildKit version was released. It will use that bundled frontend by default when you run any docker build or docker buildx build command.

But what if your Docker Engine's BuildKit version is just a little older, and you want to use a brand new feature like ADD git@github.com/... to git clone a repo into your image without ever needing git installed? Cool, right? It was shipped in Dockerfile frontend v1.6 from June 2023.
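For instance, here's a minimal sketch of that feature (the repo URL, tag, and destination path below are placeholders, not a real project):

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine:3.19

# v1.6+: ADD can fetch a git repo directly into the image; no git binary
# is needed in the image. The URL and #v1.0 ref are hypothetical.
ADD https://github.com/example/project.git#v1.0 /src
```

Build it with a plain docker build as usual; the syntax line makes BuildKit pull a frontend that understands git URLs before parsing the rest.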

Well, using that feature caused a build failure for me today because I didn't put the # syntax=docker/dockerfile:1 line in my Dockerfile. Rather than downloading the latest Dockerfile frontend 1.x version from the docker/dockerfile repo on Docker Hub, BuildKit just used its backup plan: defaulting to the frontend that shipped with it. In my case, BuildKit v0.11.6 bundled the older frontend v1.5, so I got a weird error because that frontend parser didn't know how to ADD git URLs yet.

Remember to always...

# syntax=docker/dockerfile:1
FROM something
💡
As Docker recommends, I'm adding that syntax line to all my Dockerfiles, just in case they take advantage of a Dockerfile frontend feature. It future-proofs my Dockerfiles and docker builds, and AFAIK doesn't have any negatives or side effects, assuming I'm always building with BuildKit...

But what about non-BuildKit builders?

Did you know there is no OCI standard for how an image is built, or the build file itself?

OCI standards are only concerned with the resulting image format, the registry API that stores it, and the runtime that creates a container. There is a lot of confusion on the web around a "Dockerfile OCI standard" that doesn't exist. Some image builders outside of the official docker build claim to be "Dockerfile compatible," but what they really mean is the pre-BuildKit Dockerfile 1.0, circa 2018. They may support a few modern features, but it's spotty and far from 100% compatible. If you were trying to make an advanced Dockerfile with all the latest Dockerfile frontend features and yet keep it "working with any image builder" you'd be disappointed.

I'm not saying another build tool isn't worth looking at, but you'll want to eventually standardize on one builder ecosystem and stick with their pros and cons.

My favorite Dockerfile frontend features

When Docker announced Dockerfile frontends back in 2018, I was unsure of their future, whether they would catch on, and whether I would use their features. Six years later, the BuildKit and Dockerfile maintainers are still adding valuable features to the Dockerfile syntax that I recommend and use. Here are a few:

  • v1.8 added a linter built into BuildKit, which runs on each build. A significant advantage over external linters is that this reports by default on each build and can also block builds if BuildKit thinks the Dockerfile is invalid or if you enable it to block on lint failures (which you can override).
  • v1.7 added string substitution in variable expansion. This should reduce complexity when you need to replace strings dynamically in Dockerfile statements (e.g. replacing arm64 with aarch64 in download URLs).
  • v1.6 added the ability to use ADD for git URLs to avoid needing git in the image.
  • v1.4 added COPY --link and ADD --link for injecting/rebasing image builds without breaking the cache of downstream Dockerfile commands. It's a niche feature I've only used a few times, but once you understand its advantages, it may significantly speed up some builds where you often need to inject a new external dependency early in the Dockerfile that won't affect the remaining steps in the build. I would describe this feature as "side-load some files into a new image without rebuilding the whole thing."
  • v1.4 added Heredocs support for long RUN chained commands to save keystrokes and make them more readable. I've not used this as much as I should.
  • v1.2 added the popular RUN --mount for injecting secrets, ssh, and caches into the building environment.
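To make a few of those concrete, here's a sketch of a Dockerfile combining the heredoc and RUN --mount features, assuming a hypothetical Node.js project with a "build" npm script (the app and paths are illustrative, not from a real repo):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app

COPY package*.json ./

# v1.2: mount a persistent cache for npm downloads; the cache is reused
# between builds but never baked into an image layer
RUN --mount=type=cache,target=/root/.npm \
    npm ci

COPY . .

# v1.4: heredoc syntax lets a multi-line step read like a script
# instead of one long && chain
RUN <<EOF
echo "building..."
npm run build
EOF
```

Without the syntax line on top, an older bundled frontend might reject these statements, which is exactly the failure mode described above.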

Birthday Break - CNDO #65

🗓️ What's new

Birthday Break - CNDO #65

I took a break after six years of weekly live streaming this month, and we're preparing for a return in September. I also took a break from this newsletter and had a wonderful 2-week vacation with family, something I haven't done in over a decade!

🎧 Podcast

Ep 166: Observability cost-savings and eBPF goodness with Groundcover

In this week's podcast, Bret is joined by Shahar Azulay, Groundcover CEO and Co-Founder, to discuss their new approach to fully observe K8s and its workloads with a "hybrid observability architecture."

Groundcover is a new, cloud-native, eBPF-based platform that designed a new model for how observability solutions are architected and priced. It's a product that can drastically reduce your monitoring, logging, and tracing costs and complexity; it stores all its data in your clusters and only needs one agent per host for full observability and APM.

We dig into the deployment, architecture, and how it all works under the hood.

★Topics★
Join the Groundcover Slack
Groundcover Discord Channel
Groundcover Repository in GitHub
Groundcover YouTube Channel

Be sure to check out the live recording of the complete show and all the demos from June 27, 2024 on YouTube (Stream 272).


💰 Sale on All Courses

I celebrated my birthday earlier this month and I thought I'd pass on some birthday celebration to you. I've put all my courses on sale at the lowest price Udemy allows. The sale lasts through Tuesday 20 Aug.

Use coupon code BIRTHDAY24 or click on the coupon links below. Sign up for another course and pass along the savings to friends and colleagues.

Kubernetes Mastery $9.99 USD

Docker Mastery $10.99 USD

Docker Mastery for Node.js $9.99 USD

Docker Swarm Mastery $9.99 USD

LLM vs RAG vs Tokens: Ollama, the local LLM manager for GenAI

Great content! Check out the full edited video podcast on YouTube.

👀 In case you missed the last newsletter

Read it here.


🎧 Podcast #166: Observability Cost-Savings and eBPF Goodness with Groundcover


In my latest podcast, I'm joined by Shahar Azulay, Groundcover CEO and Co-Founder, to discuss their new approach to fully observe K8s and its workloads with a "hybrid observability architecture."

Groundcover is a new, cloud-native, eBPF-based platform that designed a new model for how observability solutions are architected and priced. It's a product that can drastically reduce your monitoring, logging, and tracing costs and complexity; it stores all its data in your clusters and only needs one agent per host for full observability and APM.

We dig into the deployment, architecture, and how it all works under the hood.

★Topics★
Join the Groundcover Slack
Groundcover Discord Channel
Groundcover Repository in GitHub
Groundcover YouTube Channel

You can get the show notes on the episode page.

Be sure to check out the live recording of the complete show and all the demos from June 27, 2024 on YouTube (Stream 272).

🎧 Podcast #165: Flow State with VS Code AI


Bret and Nirmal are joined by Continue.dev co-founder, Nate Sesti, to walk through an open source replacement for GitHub Copilot.

You've probably heard about GitHub Copilot and other AI code assistants. The Continue team has created a completely open source alternative, or maybe a superset, of these existing tools. Along with being open source, it's also very configurable and lets you choose multiple models to help you with code completion and chatbots in VS Code and JetBrains, with more editors coming soon.

So this show builds on our recent Ollama show. Continue uses Ollama in the background to run a local LLM for you, if that's what you want Continue to do, rather than using internet LLM models.

You can get the show notes on the episode page.

Be sure to check out the live recording of the complete show from May 16, 2024 on YouTube (Ep. 266). Includes demos.

🎧 Podcast #164: AWS Graviton - The Great Arm Migration


Bret and Nirmal are joined by Michael Fischer of AWS to discuss why we should use Graviton, their arm64 compute with AWS-designed CPUs.

Graviton is AWS' term for their custom ARM-based EC2 instances. We now have all major clouds offering an ARM-based option for their server instances, but AWS was first, way back in 2018. Fast forward 6 years and AWS is releasing their 4th generation Graviton instances, and they deliver all the CPU, networking, memory and storage performance that you'd expect from their x86 instances and beyond.

I'm a big fan of ARM-based servers and the price points that AWS gives us. They have been my default EC2 instance type for years now, and I recommend it for all projects I'm working on with companies.

We get into the history of Graviton, how easy it is to build and deploy containers and Kubernetes clusters that have Graviton and even two different platform types in the same cluster. We also cover how to build multi-platform images using Docker BuildKit.

Be sure to check out the live recording of the complete show from May 9, 2024 on YouTube (Stream 265). Includes demos.
