At KubeCon London, I took a workshop on using Portainer to deploy and manage Talos Linux and Kubernetes together. Portainer now manages Talos Linux, so I spent some time with the two tools as a one-stop shop: deploying the unique Talos OS (from Sidero Labs) and setting up a Kubernetes cluster with just a few clicks inside Portainer.
Cloud Native DevOps education. Bestselling courses, live streams, and podcasts on DevOps, platform engineering, and containers, from a Docker Captain and Cloud Native Ambassador.
We cover the continued complexity of platform engineering, DevOps for AI vs AI DevOps, Kubernetes cost optimization, HAProxy, AI Gateways, and end of life for CNCF Projects.
🔴 Heroku AI. Thursday, May 22nd, 1pm US Eastern (UTC -4)
Heroku just launched their AI inference hosting and MCP tooling support. Oh, and they now run on Kubernetes. We're going to demo all of it. (Click the video below and hit "🔔 notify me" so you won't miss it.)
Note that most/all of these streams should make it into edited podcast episodes if you miss the stream.
I've been spending an increasing amount of time using Docker's various AI features lately. Here's the easy guide to each and how to get started:
Note: these tools are changing fast, and I expect them to get better over time. This is written in May 2025, and if I come back to update it, I'll indicate changes.
Three AI features launched in early 2025
Ask Gordon: Docker-Desktop-specific basic AI chat for helping with Docker tools and commands.
Docker Model Runner: Runs a set of popular, free LLMs from Docker's AI model catalog for use with your local (and eventually server) apps.
Docker MCP Toolkit: A Docker Desktop extension + MCP Server tool catalog that lets you run MCP-enabled tools for giving LLMs new abilities.
AI Feature #1: Ask Gordon AI chatbot
TL;DR: Gordon is a free AI to help you with Docker tasks, but if you're paying for other AI chat or IDE tools, you'll likely skip Gordon AI for what's built into those.
At its core, this is a Docker-focused ChatGPT, built into the Docker Desktop GUI and CLI. It gives results on par with OpenAI's latest models, and also adds documentation links to its answers. Think of it as a more Docker-focused, up-to-date chatbot than what you'd likely get with the big foundation models. It continues to see improvement and has several advantages over general AI chatbots. The question is, will you use it in addition to other LLMs you're already using?
Pros:
It can read/write files on disk and execute Docker commands, if you let it.
It can run from the Docker GUI or the docker ai CLI.
It now has MCP Toolkit access, so it can become aware of your local Docker and Kubernetes resources, and even control any MCP tools you add as Docker Desktop extensions. I plan to create a video on how this works. Stay tuned.
Cons:
Doesn't store history of chats or allow you to have more than one chat thread.
Slow. I believe it's using an OpenAI model on the backend, but it feels slower than the default ChatGPT models.
Doesn't answer questions outside Docker or dev stuff.
In short, it's a great free AI to help you with Docker tasks, but if you're paying for ChatGPT/Claude/Windsurf/Cursor, you'll likely skip Gordon AI for what's built into your dev tools. The MCP feature to see container resources and add abilities on-the-fly to 3rd party tools is great, if you're not using MCP in your IDE already.
AI Feature #2: Docker Model Runner (DMR)
A new docker model command that lets you pull popular LLMs and run them locally, for access from your local containers or the host OS.
Below is the 3-minute version explaining how it works:
Below is a longer version explaining the architecture, sub-commands, and how to run a ChatGPT clone locally with my example at https://bret.show/dmr-quickstart
AI Feature #3: Docker MCP Toolkit
This is the most recent release (I don't have videos yet) and could end up being the biggest deal. It's also the most confusing if you're not fully versed in how AI has evolved in the last 7 months. A few things you need to know before digging into this feature:
Agentic AI
In 2024, the term "Agentic AI" became the way we describe an LLM not just writing text, but actually performing work on our behalf. Running commands in the shell was the first "tool" many of us saw an LLM use, and suddenly, just months later, nearly every code editor and chatbot has access to hundreds of "tools."
Model Context Protocol
Second, MCP (Model Context Protocol) was released by Anthropic as their idea for how models (LLMs) could access tools and data. It allows a model to understand the functions a tool can perform and, most importantly, execute those functions (or retrieve the data in a human-readable way). I hesitate to call MCP a standard yet, but its popularity took off so fast as the only universal way to get models and tools working together that it's the de facto standard just 6 months later. Tools everywhere, including AWS's APIs, GitHub's APIs, Kubernetes, IDEs, and many CLIs, already have MCP servers. Think of MCP as a proxy between the LLM and the tool/API you want it to control. It lets the LLM interact with any external tool/API in a common way.
In practice, agentic AI plus the invention of MCP means we can all of a sudden ask a chatbot to "use curl 20 times to check the response time of docker.com, then average that response time, compare it to the storage averages from my SQL table, and store the difference in a Notion page." The LLM, assuming it has access to MCP servers for curl, a SQL db, and Notion's API, will be able to perform that work in a series of steps based on a single prompt we typed up.
Just a few months ago, we would have had to write a program to do that for us, or at best spend hours in Zapier getting the right workflow to fire. That it can all be replaced by a 10-second prompt written on the fly is hard to believe.
But MCP is here, and we're about to be hit by a flood of real-world use cases (and products to buy) that use LLMs + MCP to solve a near-unimaginable amount of workflow scenarios.
And I'm SOOOO here for it. Expect me to share A LOT about this huge shift in capability for devs, CI, CD, operations, SRE, platform building, and more.
Gordon AI + MCP Toolkit
How to get started using an LLM with MCP tool calling
To get all these AI things working together, you need something with access to an LLM. Gordon is an "MCP Client" that lets you chat with a model in the Docker Desktop Dashboard, so we'll use it. Then we need to give it access to the MCP tools so it can do more than just answer questions:
Add the MCP Toolkit Extension to the Dashboard. This lets us run MCP-enabled tools as containers behind an MCP Server proxy that Docker Desktop runs for us.
Open the extension and enable some of your favorite tools. Notice how you can click the little blue box to see a list of tools (capabilities) that the MCP Server supports. These are the actions you can tell an LLM to take. I'll add the GitHub one after giving it a PAT to access the GitHub API.
Now we need to connect those MCP Servers to an "MCP Client," which is something with an LLM, like Cursor or Claude Desktop; the easiest option is to enable Gordon AI. Gordon is now an MCP Client and comes out of the box knowing how to control Docker, but it can hold other tools in its toolbox too. Click the little tool hammer in the Gordon chat box and turn on "MCP Catalog", which gives it all the tools you've enabled in the MCP Toolkit extension.
Now you can ask Gordon AI to do something with GitHub.
List all the open PRs for my repo bretfisher/docker-mastery-for-nodejs
please merge PR #356 in repo bretfisher/docker-mastery-for-nodejs
This reminds me of the "ChatOps" hype a decade ago, only now we have way more flexibility and reach. I can't wait to spend more time reducing toil with MCP tools.
It's been MONTHS since the last email. Oops! Several drafts never made it out, and now I've got a backlog, so you'll be seeing more of me in your inbox. Hopefully, that's a good thing.
My April of Travel Selfies
I spent most of April traveling to London for KubeCon (recap on that coming soon), then a "business retreat" with my fellow small-business owners (always enlightening and encouraging), and finally back home for the summer!
🚀 Other Big Things
I've got things in the works that aren't announced yet, such as the idea for a new AI DevOps podcast, new self-hosted courses, and lots more YouTube tutorial-style videos. I'll be breaking all that down in upcoming newsletters.
The Docker Bake build tool is now generally available, and I'm excited about what this means for creating reproducible builds and automation that can run anywhere. I break down some of the features and benefits and walk through some examples.
This episode is about what I'm seeing and doing right now, and for the rest of the year. There are three parts. First, I talk about what's about to happen for me in the next few weeks, including going to London for KubeCon (since this newsletter is releasing the week of KubeCon, feel free to fast-forward through this). Then, what I'm planning to change in this podcast and my other content on YouTube for the rest of the year. Lastly, I talk about some industry trends I'm seeing that I think will force me to change the format of this show. I recorded the episode on March 22, 2025.
I've been a big fan of Swarm since it launched over a decade ago, and I've made multiple courses on it that people find useful. We recently got some news out of Mirantis that might be bad, so I talked about it on my live stream.
NOTE: Since this video was published, I've had multiple people from Docker and Mirantis reach out telling me that Swarm still has a future, so I'm getting more details and hope to make an update to this prediction above. Stay tuned!
👋 New Docker Mastery Videos on ENTRYPOINT & SHELL 🤩
We launched a new Section in Docker Mastery - my first major update in 2025.
If you’d like to use my coupons to buy Docker Mastery and any of my courses, checkout bret.courses
In this new Section, I dig deep into how ENTRYPOINT works in Dockerfiles, and how it works with both CMD and SHELL statements for making custom CLI tools and advanced container startup scripts. You’ll also learn how to change shells in Docker builds with SHELL, and how Shell and Exec forms work in everything.
This section comes with a quiz, a custom cheat sheet on “Buildtime vs. Runtime” and “Overwrite vs. Additive,” tons of resource links, and two assignments. I’ve also improved the production quality, with more visuals, diagrams, and highlights to help the new knowledge stick!
Here’s an example of one of the cheat sheets you can download in the course resources.
Describing the different types of Dockerfile statements. Buildtime, Runtime, Overwrite, and Additive.
This week is my course-focused ask-me-anything week. We'll focus on your cloud native DevOps and course questions: Containerization, orchestration, automation, infrastructure, and more.
In this podcast, the Aikido Security co-founders discuss implementing better security in my GitHub repos, container images, and infrastructure.
Willem Delbare and Roeland Delrue discuss Aikido's security tool consolidation platform designed specifically for smaller teams and solo DevOps practitioners. We explore how Aikido addresses the growing challenges of software supply chain security by bringing together various security tools - from CVE scanning to cloud API analysis - under a single, manageable portal. Unlike enterprise-focused solutions, Aikido targets the needs of smaller teams and individual DevOps engineers who often juggle multiple responsibilities. During the episode, they demonstrate Aikido's capabilities using my sample GitHub organization, and show how teams can implement comprehensive security measures without managing multiple separate tools.
In this latest podcast, Nirmal and I reunite for our traditional annual Holiday Special episode of breaking down the most significant developments in cloud native from 2024 and share predictions for 2025.
We touch on infrastructure evolution, exploring Kubernetes fleet management challenges and emerging solutions for simplified tool stacks, we cover essential cloud native trends of 2024, infrastructure automation breakthroughs, notable technical innovations, projects that aspire to be part of CNCF, as well as predictions for cloud native in 2025.
🎒What's In My Bag?!
For my Member Community on YouTube, I published a video that's a breakdown of my backpack and all the recording tech I cram in it for KubeCon, DockerCon, etc. You can join my YouTube Membership here https://www.youtube.com/@BretFisher/join
Thanks to today's sponsor, Aikido! Aikido's platform helps developers get security done, with its superpower of removing false positives so you can find true vulnerabilities. It has new AI features that auto-triage and even fix issues for you. Aikido is free for small teams or anyone wanting to simply explore. Check it out today at aikido.dev
🔴 This Week Live Show: Best of Cloud Native DevOps 2024
It's our 4th annual Holiday Special: Best of DevOps & Tech with my co-host Nirmal Mehta of AWS, also a Docker Captain. Tell us what your favorite cloud native app is. Join us live to ask questions on the best and worst of 2024.
In this latest podcast, Bret is joined by Mumshad Mannambeth and Vijin Palazhi of KodeKloud for Q&A on what we should be studying and certifying for in 2025.
This episode is chock-full of information. We talk about the CNCF Kubestronaut program, how GenAI has changed the cert-prep game, and which tools and techniques we should use to prepare for next year!
You've probably seen Mumshad's courses. Like me, he's spent almost a decade making container courses on Docker, Kubernetes, and all the tooling. Now he runs a giant learning platform, and they're introducing AI into certification prep, courses, and skills labs. We go through all of it.
We talk about all of the Linux Foundation certifications they cover. They've launched over 100 courses on their platform, covering most, if not all, of the Linux Foundation certifications, especially around Kubernetes and the cloud native ecosystem. I'm a huge fan of that. I think this is great stuff for everyone, especially if you're early in your career and using certifications to prove your expertise, or if, like me, you've been around forever and want to show you're up to date. Here's a list of some of the topics we covered.
This will likely be my last Q&A for the year, and what a year it's been! I hope you'll join me to make the show lively and to get your questions answered. Topics might include KubeCon trends, latest CNCF projects, and what I'm working on next. See you on YouTube Thursday Dec 12 at 1:00pm US ET (UTC-5).
The new eBPF-focused multitool, Inspektor Gadget, aims to solve some serious problems with managing Linux kernel-level tools via Kubernetes. Each security, troubleshooting, or observability utility is packaged in an OCI image and deployed to Kubernetes (and now Linux directly) via the Inspektor Gadget CLI and framework. It sounds great, so we've invited the maintainers on the show to see what it's all about and get some demos.
We've got another live show this Thursday (12/5). No nonsense. We'll be talking about consolidating your DevSecOps with Aikido Security. Does one central system make sense for you? Let's see. Join me, Roeland Delrue, and Willem Delbare to get your questions answered.
I'm doing quick interview videos from KubeCon... more to come.
Nirmal and I recorded this special offline episode at KubeCon North America in Salt Lake City. We hung out at the AWS booth to break down the major trends and developments from the conference.
The event drew a record-breaking 10,000 attendees, with roughly half being first-timers to the Cloud Native ecosystem.
Starting with Cloud Native Rejekts and moving through the pre-conference events, we noticed Platform Engineering emerged as the dominant theme, with its dedicated conference track drawing standing-room-only crowds.
The main conference showcased a notable surge in new vendors, particularly in AI and security sectors. We dissect the key engineering trends, ongoing challenges in Cloud Native adoption, and insights gathered from various conferences including ArgoCon, BackstageCon, and Wasm Day. In our 40-minute discussion, we tried to capture the essence of what made this year's KubeCon significant.
We released a podcast last week from the show we did October 24th with our friend Ken Collins. We talk about using AI for more than coding, and if we can build an AI assistant that knows us.
We touch on a lot of tools and platforms, and we're a bit all over the place on this one: from AI features in our favorite note-taking apps like Notion, to my journey of building an AI assistant from all the Q&A in my courses (thousands of questions and answers), to coding agents and more. We've both been trying to find value in all of these AI tools for our day-to-day work, so check it out and see what you think.
It's a weird and fragmented time for social media in tech. I found my home with the cloud native and tech community on Twitter around the start of the Docker and Kubernetes projects (2014-ish), and it became my daily feed for learning and sharing with others, many of whom I've since met at conferences and become IRL friends with.
But then, over the last few years, many of us stopped being active on Twitter/X, and some left altogether. Some went to Mastodon and the ActivityPub protocol, which I still have hopes for, and some went to Meta's Threads. I never felt like there were enough cloud native people active on those to replace my heyday on Twitter.
I joined Bluesky a year ago. It started as a spin-off project inside Twitter, and they're creating a new "AT Protocol" (often called ATProto). Like all new social media startups/projects, Bluesky was quiet initially, and most people I knew just had an empty "first post" profile.
But over the last few months, our Cloud Native and Kubernetes families have become quite active on Bluesky. Some are proclaiming it's the return of the cloud native crew!
Seeing the influx of tech people coming over to Bluesky reminds me that we *all* made tech twitter what it was, not the other way around.
Massive props to people like @kelseyhightower.com who give up their hard-earned large followings on Twitter to help make alternative platforms like Bluesky viable. I will always have nothing but respect for those who take a stand.
Bluesky has been on a tear of organic growth this year, with huge waves of signups.
Bluesky adds 700,000 new users in a week. The majority of new users are from the US, and the app is currently the number 2 free social networking app in the US App Store www.theverge.com/2024/11/11/2...
Bluesky now has 13M+ users, the @atproto.com developer ecosystem continues to grow, and we’ve shipped features like DMs and video!
We’re excited to announce that we’ve raised a $15M Series A to continue growing the community, investing in Trust and Safety, and supporting the dev ecosystem.
GitHub has recently added the butterfly to our profile options.
The Bluesky Key Features
Web app is https://bsky.app with official apps in the iOS and Google stores.
Feels like early Twitter.
No feed algorithm by default; you see what you follow, in chronological order.
You can create or follow custom feeds (algos?) that others made! (this is slick)
Your handle can be a domain name you control.
We recently got new features like DMs and short video uploads. It'll take some time before it has all that we're used to from other platforms, but hey, no ads or hidden feed manipulation!
Bluesky is built to make social more like the Web again, so we will never suppress links. Link to your writing, your art, your personal site — this is the social internet, designed from the ground up to be open and interoperable.
One of the hardest parts of joining a new social platform is finding all the old accounts you used to follow. Bluesky has the fantastic feature of Starter Packs, which anyone can create. It's a list of accounts that someone can join all at once.
Here's a directory with hundreds of thousands of Starter Packs to search through.
Try it out by using these I follow to seed your feed with awesome people and projects:
You don't have to stick with the default feed of "only show me people I follow." There are thousands of feeds programmed by others. You can find them on the feeds page at https://bsky.app/feeds and add them to your home page. I switch between custom feeds daily as I want to focus on certain topics/people groups.
Feeds are a way to see posts of people you do and don't follow, with simple or complex filtering and sorting, based on the feeds algo.
Starter Packs are lists of accounts you can share and follow. Ideal for bulk-following.
Lists are something you create, but are more like simple personal feeds of just people you follow around a topic/theme.
The AT Protocol
Since you're likely a developer-type reading this, the good news is there are many things you can do with the AT Protocol and Bluesky itself to customize what you want your social to be.
Since we're not forced into a specific algo, you've got to do the work of following starter packs and lots of people to give your feed that fresh feeling:
I *still* see people who only follow 150 people complain that this place isn't exciting enough.
FOLLOW.
MORE.
PEOPLE.
There's no algorithm spoon feeding you content here. You have to be a little bit active to find it. Just a little bit.
I recently decided I wasn't going to sit by and "wait for Bluesky to be more active than X." Carlos is right: it takes action to create a community:
We need to be intentional if we want Bluesky to be the new hub for our Cloud Native community.
This week, let's make it happen:
👍Like
✍️Comment
💪Repost
🤙Post
👋Hashtags #KubeCon #KubeConNA #CNCF
✌️Include media. Don't forget to add ALT text
👊Follow
Repost this! Find me this week and tell me you did
We're back Thursday! 🕺 After some much-needed time off this summer, Nirmal and I will be live again to take your questions. We'll focus on your cloud native DevOps questions: Containerization, orchestration, automation, infrastructure, and more. We've missed you. Please join us and bring your questions. Thursday Sept 5th at 1:00 US ET (UTC-5)
🎤 Podcast Releases
Despite my time off, my team released a couple of podcasts in August (they're awesome!).
Bret is joined by DockerSlim (now mintoolkit) founder Kyle Quest, to show off how to slim down your existing images with various options.
The slimming down includes distroless images like Chainguard Images and Nix. We also look at using the new "mint debug" feature to exec into existing images and containers on Kubernetes, Docker, Podman, and containerd. Kyle joined us for a two-hour livestream to discuss mint’s evolution.
Be sure to check out the live recording of the complete show from May 30, 2024 on YouTube (Stream 268). Includes demos.
Bret and Nirmal were joined by Emile Vauge, CTO of Traefik Labs, to talk all about Traefik 3.0.
We talk about what's new in Traefik 3, 2.x to 3.0 migrations, Kubernetes Gateway API, WebAssembly (Cloud Native Wasm), HTTP3, Tailscale, OpenTelemetry, and much more!
Check out the live recording of the complete show from June 6, 2024 on YouTube (Stream 269). Includes demos.
I took a break after six years of weekly live streaming this month, and we're preparing for a return in September. I also took a break from this newsletter and had a wonderful 2-week vacation with family, something I haven't done in over a decade!
In this week's podcast, Bret is joined by Shahar Azulay, Groundcover CEO and Co-Founder, to discuss their new approach to fully observe K8s and its workloads with a "hybrid observability architecture."
Groundcover is a new, cloud-native, eBPF-based platform that designed a new model for how observability solutions are architected and priced. It's a product that can drastically reduce your monitoring, logging, and tracing costs and complexity: it stores all its data in your clusters and needs only one agent per host for full observability and APM.
We dig into the deployment, architecture, and how it all works under the hood.
Be sure to check out the live recording of the complete show and all the demos from June 27, 2024 on YouTube (Stream 272).
💰 Sale on All Courses
I celebrated my birthday earlier this month and thought I'd pass some of the celebration on to you. I've put all my courses on sale at the lowest price Udemy allows. The sale lasts through Tuesday, 20 Aug.
Use coupon code BIRTHDAY24 or click on the coupon links below. Sign up for another course and pass along the savings to friends and colleagues.
🔴 Live show: Kubernetes observability startup created a cheaper architecture for deploying it
What if you could drastically reduce your monitoring, logging, and tracing costs and complexity by using a SaaS product that stores all its data in your clusters and only needs one agent per host for full observability and APM? Groundcover is a new, cloud-native, eBPF-based platform that designed a new model for how observability solutions are architected and priced.
Groundcover CEO and Co-Founder Shahar Azulay joins me to discuss their new approach to fully observe K8s and its workloads with a “hybrid observability architecture” that can put most, if not all, of the solution in your cloud and clusters while still remaining an easy-to-manage SaaS at its heart. We’ll dig into the deployment, architecture, and how it all works under the hood.
Click the dinner bell 🔔 to get your reminder. You can also add it to your calendar here.
In this latest podcast, Nirmal and I are joined by friend of the show, Matt Williams, to learn how to run your own local ChatGPT clone and GitHub Copilot clone with Ollama.
We've designed this conversation for tech people like me, who are no strangers to using LLMs in web products like ChatGPT but are curious about running open source generative AI models locally, and how they might set up their Docker environment to develop things on top of these open source LLMs.
Matt walks us through all the parts of this solution, and with detailed explanations, shows us how Ollama can make it easier on Mac, Windows, and Linux to set up LLM stacks.
What does it take to operate machine learning workloads as a DevOps engineer? Maria Vechtomova, an MLOps Tech Lead, joins us to discuss the obvious and not-so-obvious differences between an MLOps engineer role and traditional DevOps jobs. She's also the co-founder of Marvelous MLOps.
Click the dinner bell in YouTube 🔔 to get your reminder. You can also add it to your calendar here.
👋 Monthly High Fivers Chat (membership benefit)
Our High Fiver Chat is tomorrow (19th) at 12:00 PM US EDT (UTC-4). We'll use the High Fivers Discord voice channel. Our monthly High Fiver chat is a group call with me once a month to talk about whatever's on your technical mind, get feedback on your tech stack, and learn about what others are working on. You can join High Fivers on YouTube or Patreon.
We released another great podcast last Friday (6/14) where Nirmal and I talk with our friend of the show, Matt Williams, to learn how to run your own local ChatGPT clone and GitHub Copilot clone with Ollama and Docker's "GenAI Stack," to build apps on top of open source LLMs.
We designed this conversation for tech people like me, who are no strangers to using LLMs in web products like ChatGPT but are curious about running open source generative AI models locally, and how they might set up their Docker environment to develop things on top of these open source LLMs.
Matt walks us through all the parts of this solution, and with detailed explanations, shows us how Ollama can make it easier on Mac, Windows, and Linux to set up LLM stacks.
You can also check out the live recording of the complete show from April 18, 2024 on YouTube (Ep. 262).
🐦 Tweet of the week
I missed the Docker Captain Summit due to a bad cold, but they made sure I wasn't forgotten. Watch this short!
I'm on a tear lately with great show guests, so don't miss out (see below.)
Next week, I'm headed to Lisbon, Portugal, for a Docker Captain Summit. I hope to spend some time discussing and testing how GenAI can make Docker and container development easier, so stay tuned!
I've also had access to GitHub Copilot Workspace for weeks, and it looks amazing. I'll likely talk about my experiences with it in a future YouTube Live Q&A.
🔴 Live show: Traefik 3.0 upgrade and new features walkthrough with CTO Emile Vauge (Ep 269)
Traefik 3.0 is here, and we’re digging into how to upgrade and big new features with founder and CTO Emile Vauge. We’ll talk 2.x to 3.0 migrations, Kubernetes Gateway API, WebAssembly (Cloud Native Wasm), HTTP3, Tailscale, OpenTelemetry, and more!
Bret is joined by Jasper Paul and Vinoth Kanagaraj, observability experts and Site24x7 Product Managers, to discuss achieving end-to-end visibility for applications on Kubernetes infrastructure. We answer questions on all things monitoring, OpenTelemetry, and KPIs for DevOps and SREs.
We talk about the industry's evolution from monitoring to full observability platforms, as well as adjacent topics for helping you with your own Kubernetes and application monitoring, including going through some of the most useful metrics in Kubernetes and AI's role in metric analysis and alerting humans.
Be sure to check out the live recording of the complete show from April 25, 2024 on YouTube (Ep. 263). Includes demos.
🐦 Tweet of the week
Gruvbox has been one of my default shell + editor themes for years. Its flexibility and options sold me on it, and it was also one of the first, back in 2017, to support true color, iTerm profiles, light/dark, bright/dim, etc.
May 23rd I did an ask-me-anything show with Cristi Cotovan, a developer who is learning DevOps. We were talking about his progress and went down a rabbit hole about how frustrating and overwhelming it can be. He's not alone. Soooo many options and choices to make!
If you struggle with all the options out there and even those presented on my show, then this should be a great show for you.
We released podcast #161 where Nirmal and I talk with Neil Cresswell and Steven Kang from Portainer to look at k2d, a new project that enables us to leverage Kubernetes tooling to manage Docker containers on tiny devices at the far edge.
K2d stands for Kubernetes to Docker, which is a bit of a crazy idea: it's a partial Kubernetes API running on top of Docker Engine without needing a full Kubernetes control plane. If you work with very small devices (older Raspberry Pis, 32-bit machines, industrial sensors, and the infrastructure we now call "edge"), it's often hard to make your container hosting simple, reliable, and automated all at the same time.
So this project uses fewer resources than a single-node K3s and still lets you use Kubernetes tooling to deploy and manage your containers, which are in fact just running on a Docker Engine with no full-fledged Kubernetes distribution underneath.
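To make the idea concrete, here's a hedged sketch of what running k2d looks like; the image name, env var, and port follow the project's docs as I remember them, so treat this as an assumption and check the k2d README for the real invocation:

```shell
# Start k2d with the Docker socket mounted, so it can translate
# Kubernetes API calls into Docker Engine operations
docker run -d --name k2d \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e K2D_ADVERTISE_ADDR=192.168.1.50 \
  -p 6443:6443 \
  portainer/k2d

# Then point kubectl (or Helm, Portainer, etc.) at the partial
# Kubernetes API it exposes, using the kubeconfig k2d generates
kubectl --kubeconfig ./k2d-kubeconfig get pods
```

The point is that standard Kubernetes clients never know they're talking to plain Docker Engine underneath.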
We get into far more detail on the architecture, the Portainer team's motivations for this new open source project, and its limitations; it's not real Kubernetes, so it can't do everything.
Be sure to check out the live recording of the complete show from March 28, 2024 on YouTube (Ep. 260). Includes demos.
Watch me and my guest Kyle Quest, DockerSlim Founder, shrink container images and debug Docker and Kubernetes containers using DockerSlim (now MinToolKit).
We'll show off how to slim down your existing images with various options (including distroless images like Chainguard Images and Nix), and use the new "mint debug" feature to exec into existing images and containers on Kubernetes, Docker, Podman, and containerd.
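If you want to play with it before the show, here's a quick sketch using the renamed mint CLI; the subcommand names follow the DockerSlim-to-mint rename, and the image and container names are placeholders, so double-check against the MinToolKit docs:

```shell
# Build a slimmed-down version of an existing image
# (the classic docker-slim "build" workflow under its new name)
mint slim --target my-app:latest

# Open a debug session against a running container, handy for
# distroless or scratch-based images that ship no shell at all
mint debug my-running-container
```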
🗓️ What's new this week
The LLM tooling onslaught continues! Continue is on the show this week and I just got access to GitHub Copilot Workspace technical preview and will report back on it soon.
There's also Devin, GPTScript (which we're working to get on the show), Kapa.ai (which I'm playing with for my Udemy courses), Phind, and many more... so my big question is:
Which of these tools will we be using for DevOps-specific roles in two years?
We're all working with a near-unlimited amount of tooling, but even if you want to be an AI tooling leader in DevOps, who's got time to thoroughly test all these tools, and which ones will still be here in 2 years? If I've missed a DevOps-specific AI tool, or you have opinions, hit me up on X or Discord.
🔴 Thursday's show: Free LLM for VS Code and JetBrains, replace ChatGPT and Copilot: Continue.dev
We have continue.dev co-founder Nate Sesti on the show to walk through an open-source replacement for GitHub Copilot. Continue lets you use a set of open source and closed source LLMs in JetBrains and VS Code IDEs for adding AI to your coding workflow without leaving the editor. Join Nirmal and me to ask Nate questions.
In my latest podcast, Nirmal and I are joined by Dan Lorenc from Chainguard, who walks us through Chainguard's approach to building secure, minimal container images for popular open source software.
We talk about why it's important to have secure and minimal container images. Dan explains how Chainguard helps eliminate the pain of CVEs, laggy software updates and patches, and much more. Chainguard's images are now also available on Docker Hub. The first part of the show covers the week's big news, the XZ supply chain attack, and Dan was the best person to explain it. During this jam-packed show, we also touched on CVEs, ways to reduce your attack surface, SLSA, and more.
👋 Monthly High Fivers Chat
Our High Fiver Chat is tomorrow at 12:00 PM US EDT (UTC-4). We'll use the High Fivers Discord voice channel. Our monthly High Fiver's chat is a Zoom call with me once a month to talk about whatever's on your technical mind. Learn more here.
🐦 Tweet of the week
Related to my last newsletter on keeping up with CNCF project releases:
🗓️ What's new this week (year?)
The Cloud Native world (Projects and Members of the CNCF) is too big for one person to know well.
That's an understatement. It's too big for one person to even keep track of new feature releases. We've all got our focus areas, and my focus has always been around runtimes (docker, containerd), orchestration essentials (Compose, Swarm, Kubernetes core, ingress, podspec, admin tools) and build-and-deploy automation (GitOps, CI, GitHub Actions, etc.).
But in 2024, even that feels like too much to keep up with, never mind trying to track how AI is changing things.
Here's my tweet to get feedback on how others keep up with project updates.
Some things I do for CNCF project updates, though I have no idea if there are better ways:
- Subscribe to GitHub repo releases
- Google Alerts for blog URLs
- Listen to the first 10 minutes of Google's Kubernetes Podcast
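The GitHub releases part can also be scripted. Here's a small sketch using the official GitHub CLI (`gh`), assuming you're already authenticated; the repo list is just an example set:

```shell
# Print the latest release tag for a handful of CNCF projects,
# using the GitHub REST API via the gh CLI's built-in jq filter
for repo in kubernetes/kubernetes argoproj/argo-cd containerd/containerd; do
  echo -n "$repo: "
  gh api "repos/$repo/releases/latest" --jq .tag_name
done
```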
Kubernetes 1.29 and 1.30: Not a lot that the average user would care about, but a few I was interested in, including the new ReadWriteOncePod access mode for PVs, KMS v2 support, and multiple cluster autoscaling improvements.
Argo CD 2.10: ApplicationSet templates, server-side diff, and a dozen other noteworthy changes.
OCI Distribution (registries) & Image 1.1: Official Artifacts support, referrers API, zstd compression support.
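The referrers API is just a new registry endpoint defined in the OCI Distribution Spec 1.1. A hedged sketch of querying it with curl (the registry host, repo name, and digest are placeholders, and the registry must actually implement 1.1):

```shell
# List artifacts (signatures, SBOMs, attestations) that reference
# an image digest, via the OCI Distribution 1.1 referrers API:
#   GET /v2/<name>/referrers/<digest>
curl -s \
  -H "Accept: application/vnd.oci.image.index.v1+json" \
  "https://registry.example.com/v2/my-app/referrers/sha256:<digest>"
```

The response is an OCI image index whose manifests list everything attached to that digest, which is what tools like cosign build on.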
🔴 Streaming Thursday May 2nd: Cloud Native DevOps Q&A
I don't know about you but I feel like we're overdue for a Q&A show. So, the wait is over. This week's show will be our ask-me-anything format dedicated to helping you address your questions and issues. Of course, as usual, our focus is cloud native DevOps topics around containerization, orchestration, automation, infrastructure, and more.
Click the dinner bell 🔔 to get your reminder. You can also add it to your calendar here.
👋 Monthly High Fivers Chat
Mark your calendar for May's High Fiver Chat. When: Wednesday, May 15, 12:00 PM Eastern Time (US/Canada) (UTC-4).
We'll use the High Fivers Discord voice channel. What is High Fiver's Chat? It's a Zoom call with me once a month to talk about whatever's on your technical mind. Learn more here.
🐦 Tweet of the week
I learned from @IronicBadger that @Tailscale has an Apple TV app that works as an exit node, and it's a game changer for remoting into the home lab and streaming from every device I travel with https://t.co/eRb5ezasVH