
Docker Model Runner now included with the Universal Blue family

By: Yiwen Xu
December 16, 2025 at 12:24

Running large language models (LLMs) and other generative AI models can be a complex, frustrating process of managing dependencies, drivers, and environments. At Docker, we believe this should be as simple as docker model run.

That’s why we built Docker Model Runner, and today, we’re thrilled to announce a new collaboration with Universal Blue. Thanks to the fantastic work of the Universal Blue contributors, Docker Model Runner is now included in OSes such as Aurora and Bluefin, giving developers a powerful, out-of-the-box AI development environment.

What is Docker Model Runner?

For those who haven’t tried it yet, Docker Model Runner is our new “it just works” experience for running generative AI models.

Our goal is to make running a model as simple as running a container.

Here’s what makes it great:

  • Simple UX: We’ve streamlined the process down to a single, intuitive command: docker model run <model-name>.
  • Broad GPU Support: While we started with NVIDIA, we’ve recently added Vulkan support. This is a big deal—it means Model Runner works on pretty much any modern GPU, including AMD and Intel, making AI accessible to more developers than ever.
  • vLLM backend: Run high-throughput inference on NVIDIA GPUs.

The Perfect Home for Model Runner

If you’re new to it, Universal Blue is a family of next-generation, developer-focused Linux desktops. They provide modern, atomic, and reliable environments that are perfect for “cloud-native” workflows.

As Jorge Castro, who leads developer relations at the Cloud Native Computing Foundation, explains: “Bluefin and Aurora are reference architectures for bootc, which is a CNCF Sandbox Project. They are just two examples showing how the same container pattern used by application containers can also apply to operating systems. Working with AI models is no different – one common set of tools, built around OCI standards.”

The team already ships Docker as a core part of its developer-ready experience. By adding Docker Model Runner to the default installation (specifically in the -dx mode for developers), they’ve created a complete, batteries-included AI development environment.

There’s no setup, no config. If you’re on Bluefin/Aurora, you just open a terminal and start running models.

Get Started Today

If you’re running the latest Bluefin LTS, you’re all set when you turn on developer mode. The Docker engine and Model Runner CLI are already installed and waiting for you. Aurora’s enablement instructions are documented here.

You can run your first model in seconds:

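For example, a representative first run (the original post shows this step as a screenshot; the model name here is illustrative):

docker model run ai/smollm2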

This command will download the model (if not already cached) and run it, ready for you to interact with.

If you’re on another Linux distribution, you can get started just as easily. Just follow the instructions in our GitHub repository.

What’s Next?

This collaboration is a fantastic example of community-driven innovation. We want to give a huge shoutout to the greater bootc enthusiast community for their forward-thinking approach and for integrating Docker Model Runner so quickly.

This is just the beginning. We’re committed to making AI development accessible, powerful, and fun for all developers.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. We need your help to make this project the best it can be. To get involved, you can:

  • Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.
  • Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it. Or fork the repository, make your changes, and submit a pull request. We’re excited to see what ideas you have!
  • Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!

Docker Model Runner now supports vLLM on Windows

By: Yiwen Xu
December 11, 2025 at 16:12

Great news for Windows developers working with AI models: Docker Model Runner now supports vLLM on Docker Desktop for Windows with WSL2 and NVIDIA GPUs!

Until now, vLLM support in Docker Model Runner was limited to Docker Engine on Linux. With this update, Windows developers can take advantage of vLLM’s high-throughput inference capabilities directly through Docker Desktop, leveraging their NVIDIA GPUs for accelerated local AI development.

What is Docker Model Runner?

For those who haven’t tried it yet, Docker Model Runner is our new “it just works” experience for running generative AI models.

Our goal is to make running a model as simple as running a container.

Here’s what makes it great:

  • Simple UX: We’ve streamlined the process down to a single, intuitive command: docker model run <model-name>.
  • Broad GPU Support: While we started with NVIDIA, we’ve recently added Vulkan support. This is a big deal—it means Model Runner works on pretty much any modern GPU, including AMD and Intel, making AI accessible to more developers than ever.
  • vLLM backend: Run high-throughput inference on NVIDIA GPUs.

What is vLLM?

vLLM is a high-throughput inference engine for large language models. It’s designed for efficient memory management of the KV cache and excels at handling concurrent requests with impressive performance. If you’re building AI applications that need to serve multiple requests or require high-throughput inference, vLLM is an excellent choice. Learn more here.

Prerequisites

Before getting started, make sure you have the prerequisites for GPU support:

  • Docker Desktop for Windows (starting with Docker Desktop 4.54)
  • WSL2 backend enabled in Docker Desktop
  • NVIDIA GPU with compute capability >= 8.0 and up-to-date drivers
  • GPU support configured in Docker Desktop

Getting Started

Step 1: Enable Docker Model Runner

First, ensure Docker Model Runner is enabled in Docker Desktop. You can do this through the Docker Desktop settings or via the command line:

docker desktop enable model-runner --tcp 12434

Step 2: Install the vLLM Backend

To use vLLM, install the vLLM runner with CUDA support:

docker model install-runner --backend vllm --gpu cuda

Step 3: Verify the Installation

Check that both inference engines are running:

docker model status

You should see output similar to:

Docker Model Runner is running

Status:
llama.cpp: running llama.cpp version: c22473b
vllm: running vllm version: 0.12.0

Step 4: Run a Model with vLLM

Now you can pull and run models optimized for vLLM. Models with the -vllm suffix on Docker Hub are packaged for vLLM:

docker model run ai/smollm2-vllm "Tell me about Docker."

Troubleshooting Tips

GPU Memory Issues

If you encounter an error like:

ValueError: Free memory on device (6.96/8.0 GiB) on startup is less than desired GPU memory utilization (0.9, 7.2 GiB).

You can configure the GPU memory utilization for a specific model:

docker model configure --gpu-memory-utilization 0.7 ai/smollm2-vllm

This reduces the memory footprint, allowing the model to run alongside other GPU workloads.

Why This Matters

This update brings several benefits for Windows developers:

  • Production parity: Test with the same inference engine you’ll use in production
  • Unified workflow: Stay within the Docker ecosystem you already know
  • Local development: Keep your data private and reduce API costs during development

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. We need your help to make this project the best it can be. To get involved, you can:

  • Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.
  • Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it. Or fork the repository, make your changes, and submit a pull request. We’re excited to see what ideas you have!
  • Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!

Announcing vLLM v0.12.0, Ministral 3 and DeepSeek-V3.2 for Docker Model Runner

By: Yiwen Xu
December 5, 2025 at 13:28

At Docker, we are committed to making the AI development experience as seamless as possible. Today, we are thrilled to announce two major updates that bring state-of-the-art performance and frontier-class models directly to your fingertips: the immediate availability of Mistral AI’s Ministral 3 and DeepSeek-V3.2, alongside the release of vLLM v0.12.0 on Docker Model Runner.

Whether you are building high-throughput serving pipelines or experimenting with edge-optimized agents on your laptop, today’s updates are designed to accelerate your workflow.

Meet Ministral 3: Frontier Intelligence, Edge Optimized


While vLLM powers your production infrastructure, we know that development needs speed and efficiency right now. That’s why we are proud to add Mistral AI’s newest marvel, Ministral 3, to the Docker Model Runner library on Docker Hub.

Ministral 3 is Mistral AI’s premier edge model. It packs frontier-level reasoning and capabilities into a dense, efficient architecture designed specifically for local inference. It is perfect for:

  • Local RAG applications: Chat with your docs without data leaving your machine.
  • Agentic Workflows: Fast reasoning steps for complex function-calling agents.
  • Low-latency prototyping: Test ideas instantly without waiting for API calls.

DeepSeek-V3.2: The Open Reasoning Powerhouse


We are equally excited to introduce support for DeepSeek-V3.2. Known for pushing the boundaries of what open-weights models can achieve, the DeepSeek-V3 series has quickly become a favorite for developers requiring high-level reasoning and coding proficiency.

DeepSeek-V3.2 brings Mixture-of-Experts (MoE) architecture efficiency to your local environment, delivering performance that rivals top-tier closed models. It is the ideal choice for:

  • Complex Code Generation: Build and debug software with a model specialized in programming tasks.
  • Advanced Reasoning: Tackle complex logic puzzles, math problems, and multi-step instructions.
  • Data Analysis: Process and interpret structured data with high precision.

Run Them with One Command

With Docker Model Runner, you don’t need to worry about complex environment setups, Python dependencies, or weight downloads. We’ve packaged both models so you can get started immediately.

To run Ministral 3:

docker model run ai/ministral3

To run DeepSeek-V3.2:

docker model run ai/deepseek-v3.2-vllm

These commands automatically pull the model, set up the runtime, and drop you into an interactive chat session. You can also point your applications to them using our OpenAI-compatible local endpoint, making them drop-in replacements for your cloud API calls during development.
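For instance, a minimal sketch of calling Ministral 3 through the local endpoint (this assumes Model Runner’s TCP endpoint is enabled on port 12434, as shown elsewhere in this series):

curl --location 'http://localhost:12434/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "model": "ai/ministral3",
  "messages": [
    {
      "role": "user",
      "content": "Summarize the main idea of containerizing AI models."
    }
  ]
}'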

vLLM v0.12.0: Faster, Leaner, and Ready for What’s Next


We are excited to highlight the release of vLLM v0.12.0. vLLM has quickly become the gold standard for high-throughput and memory-efficient LLM serving, and this latest version raises the bar again.

Version 0.12.0 brings critical enhancements to the engine, including:

  • Expanded Model Support: Day-0 support for the latest architecture innovations, ensuring you can run the newest open-weights models (like DeepSeek V3.2 and Ministral 3) the moment they drop.
  • Optimized Kernels: Significant latency reductions for inference on NVIDIA GPUs, making your containerized AI applications snappier than ever.
  • Enhanced PagedAttention: Further optimizations to memory management, allowing you to batch more requests and utilize your hardware to its full potential.

Why This Matters

The combination of Ministral 3, DeepSeek-V3.2, and vLLM v0.12.0 represents the maturity of the open AI ecosystem.

You now have access to a serving engine that maximizes data center performance, alongside a choice of models to fit your specific needs—whether you prioritize the edge-optimized speed of Ministral 3 or the deep reasoning power of DeepSeek-V3.2. All of this is easily accessible via Docker Model Runner.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. We need your help to make this project the best it can be. To get involved, you can:

  • Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.
  • Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it. Or fork the repository, make your changes, and submit a pull request. We’re excited to see what ideas you have!
  • Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!

Docker Model Runner Integrates vLLM for High-Throughput Inference

By: Yiwen Xu
November 20, 2025 at 13:44

Expanding Docker Model Runner’s Capabilities

Today, we’re excited to announce that Docker Model Runner now integrates the vLLM inference engine and safetensors models, unlocking high-throughput AI inference with the same Docker tooling you already use.

When we first introduced Docker Model Runner, our goal was to make it simple for developers to run and experiment with large language models (LLMs) using Docker. We designed it to integrate multiple inference engines from day one, starting with llama.cpp, to make it easy to get models running anywhere.

Now, we’re taking the next step in that journey. With vLLM integration, you can scale AI workloads from low-end to high-end Nvidia hardware, without ever leaving your Docker workflow.

Why vLLM?


vLLM is a high-throughput, open-source inference engine built to serve large language models efficiently at scale. It’s used across the industry for deploying production-grade LLMs thanks to its focus on throughput, latency, and memory efficiency.

Here’s what makes vLLM stand out:

  • Optimized performance: Uses PagedAttention, an advanced attention algorithm that minimizes memory overhead and maximizes GPU utilization.
  • Scalable serving: Handles batch requests and streaming outputs natively, perfect for interactive and high-traffic AI services.
  • Model flexibility: Works seamlessly with popular open-weight models like GPT-OSS, Qwen3, Mistral, Llama 3, and others in the safetensors format.

By bringing vLLM to Docker Model Runner, we’re bridging the gap between fast local experimentation and robust production inference.

How vLLM Works

Running vLLM models with Docker Model Runner is as simple as installing the backend and running your model, no special setup required.

Install Docker Model Runner with vLLM backend:

docker model install-runner --backend vllm --gpu cuda

Once the installation finishes, you’re ready to start using it right away:

docker model run ai/smollm2-vllm "Can you read me?"

Sure, I am ready to read you.

Or access it via API:

curl --location 'http://localhost:12434/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "model": "ai/smollm2-vllm",
  "messages": [
    {
      "role": "user",
      "content": "Can you read me?"
    }
  ]
}'

Note that there’s no reference to vLLM in the HTTP request or CLI command.

That’s because Docker Model Runner automatically routes the request to the correct inference engine based on the model you’re using, ensuring a seamless experience whether you’re using llama.cpp or vLLM.
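To make the routing concrete, here is a hedged pair of commands: the same CLI verb serves both formats, and the model name suffix is the only hint of which engine handles the request (model names are illustrative):

docker model run ai/smollm2          # GGUF model, served by llama.cpp
docker model run ai/smollm2-vllm     # safetensors model, served by vLLM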

Why Multiple Inference Engines?

Until now, developers had to choose between simplicity and performance. You could either run models easily (using simplified portable tools like Docker Model Runner with llama.cpp) or achieve maximum throughput (with frameworks like vLLM).

Docker Model Runner now gives you both.

You can:

  • Prototype locally with llama.cpp.
  • Scale to production with vLLM.

Use the same consistent Docker commands, CI/CD workflows, and deployment environments throughout.

This flexibility makes Docker Model Runner a first in the industry — no other tool lets you switch between multiple inference engines within a single, portable, containerized workflow.

By unifying these engines under one interface, Docker is making AI truly portable, from laptops to clusters, and everything in between.

Safetensors (vLLM) vs. GGUF (llama.cpp): Choosing the Right Format

With the addition of vLLM, Docker Model Runner is now compatible with the two most dominant open-source model formats: Safetensors and GGUF. While Model Runner abstracts the complexity of setting up the engines, understanding the difference between these formats helps in choosing the right tool for your infrastructure.

  • GGUF (GPT-Generated Unified Format): The native format for llama.cpp, GGUF is designed for high portability and quantization. It is excellent for running models on commodity hardware where memory bandwidth is limited. It packages the model architecture and weights into a single file.
  • Safetensors: The native format for vLLM and the modern standard for high-end inference, safetensors is built for high-throughput performance.

Docker Model Runner intelligently routes your request: if you pull a GGUF model, it utilizes llama.cpp; if you pull a safetensors model, it leverages the power of vLLM. With Docker Model Runner, both can be pushed and pulled as OCI images to any OCI registry.
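As a sketch of that OCI workflow (the registry host and repository names here are hypothetical):

docker model pull ai/smollm2-vllm
docker model tag ai/smollm2-vllm registry.example.com/team/smollm2-vllm:v1
docker model push registry.example.com/team/smollm2-vllm:v1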

vLLM-compatible models on Docker Hub

vLLM models are in safetensors format. Some early safetensors models are already available on Docker Hub; look for repositories with the -vllm suffix, such as ai/smollm2-vllm.

Available Now: x86_64 with Nvidia

Our initial release is optimized for and available on systems running the x86_64 architecture with Nvidia GPUs. Our team has dedicated its efforts to creating a rock-solid experience on this platform, and we’re confident you’ll feel the difference.

What’s Next?

This launch is just the beginning. Our vLLM roadmap is focused on two key areas: expanding platform access and continuous performance tuning.

  • WSL2/Docker Desktop compatibility: We know that a seamless “inner loop” is critical for developers. We are actively working to bring the vLLM backend to Windows via WSL2. This will allow you to build, test, and prototype high-throughput AI applications on Docker Desktop with the same workflow you use in Linux environments, starting with Windows machines with NVIDIA GPUs.
  • DGX Spark compatibility: We are optimizing Model Runner for different kinds of hardware. We are working to add compatibility for Nvidia DGX systems.
  • Performance Optimization: We’re also actively tracking areas for improvement. While vLLM offers incredible throughput, we recognize that its startup time is currently slower than llama.cpp’s. This is a key area we are looking to optimize in future enhancements to improve the “time-to-first-token” for rapid development cycles.

Thank you for your support and patience as we grow.

How You Can Get Involved

The strength of Docker Model Runner lies in its community, and there’s always room to grow. We need your help to make this project the best it can be. To get involved, you can:

  • Star the repository: Show your support and help us gain visibility by starring the Docker Model Runner repo.
  • Contribute your ideas: Have an idea for a new feature or a bug fix? Create an issue to discuss it. Or fork the repository, make your changes, and submit a pull request. We’re excited to see what ideas you have!
  • Spread the word: Tell your friends, colleagues, and anyone else who might be interested in running AI models with Docker.

We’re incredibly excited about this new chapter for Docker Model Runner, and we can’t wait to see what we can build together. Let’s get to work!

Dynamic MCPs with Docker: Stop Hardcoding Your Agents’ World

By: Yiwen Xu
November 6, 2025 at 20:51

The MCP protocol is almost one year old, and in that time developers have built thousands of new MCP servers. Think back to MCP demos from six months ago: most developers were using one or two local MCP servers, each contributing just a handful of tools. Six months later, we have access to thousands of tools, and a new set of issues.

  1. Which MCP servers do we trust?
  2. How do we avoid filling our context with tool definitions that we won’t end up needing?
  3. How do agents discover, configure, and use tools efficiently and autonomously?

With the latest features in Docker MCP Gateway, including Smart Search and Tool Composition, we’re shifting from “What do I need to configure?” to “What can I empower agents to do?” 

This week, Anthropic also released a post about building more efficient agents, and they have called out many of the same issues that we’ll be discussing in this post. Now that we’ve made progress towards having tools, we can start to think more about effectively using tools.

With dynamic MCPs, agents don’t just search for or add tools, but write code to compose new ones within a secure sandbox, improving both tool efficiency and token usage.

Enabling Agents to Find, Add, and Configure MCPs Dynamically with Smart Search 

If you think about how we configure MCPs today, the process is not particularly agentic. Typically, we leave the agent interface entirely, do some old-school configuration hacking (usually editing a JSON file of some kind), and then restart our agent session to check if the MCPs have become available. As the number of MCP servers grows, is this going to work?

So what prevents our agents from doing more to help us discover useful MCP servers?

We think that Docker’s OSS gateway can help here. As the gateway manages the interface between an agent and any of the MCP servers in the gateway’s catalog, there is an opportunity to mediate that relationship in new ways. 

Out of the box, the gateway ships with a default catalog, the Docker MCP Catalog, including over 270 curated servers, along with the ability to curate your own private catalogs (e.g., using servers from the community registry). And because it runs on Docker, you can pull and run any of them with minimal setup. That directly tackles the first friction point: discovery of trusted MCP servers.


Figure 1: The Docker MCP Gateway now includes mcp-find and mcp-add, new Smart Search features that let agents discover and connect to trusted MCP servers in the Docker MCP Catalog, enabling secure, dynamic tool usage.

However, the real key to dynamic MCPs is a small but crucial adjustment to the agent’s MCP session. The gateway provides a small set of primordial tools that the agent uses to search the catalog and to either add or remove servers from the current session. Just as in the post from Anthropic, which suggests a search_tools tool, we have added new tools to help the agent manage their MCP servers.

  • mcp-find: Find MCP servers in the current catalog by name or description. Return matching servers with their details.
  • mcp-add: Add a new MCP server to the session. The server must exist in the catalog.

With this small tweak, the agent can now help us negotiate a new MCP session. To make this a little more concrete, we’ll show an agent connected to the gateway asking for the DuckDuckGo MCP and then performing a search.


Figure 2: A demo of using mcp-find and mcp-add to connect to the DuckDuckGo MCP server and run a search

Configuring MCP Servers with Agent-Led Workflows

In the example above, we started by connecting our agent to the catalog of MCPs (see docker mcp client connect --help for options). The agent then adds a new MCP server to the current session. To be clear, the DuckDuckGo MCP server is quite simple. Since it does not require any configuration, all we needed to do was search the catalog, pull the image from a trusted registry, and spin up the MCP server in the local Docker Engine.

However, some MCP servers will require inputs before they can start up. For example, remote MCP servers might require that the user go through an OAuth flow. In the next example, the gateway responds by requesting that we authorize this new MCP server. Now that MCP supports elicitations, and frameworks like mcp-ui allow MCPs to render UI elements into the chat, we have begun to optimize these flows based on client-side capabilities.


Figure 3: Using mcp-find and mcp-add to connect to the Notion MCP server, including an OAuth flow

Avoid An Avalanche of Tools: Dynamic Tool Selection

In the building more efficient agents post, the authors highlight two ways that tools currently make token consumption less efficient.

  1. Tool definitions in the context window
  2. Intermediate tool results

The result is the same in both cases: too many tokens are consumed by content the model doesn’t actually need. It takes surprisingly few tools for the context window to accumulate hundreds of thousands of tokens of nothing but tool definitions.

Again, this is something we can improve. In the MCP gateway project, we’ve started distinguishing between tools that are available to a find tool and tools that are added to the context window. Just as we’re giving agents tools for server selection, we can give them new ways to select tools.


Figure 4: Dynamic Tools in action: Tools can now be actively selected, avoiding the need to load all available tools into every LLM request.

The idea is conceptually simple. We are providing an option to allow agents to add servers that do not automatically put their tools into the context window. With today’s agents, this means adding MCP servers that don’t return tool definitions in tools/list requests, but still make them available to find tool calls. This is easy to do because we have an MCP gateway to mediate tools/list requests and to inject new task-oriented find tools. New primordial tools like mcp-exec and mcp-find provide agents with new ways to discover and use MCP server tools.

Once we start to think about tool selection differently, it opens up a range of possibilities.

Using Tools in a new way: From Tool Calls to Tool Composition with code-mode

The idea of “code mode” has been getting a lot of attention since Cloudflare posted about a better way to use tools several weeks ago. The idea actually dates back to the paper “CodeAct: Your LLM Agent Acts Better when Generating Code“, which proposed that LLMs could improve agent-oriented tasks by first consolidating agent actions into code. The recent post from Anthropic also frames code mode as a way to improve agent efficiency by reducing the number of tool definitions and tool outputs in the context window.

We’re really excited by this idea. By making it possible for agents to “code” directly against MCP tool interfaces, we can provide agents with “code-mode” tools that use the tools in our current MCP catalog in new ways. By combining mcp-find with code-mode, the agent can still access a large, and dynamic, set of available tools while putting just one or two new tools into the context window. Our current code-mode tool writes JavaScript and takes available MCP servers as parameters.

code-mode: Create a JavaScript-enabled tool that can call tools from any of the servers listed in the servers parameter.

However, this is still code written by an agent. If we’re going to run this code, we’re going to want it to run in a sandbox. Our MCP servers are already running in Docker containers, and the code mode sandbox is no different. In fact, it’s an ideal case because this container only needs access to other MCP servers! The permissions for accessing external systems are already managed at the MCP layer.

This approach offers three key benefits:

  • Secure by Design: The agent stays fully contained within a sandbox. We do not give up any of the benefits of sandboxing. The code-mode tool uses only containerized MCP servers selected from the catalog.
  • Token and Tool Efficiency: The tools it uses do not have to be sent to the model on every request. On subsequent turns, the model just needs to know about one new code-mode tool. In practice, this can result in hundreds of thousands fewer tokens being sent to the model on each turn.
  • State persistence: Volumes manage state across tool calls and track intermediate results that need not, or even should not, be sent to the model.

A popular illustration of this pattern is building a code-mode tool using the GitHub official MCP server. The GitHub server happens to ship with a large number of tools, so code-mode has a dramatic impact. In the example below, we prompt an agent to create a new code-mode tool out of the GitHub official and Markdownify MCP servers.


Figure 5: Using the MCP code-mode to write code to call tools from the GitHub Official and Markdownify MCP servers

The combination of Smart Search and Tool Composition unlocks dynamic, secure use of MCPs. Agents can now go beyond simply finding or adding tools; they can write code to compose new tools, and run them safely in a secure sandbox. 

The result: faster tool discovery, lower token usage, fewer manual steps, and more focused time for developers.

| Workflow | Before: Static MCP setup | After: Dynamic MCPs via Docker MCP Gateway | Impact |
|---|---|---|---|
| Tool discovery | Manually browse the MCP servers | mcp-find searches the Docker MCP Catalog (230+ servers) by name/description | Faster discovery |
| Adding tools | Enable the MCP servers manually | mcp-add pulls only the servers an agent needs into the current session | Zero manual config; just-in-time tooling |
| Authentication | Configure the MCP servers ahead of time | Prompt the user to complete OAuth when a remote server requires it | Smoother onboarding as clients adopt MCP elicitations and UI layers like mcp-ui |
| Tool composition | Agent-generated tool calls; tool definitions are sent to the model | With code-mode, agents write code that uses multiple MCP tools | Multi-tool workflows and unified outputs |
| Context size | Load lots of unused tool definitions | Keep only the tools actually required for the task | Lower token usage and latency |
| Future-proofing | Static integrations | Dynamic, composable tools with sandboxed scripting | Ready for evolving agent behaviors and catalogs |
| Developer involvement | Constant context switching and config hacking | Agents self-serve: discover, authorize, and orchestrate tools | Fewer manual steps; better focus time |

Table 1: Summary of Benefits from Docker’s Smart Search and Tool Composition for Dynamic MCPs 

From Docker to Your Editor: Running dynamic MCP tools with cagent and ACP

Another new component of the Docker platform is cagent, our open source agent builder and runtime, which provides a simple way to build and distribute new agents. The latest version of cagent now supports the Agent Client Protocol (ACP), which allows developers to add custom agents to ACP-enabled editors like Neovim or Zed, and then share these agents by pushing them to, or pulling them from, Docker Hub.

This means that we can now build agents that know how to use features like smart search tools or code mode, and then embed these agents in ACP-powered editors using cagent. Here’s an example agent, running in Neovim, that helps us discover new tools relevant to whatever project we are currently editing.


Figure 6: Running Dynamic MCPs in Neovim via Agent Client Protocol and a custom agent built with cagent, preconfigured with MCP server knowledge

In their section on state persistence and skills, the folks at Anthropic also hint at the idea that dynamic tools and code-mode execution bring us closer to a world where, over time, agents accumulate code and tools that work well together. Our current code-mode tool does not yet save the code it writes back to the project, but that is an area we’re working on.

For the Neovim example above, we used the ACP support in the CodeCompanion plugin; also check out the cagent adapter in this repo. For Zed, see their doc on adding custom agents, and of course try out cagent acp agent.yaml with your own custom agent.yaml file.

Getting Started with Dynamic MCPs Using Smart Search and Tool Composition

Dynamic tools are now available in the MCP gateway project. Unless you are running the gateway with an explicit set of servers (using the existing --servers flag), these tools are available to your agent by default. The dynamic tools feature can also be disabled using docker mcp feature disable dynamic-tools. This is a feature we’re actively developing, so please try it out and let us know what you think by opening an issue or starting a discussion in our repo.

Get started by connecting your favorite client to the MCP gateway using docker mcp client connect, or by adding a connection using the “Clients” tab in the Docker Desktop MCP Toolkit panel.

Summary

The Docker MCP Toolkit combines a trusted runtime (the Docker Engine) with catalogs of MCP servers. Beginning with Docker Desktop 4.50, we are extending the MCP gateway interface with new tools like mcp-find, mcp-add, and code-mode, enabling agents to discover MCP servers more effectively and even to use those servers in new ways.

Whether it’s searching or pulling from a trusted catalog, initiating an OAuth flow, or scripting multi-tool workflows in a sandboxed runtime, agents can now do more on their own. And that takes us a big step closer to the agentic future we’ve been promised! 

Got feedback? Open an issue or start a discussion in our repo.

Learn more

  • Explore the MCP Gateway Project: Visit the GitHub repository for code, examples, and contribution guidelines.
  • Dive into Smart Search and Tool Composition: Read the full documentation to understand how these features enable dynamic, efficient agent workflows.
  • Learn more about Docker’s MCP Solutions

Docker Desktop 4.43: Expanded Model Runner, Reimagined MCP Catalog, MCP Server Submissions, and Smarter Gordon

By: Yiwen Xu
July 3, 2025 at 14:57

Docker Desktop 4.43 just rolled out a set of powerful updates that simplify how developers run, manage, and secure AI models and MCP tools. 

Model Runner now includes better model management, expanded OpenAI API compatibility, and fine-grained controls over runtime behavior. The improved MCP Catalog makes it easier to discover and use MCP servers, and now supports submitting your own MCP servers! Meanwhile, the MCP Toolkit streamlines integration with VS Code and GitHub, including built-in OAuth support for secure authentication. Gordon, Docker’s AI agent, now supports multi-threaded conversations with faster, more accurate responses. And with the new Compose Bridge, you can convert local compose.yaml files into Kubernetes configuration in a single command. 

Together, these updates streamline the process of building agentic AI apps and offer a preview of Docker’s ongoing efforts to make it easier to move from local development to production.


New model management commands and expanded OpenAI API support in Model Runner

This release includes improvements to the user interface of the Docker Model Runner, the inference APIs, and the inference engine under the hood.

Starting with the user interface, developers can now inspect models (including those already pulled from Docker Hub and those available remotely in the AI catalog) via model cards available directly in Docker Desktop. Below is a screenshot of what the model cards look like:


Figure 1: View model cards directly in Docker Desktop to get an instant overview of all variants in the model family and their key features.

In addition to the GUI changes, the docker model command adds three new subcommands to help developers inspect, monitor, and manage models more effectively:

  • docker model ps: Show which models are currently loaded into memory
  • docker model df: Check disk usage for models and inference engines
  • docker model unload: Manually unload a model from memory (before its idle timeout)

For WSL2 users who enable Docker Desktop integration, all of the docker model commands are also now available from your WSL2 distros, making it easier to work with models without leaving your Linux-based workflow.

On the API side, Model Runner now offers additional OpenAI API compatibility and configurability. Specifically, tools are now supported with {"stream": true}, making agents built on Docker Model Runner more dynamic and responsive. Model Runner’s API endpoints now support OPTIONS calls for better compatibility with existing tooling. Finally, developers can now configure CORS origins in the Model Runner settings pane, offering better compatibility and control over security.
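As an illustration of the expanded compatibility, a request can now combine tools with streaming. A minimal sketch (the get_weather function is purely illustrative, and the endpoint assumes Model Runner’s TCP port 12434 is enabled):

curl --location 'http://localhost:12434/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
  "model": "ai/gemma3",
  "stream": true,
  "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}
    }
  }]
}'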


Figure 2: CORS Allowed Origins are now configurable in Docker Model Runner settings, giving developers greater flexibility and control.

For developers who need fine-grained control over model behavior, we’re also introducing the ability to set a model’s context size and even the runtime flags for the inference engine via Docker Compose, for example:

models:
  gemma3:
    model: ai/gemma3
    context_size: 8192
    runtime_flags:  ["--no-prefill-assistant"]

In this example, we’re using the (optional) context_size and runtime_flags parameters to control the behavior of the inference engine underneath. In this case, the associated runtime is the default (llama.cpp), and you can find a list of flags here. Certain flags may override the stable default configuration that we ship with Docker Desktop, but we want users to have full control over the inference backend. It’s also worth noting that a particular model architecture may limit the maximum context size. You can find information about maximum context lengths on the associated model cards on Docker Hub.

Under the hood, we’ve focused on improving stability and usability. We now have better error reporting in the event that an inference process crashes, along with more aggressive eviction of crashed engine processes. We’ve also enhanced the Docker CE Model Runner experience with better handling of concurrent usage and more robust support for model providers in Compose on Docker CE.

MCP Catalog & Toolkit: Secure, containerized AI tools at scale

New and redesigned MCP Catalog 

Docker’s MCP Catalog now features an improved experience, making it easier to search, discover, and identify the right MCP servers for your workflows. You can still access the catalog through Docker Hub or directly from the MCP Toolkit in Docker Desktop, and now, it’s also available via a dedicated web link for even faster access. 


Figure 3: Quickly find the right MCP server for your agentic app and use the new Catalog to browse by specific use cases.

The MCP Catalog currently includes over 100 verified, containerized tools, with hundreds more on the way. Unlike traditional npx or uvx workflows that execute code directly on your host, every MCP server in the catalog runs inside an isolated Docker container. Each one includes cryptographic signatures, a Software Bill of Materials (SBOM), and provenance attestations. 

This approach eliminates the risks of running unverified code and ensures consistent, reproducible environments across platforms. Whether you need database connectors, API integrations, or development tools, the MCP Catalog provides a trusted, scalable foundation for AI-powered development workflows that move the entire ecosystem away from risky execution patterns toward production-ready, containerized solutions.

Submit your MCP Server to the Docker MCP Catalog

We’re launching a new submission process, giving developers flexible ways to contribute by following the process here. Developers can choose between two options: Docker-Built and Community-Built servers.

Docker-Built Servers 

When you see “Built by Docker,” you’re getting our complete security treatment. We control the entire build pipeline, providing cryptographic signatures, SBOMs, provenance attestations, and continuous vulnerability scanning.

Community-Built Servers 

These servers are packaged as Docker images by their developers. While we don’t control their build process, they still benefit from container isolation, which is a massive security improvement over direct execution.

Docker-built servers demonstrate the gold standard for security, while community-built servers ensure we can scale rapidly to meet developer demand. Developers can change their mind after submitting a community-built server and opt to resubmit it as a Docker-built server. 

Get your MCP server featured in the Docker MCP Catalog today and reach over 20 million developers. Learn more about our new MCP Catalog in our announcement blog and get insights on best practices for building, running, and testing MCP servers. Join us in building the largest library of secure, containerized MCP servers!

MCP Toolkit adds OAuth support and streamlined Integration with GitHub and VS Code

Many MCP servers’ credentials are passed as plaintext environment variables, exposing sensitive data and increasing the risk of leaks. The MCP Toolkit eliminates that risk with secure credential storage, allowing clients to authenticate with MCP servers and third-party services without hardcoding secrets. We’re taking it a step further with OAuth support, starting with the most widely used developer tool, GitHub. This will make it even easier to integrate secure authentication into your development workflow.


Figure 4: OAuth is now supported for the GitHub MCP server.

To set up your GitHub MCP server, go to the OAuth tab, connect your GitHub account, enable the server, and authorize OAuth for secure authentication.


Figure 5: Go to the configuration tab of the GitHub MCP server to enable OAuth for secure authentication

The MCP Toolkit allows you to connect MCP servers to any MCP client, with one-click connection to popular ones such as Claude and Cursor. We are also making it easier for developers to connect to VS Code with the docker mcp client connect vscode command. When run in your project’s root folder, it creates an mcp.json configuration file in your .vscode folder.


Figure 6: Connect to VS Code via MCP commands in the CLI.

Additionally, you can configure the MCP Toolkit as a global MCP server available to VS Code by adding the following config to your user settings. Check out this doc for more details. Once connected, you can leverage GitHub Copilot in agent mode with full access to your repositories, issues, and pull requests.

"mcp": {
  "servers": {
    "MCP_DOCKER": {
      "command": "docker",
      "args": [
        "mcp",
        "gateway",
        "run"
      ],
      "type": "stdio"
    }
  }
}

Gordon gets smarter: Multi-threaded conversations and 5x faster performance

Docker’s AI Agent Gordon just got a major upgrade: multi-threaded conversation support. You can now run multiple distinct conversations in parallel and switch between topics like debugging a container issue in one thread and refining a Docker Compose setup in another, without losing context. Gordon keeps each thread organized, so you can pick up any conversation exactly where you left off.

Gordon’s new multi-threaded capabilities work hand-in-hand with MCP tools, creating a powerful boost for your development workflow. Use Gordon alongside your favorite MCP tools to get contextual help while keeping conversations organized by task. No more losing focus to context switching!


Figure 7: Gordon’s new multi-threaded support cuts down on context switching and boosts productivity.

We’ve also rolled out major performance upgrades: Gordon now responds 5x faster and delivers more accurate, context-aware answers. With improved understanding of Docker-specific commands, configurations, and troubleshooting scenarios, Gordon is smarter and more helpful than ever!

Compose Bridge: Seamlessly go from local Compose to Kubernetes 

We know that developers love Docker Compose for managing local environments: it’s simple and easy to understand. That’s why we’re excited to introduce Compose Bridge to Docker Desktop. This powerful new feature transforms your local compose.yaml into Kubernetes configuration with a single command.

Translate Compose to Kubernetes in seconds

Compose Bridge gives you a streamlined, flexible way to bring your Compose application to Kubernetes. With smart defaults and options for customization, it’s designed to support both simple setups and complex microservice architectures.

All it takes is:

docker compose bridge convert

And just like that, Compose Bridge generates the following Kubernetes resources from your Compose file:

  • A Namespace to isolate your deployment
  • A ConfigMap for every Compose config entry
  • Deployments for running and scaling your services
  • Services for exposed and published ports—including LoadBalancer services for host access
  • Secrets for any secrets in your Compose file (encoded for local use)
  • NetworkPolicies that reflect your Compose network topology
  • PersistentVolumeClaims using Docker Desktop’s hostpath storage

This approach replicates your local dev environment in Kubernetes quickly and accurately, so you can test in production-like conditions, faster.

Built-in flexibility and upcoming enhancements

Need something more customized? Compose Bridge supports advanced transformation options so you can tweak how services are mapped or tailor the resulting configuration to your infrastructure.

And we’re not stopping here—upcoming releases will allow Compose Bridge to generate Kubernetes config based on your existing cluster setup, helping teams align development with production without rewriting manifests from scratch.

Get started

You can start using Compose Bridge today:

  1. Download or update Docker Desktop
  2. Open your terminal and run:

docker compose bridge convert

  3. Review the documentation to explore customization options

Conclusion 

Docker Desktop 4.43 introduces practical updates for developers building at the intersection of AI and cloud-native apps. Whether you’re running local models, finding and running secure MCP servers, using Gordon for multi-threaded AI assistance, or converting Compose files to Kubernetes, this release cuts down on complexity so you can focus on shipping. From agentic AI projects to scaling workflows from local to production, you’ll get more control, smoother integration, and fewer manual steps throughout.

Learn more

Docker Desktop 4.41: Docker Model Runner supports Windows, Compose, and Testcontainers integrations, Docker Desktop on the Microsoft Store

By: Yiwen Xu
April 29, 2025 at 20:20

Big things are happening in Docker Desktop 4.41! Whether you’re building the next AI breakthrough or managing development environments at scale, this release is packed with tools to help you move faster and collaborate smarter. From bringing Docker Model Runner to Windows (with NVIDIA GPU acceleration!), along with Compose and Testcontainers integrations, to new ways to manage models in Docker Desktop, we’re making AI development more accessible than ever. Plus, we’ve got fresh updates for your favorite workflows — like a new Docker DX Extension for Visual Studio Code, a speed boost for Mac users, and even a new location for Docker Desktop on the Microsoft Store. Also, we’re enabling ACH transfer as a payment option for self-serve customers. Let’s dive into what’s new!


Docker Model Runner now supports Windows, Compose & Testcontainers

This release brings Docker Model Runner to Windows users with NVIDIA GPU support. We’ve also introduced improvements that make it easier to manage, push, and share models on Docker Hub and integrate with familiar tools like Docker Compose and Testcontainers. Docker Model Runner now works with Docker Compose projects (orchestrating model pulls and injecting model-runner services) and with Testcontainers via its libraries. These updates continue our focus on helping developers build AI applications faster using existing tools and workflows.

In addition to CLI support for managing models, Docker Desktop now includes a dedicated “Models” section in the GUI. This gives developers more flexibility to browse, run, and manage models visually, right alongside their containers, volumes, and images.


Figure 1: Easily browse, run, and manage models from Docker Desktop

Further extending the developer experience, you can now push models directly to Docker Hub, just like you would with container images. This creates a consistent, unified workflow for storing, sharing, and collaborating on models across teams. With models treated as first-class artifacts, developers can version, distribute, and deploy them using the same trusted Docker tooling they already use for containers — no extra infrastructure or custom registries required.

docker model push <model>
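For example, a hedged sketch of tagging a local model under your own namespace before pushing it (myorg is hypothetical):

docker model tag ai/gemma3 myorg/gemma3:v1
docker model push myorg/gemma3:v1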

The Docker Compose integration makes it easy to define, configure, and run AI applications alongside traditional microservices within a single Compose file. This removes the need for separate tools or custom configurations, so teams can treat models like any other service in their dev environment.


Figure 2: Using Docker Compose to declare services, including running AI models

Similarly, the Testcontainers integration extends testing to AI models, with initial support for Java and Go, and more languages on the way. This allows developers to run applications and create automated tests using AI services powered by Docker Model Runner. By enabling full end-to-end testing with Large Language Models, teams can confidently validate application logic and integration code, and drive high-quality releases. For example, in Java, using Testcontainers’ DockerModelRunnerContainer with an OpenAI-compatible chat client:

// Start the Docker Model Runner Testcontainers module with the chosen model.
String modelName = "ai/gemma3";
DockerModelRunnerContainer modelRunnerContainer = new DockerModelRunnerContainer()
        .withModel(modelName);
modelRunnerContainer.start();

// Point an OpenAI-compatible chat client (here, LangChain4j's OpenAiChatModel)
// at the container's OpenAI endpoint.
OpenAiChatModel model = OpenAiChatModel.builder()
        .baseUrl(modelRunnerContainer.getOpenAIEndpoint())
        .modelName(modelName)
        .logRequests(true)
        .logResponses(true)
        .build();

// Send a prompt to the locally served model and print the response.
String answer = model.chat("Give me a fact about Whales.");
System.out.println(answer);

Docker DX Extension in Visual Studio Code: Catch issues early, code with confidence

The Docker DX Extension is now live on the Visual Studio Marketplace. This extension streamlines your container development workflow with rich editing, linting features, and built-in vulnerability scanning. You’ll get inline warnings and best-practice recommendations for your Dockerfiles, powered by Build Check — a feature we introduced last year. 

It also flags known vulnerabilities in container image references, helping you catch issues early in the dev cycle. For Bake files, it offers completion, variable navigation, and inline suggestions based on your Dockerfile stages. And for those managing complex Docker Compose setups, an outline view makes it easier to navigate and understand services at a glance.


Figure 3: The Docker DX Extension in Visual Studio Code provides actionable recommendations for fixing vulnerabilities and optimizing Dockerfiles

Read more about this in our announcement blog and GitHub repo. Get started today by installing Docker DX from the Visual Studio Marketplace.

macOS QEMU virtualization option deprecation

The QEMU virtualization option in Docker Desktop for Mac will be deprecated on July 14, 2025.

With the new Apple Virtualization Framework, you’ll experience improved performance, stability, and compatibility with macOS updates as well as tighter integration with Apple Silicon architecture. 

What this means for you:

  • If you’re using QEMU as your virtualization backend on macOS, you’ll need to switch to either Apple Virtualization Framework (default) or Docker VMM (beta) options.
  • This does NOT affect QEMU’s role in emulating non-native architectures for multi-platform builds.
  • Your multi-architecture builds will continue to work as before.

For complete details, please see our official announcement.

Introducing Docker Desktop in the Microsoft Store

Docker Desktop is now available for download from the Microsoft Store! We’re rolling out an EXE-based installer for Docker Desktop on Windows. This new distribution channel provides an enhanced installation and update experience for Windows users while simplifying deployment management for IT administrators across enterprise environments.

Key benefits

For developers:

  • Automatic Updates: The Microsoft Store handles all update processes automatically, ensuring you’re always running the latest version without manual intervention.
  • Streamlined Installation: Experience a more reliable setup process with fewer startup errors.
  • Simplified Management: Manage Docker Desktop alongside your other applications in one familiar interface.

For IT admins: 

  • Native Intune MDM Integration: Deploy Docker Desktop across your organization with Microsoft’s native management tools.
  • Centralized Deployment Control: Roll out Docker Desktop more easily through the Microsoft Store’s enterprise distribution channels.
  • Automatic Updates Regardless of Security Settings: Updates are handled automatically by the Microsoft Store infrastructure, even in organizations where users don’t have direct store access.
  • Familiar Process: The update mechanism maps to the winget command, providing consistency with other enterprise software management tools.

This new distribution option represents our commitment to improving the Docker experience for Windows users while providing enterprise IT teams with the management capabilities they need.

Unlock greater flexibility: Enable ACH transfer as a payment option for self-serve customers

We’re focused on making it easier for teams to scale, grow, and innovate, all on their own terms. That’s why we’re excited to announce an upgrade to the self-serve purchasing experience: customers can pay via ACH transfer starting on 4/30/25.

Historically, self-serve purchases were limited to credit card payments, forcing many customers who could not use credit cards into manual sales processes, even for small seat expansions. With the introduction of an ACH transfer payment option, customers can choose the payment method that works best for their business. Fewer delays and less unnecessary friction.

This payment option upgrade empowers customers to:

  • Purchase more independently without engaging sales
  • Choose between credit card or ACH transfer with a verified bank account

By empowering enterprises and developers, we’re freeing up your time, and ours, to focus on what matters most: building, scaling, and succeeding with Docker.

Visit our documentation to explore the new payment options, or log in to your Docker account to get started today!

Wrapping up 

With Docker Desktop 4.41, we’re continuing to meet developers where they are — making it easier to build, test, and ship innovative apps, no matter your stack or setup. Whether you’re pushing AI models to Docker Hub, catching issues early with the Docker DX Extension, or enjoying faster virtualization on macOS, these updates are all about helping you do your best work with the tools you already know and love. We can’t wait to see what you build next!

Learn more

Docker Desktop 4.40: Model Runner to run LLMs locally, more powerful Docker AI Agent, and expanded AI Tools Catalog

By: Yiwen Xu
April 1, 2025 at 16:46

At Docker, we’re focused on making life easier for developers and teams building high-quality applications, including those powered by generative AI. That’s why, in the Docker Desktop 4.40 release, we’re introducing new tools that simplify GenAI app development and support secure, scalable development. 

Keep reading to find updates on new tooling like Model Runner and a more powerful Docker AI Agent with MCP capabilities. Plus, with the AI Tool Catalog, teams can now easily build smarter AI-powered applications and agents with MCPs. And with Docker Desktop Setting Reporting, admins now get greater visibility into compliance and policy enforcement.

Docker Model Runner (Beta): Bringing local AI model execution to developers 

Now in beta with Docker Desktop 4.40, Docker Model Runner makes it easier for developers to run AI models locally. No extra setup, no jumping between tools, and no need to wrangle infrastructure. This first iteration is all about helping developers quickly experiment and iterate on models right from their local machines.

The beta includes three core capabilities:

  • Local model execution, right out of the box
  • GPU acceleration on Apple Silicon for faster performance
  • Standardized model packaging using OCI Artifacts

Powered by llama.cpp and accessible via the OpenAI API, the built-in inference engine makes running models feel as simple as running a container. On Mac, Model Runner uses host-based execution to tap directly into your hardware — speeding things up with zero extra effort.

Models are also packaged as OCI Artifacts, so you can version, store, and ship them using the same trusted registries and CI/CD workflows you already use. Check out our docs for more detailed info!
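
For example, pulling and running a model looks much like working with images (the model name below is illustrative; browse the ai namespace on Docker Hub for available models):

docker model pull ai/smollm2
docker model run ai/smollm2 "Explain OCI Artifacts in one sentence."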

Figure 1: Using Docker Model Runner and CLI commands to experiment with models locally

This release lays the groundwork for what’s ahead: support for additional platforms like Windows with GPU, the ability to customize and publish your own models, and deeper integration into the development loop. We’re just getting started with Docker Model Runner and look forward to sharing even more updates and enhancements in the coming weeks.

Docker AI Agent: Smarter and more powerful with MCP integration + AI Tool Catalog

Our vision for the Docker AI Agent is simple: be context-aware, deeply knowledgeable, and available wherever developers build. With this release, we’re one step closer! The Docker AI Agent is now even more capable, making it easier for developers to tap into the Docker ecosystem and streamline their workflows beyond Docker. 

Your trusted AI Agent for all things Docker 

The Docker AI Agent now has built-in support for new, popular developer capabilities like:

  • Running shell commands
  • Performing Git operations
  • Downloading resources
  • Managing local files

Thanks to a Docker Scout integration, we also now support other tools from the Docker ecosystem, such as performing security analysis on your Dockerfiles or images. 

Expanding the Docker AI Agent beyond Docker 

The Docker AI Agent now fully embraces the Model Context Protocol (MCP). This new standard for connecting AI agents and models to external data and tools makes them more powerful and tailored to specific needs. In addition to acting as an MCP client, many of Docker AI Agent’s capabilities are now exposed as MCP servers. This means you can interact with the agent in the Docker Desktop GUI or CLI, or from your favorite MCP client, such as Claude Desktop or Cursor.

Figure 2: Extending Docker AI Agent’s capabilities with many tools, including the MCP Catalog. 

AI Tool Catalog: Your launchpad for experimenting with MCP servers

Thanks to the AI Tool Catalog extension in Docker Desktop, you can explore different MCP servers and seamlessly connect the Docker AI Agent to other tools, or connect other LLMs to the Docker ecosystem. No more manually configuring multiple MCP servers! We’ve also added secure handling and injection of MCP servers’ secrets, such as API keys, to simplify logins and credential management.

The AI Tool Catalog includes containerized servers that have been pushed to Docker Hub, and we’ll continue to expand the selection. If you’re working in this space or have an MCP server that you’d like to distribute, please reach out in our public GitHub repo. To install the AI Tool Catalog, go to the Extensions menu in Docker Desktop or install it directly from Docker Hub.

Figure 3: Explore and discover MCP servers in the AI Tools Catalog extension in Docker Desktop

Bring compliance into focus with Docker Desktop Settings Reporting

Building on the Desktop Settings Management capabilities introduced in Docker Desktop 4.36, Docker Desktop 4.40 brings robust compliance reporting for Docker Business customers. This new powerful feature gives administrators comprehensive visibility into user compliance with assigned settings policies across the organization.

Key benefits

  • Real-time compliance tracking: Easily monitor which users are compliant with their assigned settings policies. This allows administrators to quickly identify and address non-compliant systems and users.
  • Streamlined troubleshooting: Detailed compliance status information helps administrators diagnose why certain users might be non-compliant, reducing resolution time and IT overhead.

Figure 4: Desktop settings reporting provides an overview of policy assignment and compliance status, helping organizations stay compliant. 

Get started with Docker Desktop Settings Reporting

The Desktop Settings Reporting dashboard is currently being rolled out through Early Access. Administrators can see which settings policies are assigned to each user and whether those policies are being correctly applied.

Soon, administrators will be able to access the reporting dashboard by navigating to the Admin Console > Docker Desktop > Reporting. The dashboard provides a clear view of all users’ compliance status, with options to:

  • Search by username or email address
  • Filter by assigned policies
  • Toggle visibility of compliant users to focus on potential issues
  • View detailed compliance information for specific users
  • Download comprehensive compliance data as a CSV file

The dashboard also provides targeted resolution steps for non-compliant users to help administrators quickly address issues and ensure organizational compliance.

This new reporting capability underscores Docker’s commitment to providing enterprise-grade management tools that simplify administration while maintaining security and compliance across diverse development environments. Learn more about Desktop settings reporting here.

Wrapping up 

Docker is expanding its AI tooling to simplify application development and improve team workflows. New additions like Model Runner, the Docker AI Agent with MCP server and client support, and the AI Tool Catalog extension in Docker Desktop help streamline how developers build with AI. We continue to make enterprise tools more useful and robust, giving admins better visibility into compliance and policy enforcement through Docker Desktop Settings Reporting. We can’t wait to see what you build next!

Learn more

Desktop 4.39: Smarter AI Agent, Docker Desktop CLI in GA, and Effortless Multi-Platform Builds

By: Yiwen Xu
March 6, 2025 at 18:29

Developers need a fast, secure, and reliable way to build, share, and run applications — and Docker makes that easy. With the Docker Desktop 4.39 release, we’re excited to announce a few developer productivity enhancements, including Docker AI Agent with Model Context Protocol (MCP) and Kubernetes support, general availability of the Docker Desktop CLI, and --platform flag support for more seamless multi-platform image management.

Docker AI Agent: Smarter, more capable, and now with MCP & Kubernetes

In our last release, we introduced the Docker AI Agent in beta as an AI-powered, context-aware assistant built into Docker Desktop and the CLI. It simplifies container management, troubleshooting, and workflows with guidance and automation. And the response has been incredible: a 9x increase in weekly active users. With each Docker Desktop release, we’re making Docker AI Agent smarter, more helpful, and more versatile across developer container workflows. And if you’re using Docker for GitHub Copilot, you’ll get these upgrades automatically — so you’re always working with the latest and greatest.

Docker AI Agent now supports Model Context Protocol (MCP) and Kubernetes, along with usability upgrades like multiline prompts and easy copying. The agent can now also interact with the Docker Engine to list and clean up containers, images, and volumes. Plus, with access to the Kubernetes cluster, Docker AI Agent can list namespaces, deploy and expose an Nginx service, for example, and analyze pod logs.

How Docker AI Agent Uses MCP

MCP is a new standard for connecting AI agents and models to external data and tools. It lets AI-powered apps and agents retrieve data and information from external sources, perform operations with third-party services, and interact with local filesystems, unlocking new and expanded capabilities. MCP introduces the concepts of MCP clients and MCP servers: clients request resources, and servers handle those requests and perform the requested actions.

The Docker AI Agent acts as an MCP client and can interact with MCP servers running as containers. When you run the docker ai command in the terminal or ask a question in the Docker Desktop AI Agent window, the agent looks for a gordon-mcp.yml file in the working directory listing the MCP servers to use in that context.

To make MCP adoption easier and more secure, Docker has collaborated with Anthropic to build container images for the reference implementations of MCP servers, available on Docker Hub under the mcp namespace. Check out our docs for examples of using MCP with Docker AI Agent. 
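
As a sketch, a gordon-mcp.yml file follows the familiar Compose format; here it wires in the reference time server from the mcp namespace (server choice illustrative):

services:
  time:
    image: mcp/time

With that file in the working directory, a question like docker ai "what time is it in Tokyo?" lets the agent answer using the time server.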

Containerizing apps in multiple popular languages: More coming soon

Docker AI Agent is also more capable and can now support containerizing applications in new programming languages, including:

  • JavaScript/TypeScript applications using npm, pnpm, yarn, and bun
  • Go applications using Go modules
  • Python applications using pip, poetry, and uv
  • C# applications using NuGet

Try it out — just ask, “Can you containerize my application?” 

Once the agent runs through steps such as determining the number of services in the project, the language, the package manager, and other information relevant to containerization, it generates the Docker-related assets: an optimized Dockerfile, a Docker Compose file, a .dockerignore file, and a README to jumpstart your application with Docker.

More language and package manager support will be available soon!

Figure 1: Docker AI Agent helps with containerizing your app and shows steps of its work

No need to write scripts, just ask Docker AI Agent

The Docker AI Agent also comes with built-in capabilities such as interfacing with containers, images, and volumes. Instead of writing scripts, you can simply ask in natural language for complex operations, for example, combining various servers to find and clean up unused images.

Figure 2: Finding and optimizing unused images storage with a simple ask to Docker AI Agent

Docker Desktop CLI: Now in GA

With the Docker Desktop 4.37 release, we introduced the Docker Desktop CLI controller in Beta, a command-line tool to manage Docker Desktop. In addition to performing tasks like starting, stopping, restarting, and checking the status of Docker Desktop directly from the command line, developers can also print logs and update to the latest version of Docker Desktop. 

Docker meets developers where they work — whether in the CLI or GUI. With the Docker Desktop CLI, developers can seamlessly switch between GUI and command-line workflows, tailoring their workflows to their needs. 

This feature lets you automate Docker Desktop operations in CI/CD pipelines, expedites troubleshooting directly from the terminal, and creates a smoother, distraction-free workflow. IT admins also benefit from this feature; for example, they can use these commands in automation scripts to manage updates. 
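
In practice, managing Docker Desktop from the terminal looks like this (a few representative commands; check the Docker Desktop CLI documentation for the full list):

docker desktop status
docker desktop restart
docker desktop update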

Improve multi-platform image management with the new --platform flag 

Containerized applications often need to run across multiple architectures, making efficient platform-specific image management essential. To simplify this, we’ve introduced a --platform flag for docker save, docker load, and docker history. This addition lets developers explicitly select and manage images for specific architectures like linux/amd64, linux/arm64, and more.

The new --platform flag gives you full control over platform variants when saving or loading. For example, exporting only the linux/arm64 version of an image is now as simple as running:

docker save --platform linux/arm64 -o my-image.tar my-app:latest

Similarly, docker load --platform linux/amd64 ensures that only the amd64 variant is imported from a multi-architecture archive, reducing ambiguity and improving cross-platform workflows. For debugging and optimization, docker history --platform provides detailed insights into the build history of a specific architecture.
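
Continuing the example above, loading that archive and inspecting the variant’s build history might look like this (flag behavior as described in this release):

docker load --platform linux/arm64 -i my-image.tar
docker history --platform linux/arm64 my-app:latest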

These enhancements streamline multi-platform development by giving developers full control over how they build, store, and distribute images. 

Head over to our history, load, and save documentation to learn more! 

Wrapping up 

Docker Desktop 4.39 reinforces our commitment to streamlining the developer experience. With Docker AI Agent’s expanded support for MCP, Kubernetes, built-in capabilities of interacting with containers, and more, developers can simplify and customize their workflow. They can also seamlessly switch between the GUI and command-line, while creating automations with the Docker Desktop CLI. Plus, with the new --platform flag, developers now have full control over how they build, store, and distribute images. 

Less friction, more flexibility — we can’t wait to see what you build next!

Authenticate and update today to receive your subscription level’s newest Docker Desktop features.

Learn more

Docker Desktop 4.38: New AI Agent, Multi-Node Kubernetes, and Bake in GA

By: Yiwen Xu
February 5, 2025 at 21:42

At Docker, we’re committed to simplifying the developer experience and empowering enterprises to scale securely and efficiently. With the Docker Desktop 4.38 release, teams can look forward to improved developer productivity and enterprise governance. 

We’re excited to announce the General Availability of Bake, a powerful feature for optimizing build performance and multi-node Kubernetes testing to help teams “shift left.” We’re also expanding availability for several enterprise features designed to boost operational efficiency. And last but not least, Docker AI Agent (formerly Project: Agent Gordon) is now in Beta, delivering intelligent, real-time Docker-related suggestions across Docker CLI, Desktop, and Hub. It’s here to help developers navigate Docker concepts, fix errors, and boost productivity.

Docker’s AI Agent boosts developer productivity  

We’re thrilled to introduce Docker AI Agent (also known as Project: Agent Gordon) — an embedded, context-aware assistant seamlessly integrated into the Docker suite. Available within Docker Desktop and the CLI, this innovative agent delivers real-time, tailored guidance for tasks like container management and Docker-specific troubleshooting — eliminating disruptive context-switching. The Docker AI Agent can be used for every Docker-related concept and technology, whether you’re getting started, optimizing an existing Dockerfile or Compose file, or understanding Docker technologies in general. By addressing challenges precisely when and where developers encounter them, Docker AI Agent ensures a smoother, more productive workflow.

The first iteration of Docker’s AI Agent is now available in Beta for all signed-in users. The agent is disabled by default, so user activation is required. Read more about Docker’s new AI Agent and how to use it to accelerate developer velocity here.

Figure 1: Asking questions to Docker AI Agent in Docker Desktop

Simplify build configurations and boost performance with Docker Bake

Docker Bake is an orchestration tool that simplifies and speeds up Docker builds. After launching as an experimental feature, we’re thrilled to make it generally available with exciting new enhancements.

While Dockerfiles are great for defining build steps, teams often juggle docker build commands with various options and arguments — a tedious and error-prone process. Bake changes the game by introducing a declarative file format that consolidates all options and image dependencies (also known as targets) in one place. No more passing flags to every build command! Plus, Bake’s ability to parallelize and deduplicate work ensures faster and more efficient builds.
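
As a minimal sketch (target and tag names illustrative), a docker-bake.hcl file consolidates those flags declaratively:

group "default" {
  targets = ["app"]
}

target "app" {
  context = "."
  dockerfile = "Dockerfile"
  tags = ["my-app:latest"]
  platforms = ["linux/amd64", "linux/arm64"]
}

Running docker buildx bake then builds everything in the default group, in parallel where possible.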

Key benefits of Docker Bake

  • Simplicity: Abstract complex build configurations into one simple command.
  • Flexibility: Write build configurations in a declarative syntax, with support for custom functions, matrices, and more.
  • Consistency: Share and maintain build configurations effortlessly across your team.
  • Performance: Bake parallelizes multi-image workflows, enabling faster and more efficient builds.

Developers can simplify multi-service builds by integrating Bake directly into their Compose files — Bake supports Compose files natively. It enables easy, efficient building of multiple images from a single repository with shared configurations. Plus, it works seamlessly with Docker Build Cloud locally and in CI. With Bake-optimized builds as the foundation, developers can achieve more efficient Docker Build Cloud performance and faster builds.

Learn more about streamlining build configurations, boosting performance, and improving team workflows with Bake in our announcement blog.

Shift Left with Multi-Node Kubernetes testing in Docker Desktop

In today’s complex production environments, “shifting left” is more essential than ever. By addressing concerns earlier in the development cycle, teams reduce costs and simplify fixes, leading to more efficient workflows and better outcomes. That’s why we continue to bring new features and enhancements that integrate feedback directly into the developer’s inner loop.

Docker Desktop now includes Multi-Node Kubernetes integration, enabling easier and more extensive testing directly on developers’ machines. While single-node clusters allow for quick verification of app deployments, they fall short when it comes to testing resilience and handling the complex, unpredictable issues of distributed systems. To tackle this, we’re updating our Kubernetes distribution with kind — a lightweight, fast, and user-friendly solution for local testing and multi-node cluster simulation.

Figure 2: Selecting Kubernetes version and cluster number for testing

Key Benefits:

  • Multi-node cluster support: Replicate a more realistic production environment to test critical features like node affinity, failover, and networking configurations.
  • Multiple Kubernetes versions: Easily test across different Kubernetes versions, which is a must for validating migration paths.
  • Up-to-date maintenance: Since kind is an actively maintained open-source project, developers can update to the latest version on demand without waiting for the next Docker Desktop release.

Head over to our documentation to discover how to use multi-node Kubernetes clusters for local testing and simulation.
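
Once a multi-node cluster is enabled, a quick sanity check from the terminal (assuming kubectl is pointed at the Docker Desktop cluster) confirms that all nodes are up:

kubectl get nodes

You should see one control-plane node plus the worker nodes you configured.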

General availability of administration features for Docker Business subscription

With the Docker Desktop 4.36 release, we introduced Beta enterprise admin tools to streamline administration, improve security, and enhance operational efficiency. And the feedback from our Early Access Program customers has been overwhelmingly positive. 

For instance, enforcing sign-in with macOS configuration files and across multiple organizations makes deployment easier and more flexible for large enterprises. Also, the PKG installer simplifies managing large-scale Docker Desktop deployments on macOS by eliminating the need to convert DMG files into PKG first.

Today, these features are generally available to all Docker Business customers.

Looking ahead, Docker is dedicated to continue expanding enterprise administration capabilities. Stay tuned for more announcements!

Wrapping up 

Docker Desktop 4.38 reinforces our commitment to simplifying the developer experience while equipping enterprises with robust tools. 

With Bake now in GA, developers can streamline complex build configurations into a single command. The new Docker AI Agent offers real-time, on-demand guidance within their preferred Docker tools. Plus, with Multi-node Kubernetes testing in Docker Desktop, they can replicate realistic production environments and address issues earlier in the development cycle. Finally, we made a few new admin tools available to all our Business customers, simplifying deployment, management, and monitoring. 

We look forward to how these innovations accelerate your workflows and supercharge your operations! 

Learn more

How Docker Streamlines the Onboarding Process and Sets Up Developers for Success

By: Yiwen Xu
January 22, 2025 at 14:00

Nearly half (45%) of developers say they don’t have enough time for learning and development, according to a developer experience research study by Harness and Wakefield Research. Additionally, developer onboarding is a slow and painful process, with 71% of executive buyers saying that onboarding new developers takes at least two months. 

To accelerate innovation and bring products to market faster, organizations must empower developers with robust support and intuitive guardrails, enabling them to succeed within a structured yet flexible environment. That’s where Docker fits in: We help developers onboard quickly and help organizations set up the right guardrails to give developers the flexibility to innovate within the boundaries of company policies. 

Setting up developer teams for success 

Docker is recognized as one of the most used, desired, and admired developer tools, making it an essential component of any development team’s toolkit. Developers who are new to Docker can get up and running quickly with Docker’s integrated development workflows, verified secure content, and accessible learning resources and community support.

Streamlined developer onboarding

When new developers join a team, Docker Desktop can significantly reduce the time and effort required to set up their development environments. Docker Desktop integrates seamlessly with popular IDEs, such as Visual Studio Code, allowing developers to containerize directly within familiar tools, accelerating learning within their usual workflows. Docker Extensions expand Docker Desktop’s capabilities and establish new functionalities, integrating developers’ favorite development tools into their application development and deployment workflows. 

Developers can also use Docker for GitHub Copilot for seamless onboarding with assistance for containerizing applications, generating Docker assets, and analyzing project vulnerabilities. In fact, the Docker extension is a top choice among developers in GitHub Copilot’s extension leaderboard, as highlighted by Visual Studio Magazine.

Docker Build Cloud integrates with Docker Compose and CI workflows, making it a seamless transition for dev teams. Verified content on Docker Hub gives developers preconfigured, trusted images, reducing setup time and ensuring a secure foundation as they onboard onto projects. 

Docker Scout provides actionable insights and recommendations, allowing developers to enhance their container security awareness, scan for vulnerabilities, and improve security posture with real-time feedback. And, Testcontainers Cloud lets developers run reliable integration tests, with real dependencies defined in code. With these tools, developers can be confident about delivering high-quality and reliable apps and experiences in production.  

Continuous learning with accessible knowledge resources

Continuous learning is a priority for Docker, with a wide range of accessible resources and tools designed to help developers deepen their knowledge and stay current in their containerization journey.

Docker Docs offers beginner-friendly guides, tutorials, and AI tools to guide developers through foundational concepts, empowering them to quickly build their container skills. Our collection of guides takes developers step by step to learn how Docker can optimize development workflows and how to use it with specific languages, frameworks, or technologies.

Docker Hub’s AI Catalog empowers developers to discover, pull, and integrate AI models into their workflows, bridging the gap between innovation and implementation. 

Docker also offers regular webinars and tech talks that help developers stay updated on new features and best practices and provide a platform to discuss real-world challenges. If you’re a Docker Business customer, you can even request additional, customized training from our Docker experts. 

Docker’s partnerships with educational platforms and organizations, such as Udemy Training and LinkedIn Learning, ensure developers have access to comprehensive training — from beginner tutorials to advanced containerization topics.

Docker’s global developer community

One of Docker’s greatest strengths is its thriving global developer community, offering organizations a unique advantage by connecting them with a wealth of shared expertise, resources, and real-world solutions.

With more than 20 million monthly active users, Docker’s community forums and events foster vibrant collaboration, giving developers access to a collective knowledge base that spans industries and expertise levels. Developers can ask questions, solve challenges, and gain insights from a diverse range of peers — from beginners to seasoned experts. Whether you’re troubleshooting an issue or exploring best practices, the Docker community ensures you’re never working in isolation.

A key pillar of this ecosystem is the Docker Captains program — a network of experienced and passionate Docker advocates who are leaders in their fields. Captains share technical knowledge through blog posts, videos, webinars, and workshops, giving businesses and teams access to curated expertise that accelerates onboarding and productivity.

Beyond forums and the Docker Captains program, Docker’s community-driven events, such as meetups and virtual workshops (Figure 1), provide developers with direct access to real-world use cases, innovative workflows, and emerging trends. These interactions foster continuous learning and help developers and their organizations keep pace with the ever-evolving software development landscape.

Figure 1: Docker DevTools Day 1.0 Meetup in Singapore.

For businesses, tapping into Docker’s extensive community means access to a vast pool of knowledge, support, and inspiration, which is a critical asset in driving developer productivity and innovation.

Empowering developers with enhanced user management and security

In previous articles, we looked at how Docker simplifies complexity and boosts developer productivity (the right tool) and how to unlock efficiency with Docker for AI and cloud-native development (the right process).

To scale and standardize app development processes across the entire company, you also need to have the right guardrails in place for governance, compliance, and security, which is often handled through enterprise control and admin management tools. Ideally, organizations provide guardrails without being overly prescriptive and slowing developer productivity and innovation. 

Modern enterprises require a layered security approach, beginning with trusted content as the foundation for building robust and compliant applications. This approach gives your dev teams a good foundation for building securely from the start. 

Throughout the software development process, you need a secure platform. For regulated industries like finance and public sectors, this means fortified dev environments. Security vulnerability analysis and policy evaluation tools also help inform improvements and remediation. 

Additionally, you need enterprise controls and dashboards that ensure enterprise IT and security teams can confidently monitor and manage risk. 

Setting up the right guardrails 

Docker provides a number of admin tools to safeguard your software with integrated container security in the Docker Business plan. Our goal is to improve security and compliance of developer environments with minimal impact on developer experience or productivity. 

Centralized settings for improved dev environments security 

Docker provides developer teams with access to a vast library of trusted and certified application content, including Docker Official Images, Docker Verified Publisher, and Docker Trusted Open Source content. Coupled with advanced image and registry management rules — with tools like Image Access Management and Registry Access Management — you can ensure that your developers only use software that satisfies your company’s security policies. 

With a solid foundation to build securely from the start, your organization can further enhance security throughout the software development process. Docker ensures software supply chain integrity through vulnerability scanning and image analysis with Docker Scout. Rapid remediation capabilities paired with detailed CVE reporting help developers quickly find and fix vulnerabilities, resulting in speedy time to resolution.
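
For example, a quick local check with the Docker Scout CLI might look like this (image name illustrative):

docker scout quickview my-app:latest
docker scout cves my-app:latest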

Although containers are generally secure, container development tools still must be properly secured to reduce the risk of security breaches in the developer’s environment. Hardened Docker Desktop is an example of Docker’s fortified development environments with enhanced container isolation. It lets you enforce strict security settings and prevent developers and their containers from bypassing these controls. With air-gapped containers, you can further restrict containers from accessing network resources, limiting where data can be uploaded to or downloaded from.
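
As a sketch of how such controls are enforced, Settings Management uses an admin-settings.json file; the fragment below locks on Enhanced Container Isolation (key names per Docker’s Settings Management docs; check the current docs before deploying):

{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "value": true,
    "locked": true
  }
}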

Continuous monitoring and managing risks

With the Admin Console and Docker Desktop Insights, IT administrators and security teams can visualize and understand how Docker is used within their organizations and manage the implementation of organizational configurations and policies (Figure 2). 

These insights help teams streamline processes and improve efficiency. For example, you can enforce sign-in for developers who don’t sign in to an account associated with your organization. This step ensures that developers receive the benefits of your Docker subscription and work within the boundaries of the company policies. 

Figure 2: Docker Desktop Insights Dashboard provides information on product usage.

For business and engineering leaders, full visibility and governance over the development process help ensure compliance and mitigate risk while driving developer productivity. 

Unlock innovation with Docker’s development suite

Docker is the leading suite of tools purpose-built for cloud-native development, combining a best-in-class developer experience with enterprise-grade security and governance. With Docker, your organization can streamline onboarding, foster innovation, and maintain robust compliance — all while empowering your teams to deliver impactful solutions to market faster and more securely. 

Explore the Docker Business plan today and unlock the full potential of your development processes.

Learn more

Unlocking Efficiency with Docker for AI and Cloud-Native Development

By: Yiwen Xu
January 8, 2025 at 14:22

The need for secure and high quality software becomes more critical every day as the impact of vulnerabilities increases and related costs continue to rise. For example, flawed software cost the U.S. economy $2.08 trillion in 2020 alone, according to the Consortium for Information and Software Quality (CISQ). And, a software defect that might cost $100 to fix if found early in the development process can grow exponentially to $10,000 if discovered later in production. 

Docker helps you deliver secure, efficient applications by providing consistent environments and fast, reliable container management, building on best practices that let you discover and resolve issues earlier in the software development life cycle (SDLC).

Shifting left to ensure fewer defects

In a previous blog post, we talked about using the right tools, including Docker’s suite of products to boost developer productivity. Besides having the right tools, you also need to implement the right processes to optimize your software development and improve team productivity. 

The software development process is typically broken into two distinct loops, the inner and the outer loops. At Docker, we believe that investing in the inner loop is crucial. This means shifting security left and identifying problems as soon as you can. This approach improves efficiency and reduces costs by helping teams find and fix software issues earlier.

Using Docker tools to adopt best practices

Docker’s products help you adopt these best practices — we are focused on enhancing the software development lifecycle, especially around refining the inner loop. Products like Docker Desktop let your dev team run, test, code, and build quickly and consistently in the inner loop. This consistency eliminates the “it works on my machine” issue, meaning applications behave the same in both development and production.

Shifting left lets your dev team identify problems earlier in your software project lifecycle. When you detect issues sooner, you increase efficiency and help ensure secure builds and compliance. By shifting security left with Docker Scout, your dev teams can identify vulnerabilities sooner and help avoid issues down the road. 

Another example of shifting left involves testing — doing testing earlier in the process leads to more robust software and faster release cycles. This is when Testcontainers Cloud comes in handy because it enables developers to run reliable integration tests, with real dependencies defined in code. 

Accelerate development within the hybrid inner loop

We see more and more companies adopting the so-called hybrid inner loop, which combines the best of two worlds — local and cloud. The results provide greater flexibility for your dev teams and encourage better collaboration. For example, Docker Build Cloud uses the power of the cloud to speed up build time without sacrificing the local development experience that developers love. 

By using these Docker products across the software development life cycle, teams get quick feedback loops and faster issue resolution, ensuring a smooth development flow from inception to deployment. 

Simplifying AI application development

When you’re using the right tools and processes to accelerate your application delivery and maximize efficiency throughout your SDLC, processes that were once cumbersome become your new baseline, freeing up time for true innovation. 

Docker also helps accelerate innovation by simplifying AI/ML development. We are continually investing in AI to help your developers deliver AI-backed applications that differentiate your business and enhance competitiveness.

Docker AI tools

Docker’s GenAI Stack accelerates the incorporation of large language models (LLMs) and AI/ML into your code, enabling the delivery of AI-backed applications. All containers work harmoniously and are managed directly from Docker Desktop, allowing your team to monitor and adjust components without leaving their development environment. Deploying the GenAI Stack is quick and easy, and leveraging Docker’s containerization technology helps speed setup and simplify scaling as applications grow.
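
Getting the stack running locally is a short exercise (commands approximate; see the README in Docker’s genai-stack GitHub repository for current instructions):

git clone https://github.com/docker/genai-stack
cd genai-stack
docker compose up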

Earlier this year, we announced the preview of Docker Extension for GitHub Copilot. By standardizing best practices and enabling integrations with tools like GitHub Copilot, Docker empowers developers to focus on innovation, closing the gap from the first line of code to production.

And, more recently, we launched the Docker AI Catalog in Docker Hub. This new feature simplifies the process of integrating AI into applications by providing trusted and ready-to-use content supported by comprehensive documentation. Your dev team will benefit from shorter development cycles, improved productivity, and a more streamlined path to integrating AI into both new and existing applications.

Wrapping up

Docker products help you establish sound processes and practices related to shifting left and discovering issues earlier to avoid headaches down the road. This approach ultimately unlocks developer productivity, giving your dev team more time to code and innovate. Docker also allows you to quickly use AI to close knowledge gaps and offers trusted tools to build AI/ML applications and accelerate time to market. 

To see how Docker continues to empower developers with the latest innovations and tools, check out our Docker 2024 Highlights.

Learn about Docker’s updated subscriptions and find the ideal plan for your team’s needs.

Learn more

Docker Desktop 4.37: AI Catalog and Command-Line Efficiency

By: Yiwen Xu
December 18, 2024 at 17:37

The Docker Desktop 4.37 release brings incremental improvements that make developers’ lives easier by addressing common challenges in modern software development. With a focus on integrating AI resources and streamlining operational workflows, this update ensures developers can work faster, smarter, and more effectively.

Unlocking AI-driven development with Docker AI Catalog integration

AI/ML development is exploding, but many developers face hurdles accessing prebuilt AI models and tools. They often need to search across multiple platforms, wasting valuable time piecing together resources and overcoming compatibility issues. This fragmentation slows down innovation and makes it harder for teams to bring AI-driven features into their applications.

With Docker Desktop 4.37, the AI Catalog in Docker Hub is now accessible directly through Docker Desktop. This seamless integration enables developers to discover, pull, and integrate AI models into their workflows effortlessly. Whether you’re incorporating pretrained machine learning models or exploring generative AI tools, Docker Desktop ensures these resources are just a click away.

Figure 1: AI Catalog in Docker Hub is now accessible directly through Docker Desktop.

Key benefits:

  • Streamlined discovery: You don’t need to leave your development environment to find AI tools. The AI Catalog is built into Docker Hub and can be immediately accessed from Docker Desktop.
  • Faster prototyping: By eliminating friction in accessing AI resources, teams can focus on building and iterating faster.
  • Enhanced compatibility: Docker’s containerized approach ensures AI models run consistently across environments, reducing setup headaches.

Whether you’re developing cutting-edge AI/ML applications or just beginning to experiment with AI tools, this integration empowers developers to innovate without distraction.

Command-line operations: Control Docker Desktop your way

For developers who automate workflows or work heavily in terminal environments, relying solely on graphical user interfaces (GUIs) can be limiting. Starting, stopping, or troubleshooting Docker Desktop often requires GUI navigation, which can disrupt automation pipelines and slow down power users.

Docker Desktop 4.37 introduces robust command-line capabilities for managing Docker Desktop itself. Developers can now perform essential tasks such as starting, stopping, restarting, and checking the status of Docker Desktop directly from the command line.

Key benefits:

  • Improved automation: Script Docker Desktop operations into CI/CD workflows, eliminating manual intervention.
  • Faster troubleshooting: Check the status and restart Docker Desktop without leaving the terminal, streamlining issue resolution.
  • Developer flexibility: A smoother, distraction-free experience for developers who prefer terminal-based workflows.

This new feature bridges the gap between GUI and command-line preferences, allowing developers to tailor their workflows to their needs.

Upgraded components: Keeping developers ahead

Docker Desktop 4.37 includes significant upgrades to its underlying components, bringing enhanced performance, security, and new feature sets such as GPU-accelerated workflows.

Bug fixes and stability improvements

At Docker, we aim to provide a stable and dependable development platform so developer teams can focus on creating, not troubleshooting. Docker Desktop 4.37 also addresses several key bugs and usability concerns:

  • Default disk usage limit: New installations now default to a 1TB disk limit, offering additional flexibility for developers with large containerized applications.
  • Loopback AF_VSOCK connections: Fixed to ensure container communication reliability.
  • CLI context reset fixes: Prevent unintended resets when restoring default settings.
  • Dashboard synchronization: Ensures consistent behavior between the Docker Desktop Dashboard and the Docker daemon after engine restarts.
  • Resource Saver mode stability: Resolves issues with mode reengagement, improving power efficiency for resource-conscious users.

Wrapping up 

Docker Desktop 4.37 offers a step forward in enabling developers to innovate. With a focus on AI-driven development and automation-friendly operations, this release aligns with the evolving needs of modern software teams.

Learn more

From Legacy to Cloud-Native: How Docker Simplifies Complexity and Boosts Developer Productivity

By: Yiwen Xu
December 13, 2024 at 13:30

Modern application development has evolved dramatically. Gone are the days when a couple of developers, a few machines, and some pizza were enough to launch an app. As the industry grew, DevOps revolutionized collaboration, and Docker popularized containerization, simplifying workflows and accelerating delivery. 

Later, DevSecOps brought security into the mix. Fast forward to today, and the demand for software has never been greater, with more than 750 million cloud-native apps expected by 2025.

This explosion in demand has created a new challenge: complexity. Applications now span multiple programming languages, frameworks, and architectures, integrating both legacy and modern systems. Development workflows must navigate hybrid environments — local, cloud, and everything in between. This complexity makes it harder for companies to deliver innovation on time and stay competitive. 

To overcome these challenges, you need a development platform that’s as reliable and ubiquitous as electricity or Wi-Fi — a platform that works consistently across diverse applications, development tools, and environments. Whether you’re just starting to move toward microservices or fully embracing cloud-native development, Docker meets your team where they are, integrates seamlessly into existing workflows, and scales to meet the needs of individual developers, teams, and entire enterprises.

Docker: Simplifying the complex

The Docker suite of products provides the tools you need to accelerate development, modernize legacy applications, and empower your team to work efficiently and securely. With Docker, you can:

  • Modernize legacy applications: Docker makes it easy to containerize existing systems, bringing them closer to modern technology stacks without disrupting operations.
  • Boost productivity for cloud-native teams: Docker ensures consistent environments, integrates with CI/CD workflows, supports hybrid development environments, and enhances collaboration.

Consistent environments: Build once, run anywhere

Docker ensures consistency across development, testing, and production environments, eliminating the dreaded “works on my machine” problem. With Docker, your team can build applications in unified environments — whether on macOS, Windows, or Linux — for reliable code, better collaboration, and faster time to market.

With Docker Desktop, developers have a powerful GUI and CLI for managing containers locally. Integration with popular IDEs like Visual Studio Code allows developers to code, build, and debug within familiar tools. Built-in Kubernetes support enables teams to test and deploy applications on a local Kubernetes cluster, giving developers confidence that their code will perform in production as expected.

Integrated workflows for hybrid environments

Development today spans both local and cloud environments. Docker bridges the gap and provides flexibility with solutions like Docker Build Cloud, which speeds up build pipelines by up to 39x using cloud-based, multi-platform builders. This allows developers to focus more on coding and innovation, rather than waiting on builds.
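
As a sketch (organization and builder names illustrative), pointing a build at a cloud builder is a two-step affair:

docker buildx create --driver cloud myorg/default
docker buildx build --builder cloud-myorg-default --platform linux/amd64,linux/arm64 -t my-app .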

Docker also integrates seamlessly with CI/CD tools like Jenkins, GitLab CI, and GitHub Actions. This automation reduces manual intervention, enabling consistent and reliable deployments. Whether you’re building in the cloud or locally, Docker ensures flexibility and productivity at every stage.

Team collaboration: Better together

Collaboration is central to Docker. With integrations like Docker Hub and other registries, teams can easily share container images and work together on builds. Docker Desktop features like Docker Debug and the Builds view dashboards empower developers to troubleshoot issues together, speeding up resolution and boosting team efficiency.

Docker Scout provides actionable security insights, helping teams identify and resolve vulnerabilities early in the development process. With these tools, Docker fosters a collaborative environment where teams can innovate faster and more securely.

Why Docker?

In today’s fast-paced development landscape, complexity can slow you down. Docker’s unified platform reduces complexity as it simplifies workflows, standardizes environments, and empowers teams to deliver software faster and more securely. Whether you’re modernizing legacy applications, bridging local and cloud environments, or building cutting-edge, cloud-native apps, Docker helps you achieve efficiency and scale at every stage of the development lifecycle.

Docker offers a unified platform that combines industry-leading tools — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud — into a seamless experience. Docker’s flexible plans ensure there’s a solution for every developer and every team, from individual contributors to large enterprises.

Get started today

Ready to simplify your development workflows? Start your Docker journey now and equip your team with the tools they need to innovate, collaborate, and deliver with confidence.

Looking for tips and tricks? Subscribe to Docker Navigator for the latest updates and insights delivered straight to your inbox.

Learn more

Docker at Cloud Expo Asia: GenAI, Security, and New Innovations

By: Yiwen Xu
October 22, 2024 at 15:23

Cloud Expo Asia 2024 in Singapore drew thousands of cloud professionals and tech business leaders to explore and exchange the latest in cloud computing, security, GenAI, sustainability, DevOps, and more. At our Cloud Expo Asia booth, Docker showcased our latest innovations in AI integration, containerization, security best practices, and updated product offerings. Here are a few highlights from our experience at the event.

AI/ML and GenAI everywhere

AI/ML and GenAI were hot topics at Cloud Expo Asia. Docker CPO Giri Sreenivas’s talk on Transforming App Development: Docker’s Advanced Containerization and AI Integration highlighted that GenAI impacts software in two big ways — it accelerates product development and creates new types of products and experiences. He discussed how containers are an ideal way to package GenAI workflows in development, ensuring consistency across CI/CD pipelines and reproducibility across diverse platforms in production.

Docker Chief Product Officer Giri Sreenivas’s talk drew an overflow crowd.

Sreenivas highlighted the Docker extension for GitHub Copilot as an example of how Docker helps development teams focus on innovation — closing the gap from the first line of code to production. He also gave a sneak peek at upcoming products designed to streamline GenAI development, illustrating Docker’s commitment to evolving its solutions to meet emerging needs.

Adopting security best practices and shifting left

Developer efficiency and security were also popular themes at the event. When Sreenivas mentioned in his talk that security vulnerabilities that cost dollars to fix early in development would cost hundreds of dollars later in production, members of the audience nodded in agreement.

Docker CTO Justin Cormack gave a keynote address titled “The Docker Effect: Driving Developer Efficiency and Innovation in a Hybrid World.” He discussed how implementing best practices and investing in the inner loop are crucial for today’s development teams. 

One best practice, for example, is shifting left and identifying problems as quickly as possible in the software development lifecycle. This approach improves efficiency and reduces costs by detecting and addressing software issues earlier before they become expensive problems.

At Docker CTO Justin Cormack’s talk, attendees were eager to snap pictures of every slide.

Cormack also provided a few tips for meeting the security and control needs of modern enterprises with a layered approach. Start with key building blocks, he explained, such as trusted content, which provides dev teams with a good foundation to build securely from the start. 

Figure: a pyramid titled “Modern Enterprises Need a Layered Approach to Security and Control,” building from a secure foundation, to a secure platform, to a secure end product.
Docker CTO Justin Cormack’s recommendations on meeting the security and control needs of modern enterprises.

At the Docker event booth, we demonstrated Docker Scout, which helps development teams identify, analyze, and remediate security vulnerabilities early in the dev process. Docker Business customers can take advantage of enterprise controls, letting admins, IT teams, and security teams continuously monitor and manage risk and compliance with confidence. 

After four hours of demos at the Docker booth, senior software engineer Chase Frankenfeld was still enthusiastically discussing Docker products, while our CEO Scott Johnston listened attentively to an attendee’s questions.

New Docker innovations and updated plan

From students to C-level executives who visited our booth, everyone was eager to learn more about containers and Docker. People lined up to see an end-to-end demo of how the suite of Docker products, such as Docker Desktop, Docker Hub, Docker Build Cloud, and Docker Scout, work together seamlessly to enable development teams to work more efficiently. 

Attendees also had the opportunity to learn more about Docker’s updated plans, which make accessing the full suite of Docker products and solutions easy, with options for individual developers, small teams, and large enterprises.

Senior software engineer Maxime Clement explains Docker’s updated plans and demos Docker products to booth visitors.

Thanks, Cloud Expo Asia!

We enjoyed our conversations with event attendees and appreciate everyone who helped make this such a successful event. Thank you to the organizers, speakers, sponsors, and the community for a productive, information-packed experience.

What’s better than Docker swag? Docker swag in a claw machine.

From accelerating app development and supporting shift-left best practices to meeting the security and control needs of modern enterprises and innovating with GenAI, Docker wants to be your trusted partner in navigating the challenges of modern app development.

Explore our Docker updated plans to learn how Docker can empower your teams, or contact our sales team to discover how we can help you innovate with confidence.

Learn more
