
Securing Model Context Protocol: Safer Agentic AI with Containers

May 6, 2025, 18:38

Model Context Protocol (MCP) tools remain primarily in the hands of early adopters, but broader adoption is accelerating. Alongside this growth, MCP security concerns are becoming more urgent. By increasing agent autonomy, MCP tools introduce new risks related to uncontrolled execution and to misalignment between agent behavior and user expectations. These systems also present a novel attack surface, creating new software supply chain threats. As a result, MCP adoption raises critical questions about trust, isolation, and runtime control before these systems are integrated into production environments.

Where MCP tools fall short on security

Most of us first experimented with MCP tools by configuring files like the one shown below. This workflow is fast, flexible, and productive, which makes it ideal for early experimentation. But it also comes with trade-offs. MCP servers are pulled directly from the internet, executed on the host machine, and configured with sensitive credentials passed as plaintext environment variables. It's like setting off fireworks in your living room: thrilling, but not very safe.

{
  "mcpServers": {
    "mcpserver": {
      "command": "npx",
      "args": [
        "-y",
        "@org/mcp-server",
        "--user", "me"
      ],
      "env": {
        "SECRET_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}

As MCP tools move closer to production use, they force us to confront a set of foundational questions:

Can we trust the MCP server?

Can we guarantee the right software is installed on the host? Without that baseline, reproducibility and reliability fall apart. How do we verify the provenance and integrity of the MCP server itself? If we can’t trace where it came from or confirm what it contains, we can’t trust it to run safely. Even if it runs, how do we know it hasn’t been tampered with — either before it reached us or while it’s executing?

Are we managing secrets and access securely?

Secret management also becomes a pressing concern. Environment variables are convenient, but they’re not secure. We need ways to safely inject sensitive data into only the runtimes permitted to read it and nowhere else. The same goes for access control. As teams scale up their use of MCP tools, it becomes essential to define which agents are allowed to talk to which servers and ensure those rules are enforced at runtime.

Figure 1: Discussions on not storing secrets in .env on Reddit. Credit: amirshk

How do we detect threats early? 

And then there’s the question of detection. Are we equipped to recognize the kinds of threats that are emerging around MCP tools? From prompt injection to malicious server responses, new attack vectors are already appearing. Without purpose-built tooling and clear security standards, we risk walking into these threats blind. Some recent threat patterns include:

  • MCP Rug Pull – A malicious MCP server can perform a “rug pull” by altering a tool’s description after it’s been approved by the user.
  • MCP Shadowing – A malicious server injects a tool description that alters the agent’s behavior toward a trusted service or tool. 
  • Tool Poisoning – Malicious instructions in MCP tool descriptions, hidden from users but readable by AI models.

What’s clear is that the practices that worked for early-stage experimentation won’t scale safely. As adoption grows, the need for secure, standardized mechanisms to package, verify, and run MCP servers becomes critical. Without them, the very autonomy that makes MCP tools powerful could also make them dangerous.

Why Containers for MCP servers

Developers quickly realized that the same container technology used to deliver cloud-native applications is also a natural fit for safely powering agentic systems. Containers aren't just about packaging; they give us a controlled runtime environment where we can add guardrails and build a safer path toward adopting MCP servers.

Making MCP servers portable and secure 

Most of us are familiar with how containers are used to move software around, providing runtime consistency and easy distribution. Containers also provide a strong layer of isolation between workloads, helping prevent one application from interfering with another or with the host system. This isolation limits the blast radius of a compromise and makes it easier to enforce least-privilege access. In addition, containers can provide us with verification of both provenance and integrity. This continues to be one of the important lessons from software supply chain security. Together, these properties help mitigate the risks of running untrusted MCP servers directly on the host.

As a first step, we can use what we already know about cloud native delivery and simply distribute the MCP servers in a container. 

{
  "mcpServers": {
    "mcpserver": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "org/mcpserver:latest",
        "--user", "me"
      ],
      "env": {
        "SECRET_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}

But containerizing the server is only half the story. Developers still need to specify arguments for the MCP server runtime and secrets. If those arguments are misconfigured, or worse, intentionally altered, they could expose sensitive data or make the server unsafe to run.
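Some of this risk can be reduced with guardrails docker run already provides. The sketch below hardens the earlier configuration by pinning the image to a content digest and dropping privileges; the digest is a placeholder, and the --read-only and --network none flags assume this particular server needs neither writable storage nor outbound network access. Secrets are omitted here because they deserve their own treatment, covered in the next section.

{
  "mcpServers": {
    "mcpserver": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--read-only",
        "--cap-drop", "ALL",
        "--network", "none",
        "--security-opt", "no-new-privileges",
        "org/mcpserver@sha256:<pinned-digest>",
        "--user", "me"
      ]
    }
  }
}

Pinning by digest means the runtime verifies that the image content matches exactly what was reviewed, regardless of what the :latest tag points to later.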

In the next section, we’ll cover key design considerations, guardrails, and best practices for mitigating these risks.

Designing secure containerized architectures for MCP servers and clients

Containers provide a solid foundation for securely running MCP servers, but they’re just the beginning. It’s important to consider additional guardrails and designs, such as how to handle secrets, defend against threats, and manage tool selection and authorization as the number of MCP servers and clients increases. 

Secure secrets handling

When these servers require runtime configuration secrets, container-based solutions must provide a secure interface for users to supply that data. Sensitive information like credentials, API keys, or OAuth access tokens should then be injected into only the authorized container runtimes. As with cloud-native deployments, secrets remain isolated and scoped to the workloads that need them, reducing the risk of accidental exposure or misuse.
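One container-native approach is to mount each secret into the container as a read-only file instead of passing it as an environment variable. The sketch below assumes the server can read its key from a file path; the --api-key-file flag is hypothetical and stands in for whatever mechanism a given server actually supports.

{
  "mcpServers": {
    "mcpserver": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--mount", "type=bind,source=/home/me/.mcp/secrets/api_key,target=/run/secrets/api_key,readonly",
        "org/mcpserver:latest",
        "--api-key-file", "/run/secrets/api_key"
      ]
    }
  }
}

With this approach, the secret value never appears in the container's environment or in docker inspect output, and only this container can read the mounted path.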

Defenses against new MCP threats

Many of the emerging threats in the MCP ecosystem involve malicious servers attempting to trick agents and MCP clients into taking actions that conflict with the user's intent. These attacks often begin with poisoned data flowing from the server to the client.

To mitigate this, it's recommended to route all MCP client traffic through a single connection endpoint, an MCP Gateway, or a proxy built on top of containers. Think of MCP servers like passengers at an airport: by establishing one centralized security checkpoint (the Gateway), you ensure that everyone is screened before boarding the plane (the MCP client). This Gateway becomes the critical interface where threats like MCP Rug Pull Attacks, MCP Shadowing, and Tool Poisoning can be detected early and stopped. Mitigations include (a sketch of the rug-pull check follows the list):

  • MCP Rug Pull: Prevents a server from changing its tool description after user consent. Clients must re-authorize if a new version is introduced.
  • MCP Shadowing: Detects agent sessions with access to sets of tools with semantically close descriptions, or outright conflicts.
  • Tool Poisoning: Uses heuristics or signature-based scanning to detect suspicious patterns in tool metadata, such as manipulative prompts or misleading capabilities, that are common in poisoning attacks.
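As a rough illustration of the rug-pull check, the shell sketch below pins a hash of the tool descriptions a user approved and flags any later change. It assumes a hypothetical HTTP endpoint that returns the gateway's aggregated tools/list response; a real gateway would perform this check internally.

# Hypothetical endpoint returning the gateway's aggregated tools/list response.
TOOLS_URL="http://localhost:8811/tools/list"

# At approval time: record a canonical hash of the tool descriptions.
# jq -S sorts object keys so identical content always hashes identically.
curl -s "$TOOLS_URL" | jq -S . | sha256sum | cut -d' ' -f1 > approved.sha256

# Before each session: re-fetch, re-hash, and compare.
current=$(curl -s "$TOOLS_URL" | jq -S . | sha256sum | cut -d' ' -f1)
if [ "$current" != "$(cat approved.sha256)" ]; then
  echo "Tool descriptions changed since approval; re-authorization required" >&2
  exit 1
fi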

Managing MCP server selection and authorization

As agentic systems evolve, it’s important to distinguish between two separate decisions: which MCP servers are trusted across an environment, and which are actually needed by a specific agent. The first defines a trusted perimeter, determining which servers can be used. The second is about intent and scope — deciding which servers should be used by a given client.

With the number of available MCP servers expected to grow rapidly, most agents will only require a small, curated subset. Managing this calls for clear policies around trust, selective exposure, and strict runtime controls. Ideally, these decisions should be enforced through platforms that already support container-based distribution, with built-in capabilities for storing, managing, and securely sharing workloads, along with the necessary guardrails to limit unintended access.
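These two decisions could be captured in a policy like the hypothetical sketch below. This is an illustration, not an actual Docker configuration format, and all server and agent names are invented.

{
  "trustedServers": [
    "org/github-mcp",
    "org/postgres-mcp",
    "org/search-mcp"
  ],
  "agents": {
    "release-bot": { "allowedServers": ["org/github-mcp"] },
    "data-analyst": { "allowedServers": ["org/postgres-mcp", "org/search-mcp"] }
  }
}

The trustedServers list defines the perimeter for the whole environment, while each agent entry scopes a specific client to the small subset of servers it actually needs.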

MCP security best practices

As the MCP spec evolves, we are already seeing helpful additions such as tool-level annotations like readOnlyHint and destructiveHint. A readOnlyHint can direct the runtime to mount file systems in read-only mode, minimizing the risk of unintentional changes. Networking hints can isolate an MCP server from the internet entirely or restrict outbound connections to a limited set of routes. Declaring these annotations in your tool's metadata is strongly recommended. They can be enforced at container runtime and help drive adoption — users are more likely to trust and run tools with clearly defined boundaries.
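For example, a tool might declare its boundaries in metadata like the sketch below, based on the tool annotations in the MCP spec; exact field placement may vary by SDK version.

{
  "name": "query_database",
  "description": "Run a read-only SQL query against the configured database",
  "annotations": {
    "readOnlyHint": true,
    "destructiveHint": false,
    "openWorldHint": false
  }
}

A container runtime could translate readOnlyHint into a read-only filesystem mount, and openWorldHint: false into a no-network or restricted-network sandbox.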

We’re starting by focusing on developer productivity. But making these guardrails easy to adopt and test means they won’t get in the way, and that’s a critical step toward building safer, more resilient agentic systems by default.

How Docker helps  

Containers offer a natural way to package and isolate MCP tools, making them easier and safer to run. Docker extends this further with its latest MCP Catalog and Toolkit, streamlining how trusted tools are discovered, shared, and executed.

While many developers know that Docker provides an API for containerized workloads, the Docker MCP Toolkit builds on that by enabling MCP clients to securely connect to any trusted server listed in your MCP Catalog. This creates a controlled interface between agents and tools, with the familiar benefits of container-based delivery: portability, consistency, and isolation.

Figure 2: Docker MCP Catalog and Toolkit securely connects MCP servers to clients by running them in containers.

The MCP Catalog, a part of Docker Hub, helps manage the growing ecosystem of tools by letting you identify trusted MCP servers while still giving you the flexibility to configure your MCP clients. Developers can not only decide which servers to make available to any agent, but also scope specific servers to their agents. The MCP Toolkit simplifies this further by exposing any set of trusted MCP servers through a single, unified connection, the MCP Gateway. 

Developers stay in control, defining how secrets are stored and which MCP servers are authorized to access them. Each server is referenced by a URL that points to a fully configured, ready-to-run Docker container. Since the runtime handles both content and configuration, agents interact only with MCP runtimes that are reproducible, verifiable, and self-contained. These runtimes are tamper-resistant, isolated, and constrained to access only the resources explicitly granted by the user. And because all MCP messages pass through one gateway, the MCP Toolkit offers a single enforcement point for detecting threats before they become visible to the MCP client.

Going back to the earlier example, our configuration is now a single connection to the Catalog with an allowed set of configured MCP server containers. The MCP client sees a managed view of the configured MCP servers over STDIO. The result: MCP clients have a safe connection to the MCP ecosystem!

{
  "mcpServers": {
    "mcpserver": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "alpine/socat", "STDIO", "TCP:host.docker.internal:8811"
      ]
    }
  }
}

Summary

We’re at a pivotal point in the evolution of MCP tool adoption. The ecosystem is expanding rapidly, and while it remains developer-led, more users are exploring ways to safely extend their agentic systems. Containers are proving to be the ideal delivery model for MCP tools — providing isolation, reproducibility, and security with minimal friction.

Docker’s MCP Catalog and Toolkit build on this foundation, offering a lightweight way to share and run trusted MCP servers. By packaging tools as containers, we can introduce guardrails without disrupting how users already consume MCP from their existing clients. The Catalog is compatible with any MCP client today, making it easy to get started without vendor lock-in.

Our goal is to support this fast-moving space by making MCP adoption as safe and seamless as possible, without getting in the way of innovation. We’re excited to keep working with the community to make MCP adoption not just easy and productive, but secure by default.


Highlights from Microsoft Build: Docker’s Innovations with AI and Windows on Arm

May 30, 2024, 17:21

Windows is back! That is my big takeaway from Microsoft Build last week. In recent years, Microsoft has focused on a broader platform that includes Windows and Linux and has adapted to the centrality of the browser in the modern world. But last week’s event was dominated by the launch of the Copilot+ PC, initially launched with Arm-based machines. We announced Docker Desktop support for Windows on Arm (long-awaited by many of you!) to accompany this exciting development.


The buzz around Arm-based machines

Sadly, we did not get to try any of the new hardware in-depth, but there was a lot of love and longing for the Snapdragon Dev Kit from those who had tried it, including our team back home. Arm-based Windows machines will ship from major manufacturers soon. Developers are power users of their machines, and AI has pushed up local performance requirements, which means more, faster machines sooner. What's not to like? (Well, the Recall feature preview won that prize.)

Copilots everywhere

It wasn’t all about Windows. Copilots were everywhere, including in the opening keynote, where our partner collaboration on Docker’s extension for GitHub Copilot was announced. If you missed it and thought Copilot was just the original assistant from GitHub, there are now 365 Copilots for everything from Excel to Power BI to Minecraft. Just emerging is the ability to build your own Copilots and an ecosystem of Copilots. Docker launched in the first wave of Copilot integrations, initially integrating into GitHub Copilot chat — with more to come. Check out our blog post for more on how the extension can help you with Dockerfiles and Compose files and how to use Docker.

Satya Nadella presents GitHub Copilot Extensions, including Docker, at Microsoft Build 2024.

Connecting with the community

The event’s vibe wasn’t just about the launches; it was about connecting with the people. As a hybrid event, Microsoft Build had a lively ongoing broadcast that was great fun and was being produced right across from the Docker booth. 

The Docker booth was constantly busy, with a stream of people with questions, requests, problems, and ideas, ranging from new Docker users to experienced dockhands, including those checking out new products like Docker Build Cloud, learning how it can help secure Dockerized apps in the Microsoft ecosystem, and getting hands-on with features like Docker Debug in Docker Desktop.

Justin Cormack recording in front of the Docker booth at Microsoft Build 2024.

Better together with Docker and Microsoft

I also really enjoyed getting the chance to share a handful of the better-together solutions that we’re collaborating on with Microsoft. You can watch my session from Thursday, Optimizing the Microsoft developer experience with Docker. And in a short session, Innovating the SDLC with insights from Docker, I shared a fresh perspective on how to navigate and streamline workflows through the SDLC. 

Microsoft Build was a fantastic opportunity to showcase our innovations and connect with the Microsoft developer community. We are excited about the solutions we are bringing to the Microsoft ecosystem and look forward to continuing our collaboration to enhance the developer experience with Docker and Microsoft’s better-together solutions.

Watch Docker talks at Microsoft Build


OpenSSH and XZ/liblzma: A Nation-State Attack Was Thwarted, What Did We Learn?

April 1, 2024, 19:05

I have recently been watching The Americans, a decade-old TV series about undercover KGB agents living disguised as a normal American family in Reagan’s America during a paranoid period of the Cold War. I was not expecting to spend this weekend reading mailing list posts describing the same type of operation being performed on open source maintainers by agents with equally shadowy identities (CVE-2024-3094).

As The Grugq explains, “The JK-persona hounds Lasse (the maintainer) over multiple threads for many months. Fortunately for Lasse, his new friend and star developer is there, and even more fortunately, Jia Tan has the time available to help out with maintenance tasks. What luck! This is exactly the style of operation a HUMINT organization will run to get an agent in place. They will position someone and then create a crisis for the target, one which the agent is able to solve.”

The operation played out over two years, getting the agent in place, setting up the infrastructure for the attack, hiding it from various tools, and then rushing to get it into Linux distributions before some recent changes in systemd were shipped that would have stopped this attack from working.

An equally unlikely accident led Andres Freund, a Postgres maintainer, to discover the attack before it had reached the vast majority of systems, via a probably accidental performance slowdown. Andres says, “I didn’t even notice it while logging in with SSH or such. I was doing some micro-benchmarking at the time and was looking to quiesce the system to reduce noise. Saw sshd processes were using a surprising amount of CPU, despite immediately failing because of wrong usernames etc. Profiled sshd. Which showed lots of cpu time in code with perf unable to attribute it to a symbol, with the dso showing as liblzma. Got suspicious. Then I recalled that I had seen an odd valgrind complaint in my automated testing of Postgres, a few weeks earlier, after some package updates were installed. Really required a lot of coincidences.”

It is hard to overstate how lucky we were here, as there are no tools that will detect this vulnerability. Even ex-post it is not possible to detect externally as we do not have the private key needed to trigger the vulnerability, and the code is very well hidden. While Linus’s law has been stated as “given enough eyeballs all bugs are shallow,” we have seen in the past this is not always true, or there are just not enough eyeballs looking at all the code we consume, even if this time it worked.

In terms of immediate actions, the attack appears to have been targeted at a subset of OpenSSH servers patched to integrate with systemd. Running SSH servers in containers is rare, so the initial priority should be container hosts, although as the issue was caught early, it is likely that few people updated. There is a stream of fixes to liblzma, the xz compression library where the exploit was placed, as the commits from the last two years are examined, although at present there is no evidence of exploits in any software other than OpenSSH. In the Docker Scout web interface, you can search for “lzma” in package names, and issues will be flagged in the “high profile vulnerabilities” policy.
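For those who prefer the CLI, a check along these lines should surface the same information. This assumes the Docker Scout CLI plugin is installed; the image name is a placeholder, and flag support may vary by version.

# Look for CVEs in xz/lzma-related packages in an image.
docker scout cves --only-package xz-utils myorg/myimage:latest
docker scout cves --only-package liblzma myorg/myimage:latest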

So many commentators have simple technical solutions, and so many vendors are using this to push their tools. As a technical community, we want there to be technical solutions to problems like this. Vendors want to sell their products after events like this, even though none even detected it. Rewrite it in Rust, shoot autotools, stop using GitHub tarballs and checked-in artifacts; the list goes on. These are not bad things to do, and there is no doubt that understandability and clarity are valuable for security, although we often trade them off for performance. It is the case that m4 and autotools are pretty hard to read and understand, while tools like ifunc allow dynamic dispatch even in a mostly static ecosystem. Large investments in the ecosystem to fix these issues would be worthwhile, but we know that attackers would simply find new vectors and weird machines. Equally, there are many naive suggestions about the people side, as if having a verified identity for open source developers would solve the problem, when there are very genuine people who wish to stay private, while state actors can easily find fake identities or “just say no” to untrusted people. Beware of people bringing easy solutions; there are so many in this hot-take world.

Where can we go from here? Awareness and observability first. Hyper-awareness even, as we see in this case that small clues matter. Don’t focus on the exact details of this attack, which will be different next time, but think more generally. Start by understanding your organization’s software consumption, supply chain, and critical points. Ask what you should be funding to make it different. Then build in resilience. Defense in depth, and diversity — not a monoculture. OpenSSH will always be a target because it is so widespread, and while the OpenBSD developers are doing great work, the target was upstream of them for exactly this reason. We need a diverse ecosystem with multiple strong solutions, and as an organization, you need second suppliers for critical software. The third critical piece of security in this era is recoverability. Planning for the scenario in which the worst case has happened, understanding the outcomes and recovery process, and making sure you are prepared with tabletop exercises around zero days is everyone’s homework now.

This is an opportunity for all of us to continue working together to strengthen the open source supply chain, and to work on resilience for when this happens next. We encourage dialogue and discussion on this within Docker communities.


Docker Acquires Mutagen for Continued Investment in Performance and Flexibility of Docker Desktop

June 27, 2023, 17:00

I’m excited to announce that Docker, voted the most-used and most-desired tool in Stack Overflow’s 2023 Developer Survey, has acquired Mutagen IO, Inc., the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. Mutagen’s synchronization and forwarding capabilities facilitate the seamless transfer of code, binary artifacts, and network requests between arbitrary locations, connecting local and remote development environments. When combined with Docker’s existing developer tools, Mutagen unlocks new possibilities for developers to innovate and accelerate development velocity with local and remote containerized development.

“Docker is more than a container tool. It comprises multiple developer tools that have become the industry standard for self-service developer platforms, empowering teams to be more efficient, secure, and collaborative,” says Docker CEO Scott Johnston. “Bringing Mutagen into the Docker family is another example of how we continuously evolve our offering to meet the needs of developers with a product that works seamlessly and improves the way developers work.”


The Mutagen acquisition introduces novel mechanisms for developers to extract the highest level of performance from their local hardware while simultaneously opening the gateway to the newest remote development solutions. We continue scaling the abilities of Docker Desktop to meet the needs of the growing number of developers, businesses, and enterprises relying on the platform.

“Docker Desktop is focused on equipping every developer and dev team with blazing-fast tools to accelerate app creation and iteration by harnessing the combined might of local and cloud resources. By seamlessly integrating and magnifying Mutagen’s capabilities within our platform, we will provide our users and customers with unrivaled flexibility and an extraordinary opportunity to innovate rapidly,” says Webb Stevens, General Manager, Docker Desktop.

“There are so many captivating integration and experimentation opportunities that were previously inaccessible as a third-party offering,” says Jacob Howard, the CEO at Mutagen. “As Mutagen’s lead developer and a Docker Captain, my ultimate goal has always been to enhance the development experience for Docker users. As an integral part of Docker’s technology landscape, Mutagen is now in a privileged position to achieve that goal.”

Jacob will join Docker’s engineering team, spearheading the integration of Mutagen’s technologies into Docker Desktop and other Docker products.

You can get started with Mutagen today by downloading the latest version of Docker Desktop and installing the Mutagen extension, available in the Docker Extensions Marketplace. Support for current Mutagen offerings, open source and paid, will continue as we develop new and better integration options.

FAQ | Docker Acquisition of Mutagen

With Docker’s acquisition of Mutagen, you’re sure to have questions. We’ve answered the most common ones in this FAQ.

As with all of our open source efforts, Docker strives to do right by the community. We want this acquisition to benefit everyone — community and customer — in keeping with our developer obsession.

What will happen to Mutagen Pro subscriptions and the Mutagen Extension for Docker Desktop?

Both will continue as we evaluate and develop new and better integration options. Existing Mutagen Pro subscribers will see an update to the supplier on their invoices, but no other billing changes will occur.

Will Mutagen become closed-source?

There are no plans to change the licensing structure of Mutagen’s open source components. Docker has always valued the contributions of open source communities.

Will Mutagen or its companion projects be discontinued?

There are no plans to discontinue any Mutagen projects. 

Will people still be able to contribute to Mutagen’s open source projects?

Yes! Mutagen has always benefited from outside collaboration in the form of feedback, discussion, and code contributions, and there’s no desire to change that relationship. For more information about how to participate in Mutagen’s development, see the contributing guidelines.

What about other downstream users, companies, and projects using Mutagen?

Mutagen’s open source licenses continue to allow the embedding and use of Mutagen by other projects, products, and tooling.

Who will provide support for Mutagen projects and products?

In the short term, support for Mutagen’s projects and products will continue to be provided through the existing support channels. We will work to merge support into Docker’s channels in the near future.

Is this replacing Virtiofs, gRPC-FUSE, or osxfs?

No, virtual filesystems will continue to be the default path for bind mounts in Docker Desktop. Docker is continuing to invest in the performance of these technologies.

How does Mutagen compare with other virtual or remote filesystems?

Mutagen is a synchronization engine rather than a virtual or remote filesystem. Mutagen can be used to synchronize files to native filesystems, such as ext4, trading typically imperceptible amounts of latency for full native filesystem performance.
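For example, a typical two-way session between a local directory and a container looks roughly like this; the names and paths are illustrative.

# Create a two-way sync session between a local directory and a container.
mutagen sync create --name=myproject ~/myproject docker://mycontainer/app

# Inspect active sessions.
mutagen sync list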

How does Mutagen compare with other synchronization solutions?

Mutagen focuses primarily on configuration and functionality relevant to developers.

How can I get started with Mutagen?

To get started with Mutagen, download the latest version of Docker Desktop and install the Mutagen Extension from the Docker Desktop Extensions Marketplace.
