- Collabnix
- Complete Guide: Migrating from NGINX Ingress to Kubernetes Gateway API in 2025. Learn how to migrate from the NGINX Ingress Controller to the Kubernetes Gateway API before the March 2026 deadline. Step-by-step guide with the ingress2gateway tool, code examples, and best practices.
- Collabnix
- How To Install Minikube on Ubuntu (Linux): Complete Step-by-Step Guide. Minikube is a powerful tool that lets you run Kubernetes locally on your Ubuntu machine. Whether you’re a developer testing Kubernetes applications or learning container orchestration, Minikube provides a lightweight, single-node Kubernetes cluster that’s perfect for development and testing purposes. In this comprehensive guide, I’ll walk you through everything you need to know about installing […]
- Collabnix
- Why Kubernetes Runs Better on GPUs? The convergence of Kubernetes and GPU computing has fundamentally transformed how organizations deploy and scale artificial intelligence, machine learning, and data science workloads. As GPU-accelerated applications become increasingly mainstream, understanding why Kubernetes excels at orchestrating GPU resources is essential for modern infrastructure teams. The Evolution of GPU Support in Kubernetes Kubernetes revolutionized container orchestration when […]
- Collabnix
- GPU Scheduling in Kubernetes: A Complete Guide. As artificial intelligence and machine learning workloads continue to dominate enterprise computing, Kubernetes has emerged as the de facto platform for orchestrating GPU-accelerated applications. With ‘Kubernetes AI’ experiencing a 300% increase in search volume in 2025 and 48% of organizations now running AI/ML workloads on Kubernetes, understanding GPU scheduling and […]
- Docker
From Compose to Kubernetes to Cloud: Designing and Operating Infrastructure with Kanvas
Docker has long been the simplest way to run containers. Developers start with a docker-compose.yml file, run docker compose up, and get things running fast.
As teams grow and workloads expand into Kubernetes and integrate into cloud services, simplicity fades. Kubernetes has become the operating system of the cloud, but your clusters rarely live in isolation. Real-world platforms are a complex intermixing of proprietary cloud services – AWS S3 buckets, Azure Virtual Machines, Google Cloud SQL databases – all running alongside your containerized workloads. You and your teams are working with clusters and clouds in a sea of YAML.
Managing this hybrid sprawl often means context switching between Docker Desktop, the Kubernetes CLI, cloud provider consoles, and infrastructure as code. Simplicity fades as you juggle multiple distinct tools.
Bringing clarity back from this chaos is the new Docker Kanvas Extension from Layer5 – a visual, collaborative workspace built right into Docker Desktop that allows you to design, deploy, and operate not just Kubernetes resources, but your entire cloud infrastructure across AWS, GCP, and Azure.
What Is Kanvas?
Kanvas is a collaborative platform designed for engineers to visualize, manage, and design multi-cloud and Kubernetes-native infrastructure. Kanvas transforms the concept of infrastructure as code into infrastructure as design. This means your architecture diagram is no longer just documentation – it is the source of truth that drives your deployment.
Built on top of Meshery (one of the Cloud Native Computing Foundation’s highest-velocity open source projects), Kanvas moves beyond simple Kubernetes manifests by using Meshery Models – definitions that describe the properties and behavior of specific cloud resources. This allows Kanvas to support a massive catalog of Infrastructure-as-a-Service (IaaS) components:
- AWS: 55+ services (e.g., EC2, Lambda, RDS, DynamoDB).
- Azure: 50+ components (e.g., Virtual Machines, Blob Storage, VNet).
- GCP: 60+ services (e.g., Compute Engine, BigQuery, Pub/Sub).
Kanvas bridges the gap between abstract architecture and concrete operations through two integrated modes: Designer and Operator.
Designer Mode (declarative mode)
Designer mode serves as a “blueprint studio” for cloud architects and DevOps teams, emphasizing declarative modeling – describing what your infrastructure should look like rather than how to build it step-by-step – making it ideal for GitOps workflows and team-based planning.
- Build and iterate collaboratively: Add annotations, comments for design reviews, and connections between components to visualize data flows, architectures, and relationships.
- Dry-run and validate deployments: Before touching production, simulate your deployments by performing a dry-run to verify that your configuration is valid and that you have the necessary permissions.
- Import and export: Import brownfield designs by connecting your existing clusters or by importing Helm charts from your GitHub repositories.
- Reuse patterns, clone, and share: Pick from a catalog of reference architectures, sample configurations, and infrastructure templates, so you can start from proven blueprints rather than a blank design. Share designs just as you would a Google Doc. Clone designs just as you would a GitHub repo. Merge designs just as you would in a pull request.
Operator Mode (imperative mode)
Kanvas Operator mode transforms static diagrams into live, managed infrastructure. When you switch to Operator mode, Kanvas stops being a configuration tool and becomes an active infrastructure console, using Kubernetes controllers (like AWS Controllers for Kubernetes (ACK) or Google Config Connector) to actively manage your designs.
Operator mode allows you to:
- Load testing and performance management: With Operator’s built-in load generator, you can execute stress tests and characterize service behavior by analyzing latency and throughput against predefined performance profiles, establishing baselines to measure the impact of infrastructure configuration changes made in Designer mode.
- Multi-player, interactive terminal: Open a shell session with your containers and execute commands, stream and search container logs without leaving the visual topology. Streamline your troubleshooting by sharing your session with teammates. Stay in-context and avoid context-switching to external command-line tools like kubectl.
- Integrated observability: Use the Prometheus integration to overlay key performance metrics (CPU usage, memory, request latency) and quickly spot “hotspots” in your architecture visually. Import your existing Grafana dashboards for deeper analysis.
- Multi-cluster, multi-cloud operations: Connect multiple Kubernetes clusters (across different clouds or regions) and manage workloads that span a GKE cluster and an EKS cluster in a single topology view, all from a single Kanvas interface.
While Kanvas Designer mode is about intent (what you want to build), Operator mode is about reality (what is actually running). Designer mode and Operator mode are simply two tightly integrated sides of the same coin.
With this understanding, let’s see both modes in action in Docker Desktop.
Walk-Through: From Compose to Kubernetes in Minutes
With the Docker Kanvas extension (install from Docker Hub), you can take any existing Docker Compose file and instantly see how it translates into Kubernetes, making it incredibly easy to understand, extend, and deploy your application at scale.
The Docker Samples repository offers a plethora of samples. Let’s use the Spring-based PetClinic example below.
# sample docker-compose.yml
services:
  petclinic:
    build:
      context: .
      dockerfile: Dockerfile.multi
      target: development
    ports:
      - 8000:8000
      - 8080:8080
    environment:
      - SERVER_PORT=8080
      - MYSQL_URL=jdbc:mysql://mysqlserver/petclinic
    volumes:
      - ./:/app
    depends_on:
      - mysqlserver
  mysqlserver:
    image: mysql:8
    ports:
      - 3306:3306
    environment:
      - MYSQL_ROOT_PASSWORD=
      - MYSQL_ALLOW_EMPTY_PASSWORD=true
      - MYSQL_USER=petclinic
      - MYSQL_PASSWORD=petclinic
      - MYSQL_DATABASE=petclinic
    volumes:
      - mysql_data:/var/lib/mysql
      - mysql_config:/etc/mysql/conf.d
volumes:
  mysql_data:
  mysql_config:
With your Docker Kanvas extension installed:
- Import sample app: Save the PetClinic docker-compose.yml file to your computer, then click to import or drag and drop the file onto Kanvas.
Kanvas renders an interactive topology of your stack showing services, dependencies (like MySQL), volumes, ports, and configurations, all mapped to their Kubernetes equivalents. Kanvas performs this rendering in phases, applying an increasing degree of scrutiny at each phase. We’ll explore the specifics of this tiered evaluation process in a moment.
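To make “Kubernetes equivalents” concrete, the mysqlserver service in the Compose file above maps roughly to a workload plus a Service and persistent storage. A hand-written sketch of the Service and volume-claim portion (illustrative only; Kanvas generates its own names and details):

```yaml
# Hand-written sketch of Kubernetes equivalents for the "mysqlserver"
# Compose service (illustrative; Kanvas generates its own names/details).
apiVersion: v1
kind: Service
metadata:
  name: mysqlserver          # the Compose service name becomes the DNS name
spec:
  selector:
    app: mysqlserver
  ports:
    - port: 3306             # from "3306:3306" in the Compose file
      targetPort: 3306
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data           # from the named volume "mysql_data"
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi           # assumed size; Compose named volumes carry no size
```

The interesting difference from Compose is that the named volume becomes an explicit claim with a size, and service discovery moves from Compose networking to a cluster DNS name.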
- Enhance the PetClinic design
From here, you can enhance the generated design in a visual, no-YAML way:
- Add a LoadBalancer, Ingress, or ConfigMap
- Configure Secrets for your database URL or sensitive environment variables
- Modify service relationships or attach new components
- Add comments or any other annotations.
Importantly, Kanvas saves your design as you make changes. This gives you production-ready deployment artifacts generated directly from your Compose file.
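For instance, moving the plaintext MySQL credentials from the Compose file into a Secret could look like the sketch below (values are the sample ones from the Compose file; the Secret name is illustrative):

```yaml
# Illustrative Secret holding the sample MySQL credentials from the
# Compose file; a Deployment can consume it via envFrom or secretKeyRef.
apiVersion: v1
kind: Secret
metadata:
  name: petclinic-db         # illustrative name
type: Opaque
stringData:                  # stringData lets you skip hand-encoding base64
  MYSQL_USER: petclinic
  MYSQL_PASSWORD: petclinic
  MYSQL_URL: "jdbc:mysql://mysqlserver/petclinic"
```

Kanvas lets you attach a component like this visually; the YAML above is simply what such a component serializes to.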
- Deploy to a cluster
With one click, deploy the design to any cluster connected to Docker Desktop or any other remote cluster. Kanvas handles the translation and applies your configuration.
- Switch modes and interact with your app
After deploying (or when managing an existing workload), switch to Operator mode to observe and manage your deployed design. You can:
- Inspect Deployments, Services, Pods, and their relationships.
- Open a terminal session with your containers for quick debugging.
- Tail and search your container logs and monitor resource metrics.
- Generate traffic and analyze the performance of your deployment under heavy load.
- Share your Operator View with teammates for collaborative management.
Within minutes, a Compose-based project becomes a fully managed Kubernetes workload, all without leaving Docker Desktop. This seamless flow from a simple Compose file to a fully managed, operable workload highlights the ease with which infrastructure can be visually managed, and leads us to the underlying principle of infrastructure as design.
Infrastructure as Design
Infrastructure as design elevates the visual layout of your stack to be the primary driver of its configuration: adjusting the proximity and connectedness of components is one and the same as configuring your infrastructure. In other words, the presence, absence, proximity, or connectedness of individual components (all of which affect how one component relates to another) augments the underlying configuration of each. Kanvas understands, at a granular level of detail, how each individual component relates to all other components, and augments the configuration of those components accordingly.
Kanvas renders the topology of your stack’s architecture in phases. The initial rendering performs a lightweight analysis of each component, establishing a baseline for the contents of your new design. A subsequent phase applies a more sophisticated analysis: Kanvas introspects the configuration of each of your stack’s components and their interdependencies, and proactively evaluates how each component relates to the others. Kanvas will add, remove, and update the configuration of your components as a result of this relationship evaluation.
This process of relationship evaluation is ongoing. Every time you make a change to your design, Kanvas re-evaluates each component configuration.
To offer an example, if you bring a Kubernetes Deployment into the vicinity of a Kubernetes Namespace, the two magnetize: the Deployment is visually placed inside the Namespace, and at the same time the Deployment’s configuration is mutated to include its new Namespace designation. Kanvas proactively evaluates and mutates the configuration of the infrastructure resources in your design as you make changes.
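The Namespace mutation can be sketched concretely. Assuming a Namespace named team-a (name is illustrative), the dropped Deployment would end up with a manifest along these lines:

```yaml
# Illustrative result of dragging a Deployment into a Namespace named
# "team-a": Kanvas sets metadata.namespace on the Deployment automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
  namespace: team-a          # added by the relationship evaluation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petclinic
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
        - name: petclinic
          image: petclinic:latest   # illustrative image name
```

Dragging the Deployment back out of the Namespace would, by the same logic, remove the namespace designation again.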
This ability of Kanvas to intelligently interpret and adapt to changes in your design, automatically managing configuration and relationships, is the key to achieving infrastructure as design. That power comes from pairing AI-like intelligence with the reliability of a policy-driven engine.
AI-like Intelligence, Anchored by Deterministic Truth
In an era where generative AI dramatically accelerates infrastructure design, the risk of “hallucinations”—plausible but functionally invalid configurations—remains a critical bottleneck. Kanvas solves this by pairing the generative power of AI with a rigid, deterministic policy engine.
This engine acts as an architectural guardrail, offering you precise control over the degree to which AI is involved in assessing configuration correctness. It transforms designs from simple visual diagrams into validated, deployable blueprints.
While AI models function probabilistically, Kanvas’s policy engine functions deterministically, automatically analyzing designs to identify, validate, and enforce connections between components based on ground-truth rules. Each of these rules is statically defined and versioned in its respective Kanvas model.
- Deep Contextualization: The evaluation goes beyond simple visualization. It treats relationships as context-aware and declarative, interpreting how components interact (e.g., data flows, dependencies, or resource sharing) to ensure designs are not just imaginative, but deployable and compliant.
- Semantic Rigor: The engine distinguishes between semantic relationships (infrastructure-meaningful, such as a TCP connection that auto-configures ports) and non-semantic relationships (user-defined visuals, like annotations). This ensures that aesthetic choices never compromise infrastructure integrity.
Kanvas acknowledges that trust is not binary. You maintain sovereignty over your designs through granular controls that dictate how the engine interacts with AI-generated suggestions:
- “Human-in-the-Loop” Slider: You can modulate the strictness of the policy evaluation. You might allow the AI to suggest high-level architecture while enforcing strict policies on security configurations (e.g., port exposure or IAM roles).
- Selective Evaluation: You can disable evaluations via preferences for specific categories. For example, you may trust the AI to generate a valid Kubernetes Service definition, but rely entirely on the policy engine to validate the Ingress controller linking to it.
Kanvas does not just flag errors; it actively works to resolve them using sophisticated detection and correction strategies.
- Intelligent Scanning: The engine scans for potential relationships based on component types, kinds, and subtypes (e.g., a Deployment linking to a Service via port exposure), catching logical gaps an AI might miss.
- Patches and Resolvers: When a partial or hallucinated configuration is detected, Kanvas applies patches to either propagate missing configuration or dynamically adjust configurations to resolve conflicts, ensuring the final infrastructure-as-code export (e.g., Kubernetes manifests, Helm chart) is clean, versionable, and secure.
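For intuition, the Deployment-to-Service port-exposure relationship mentioned above boils down to two things lining up: the Service selector matching the Deployment’s pod labels, and the Service targetPort matching the exposed containerPort. A minimal, illustrative pair (names assumed):

```yaml
# Illustrative Deployment/Service pair: the relationship the engine detects
# is (1) the Service selector matching the Deployment's pod labels and
# (2) the Service targetPort matching the exposed containerPort.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: petclinic
  template:
    metadata:
      labels:
        app: petclinic
    spec:
      containers:
        - name: petclinic
          image: petclinic:latest  # illustrative image name
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: petclinic
spec:
  selector:
    app: petclinic               # matches the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 8080           # matches the containerPort above
```

If either the labels or the ports drift apart, the link silently breaks at runtime; this is exactly the class of gap the scanning is described as catching.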
Turn Complexity into Clarity
Kanvas takes the guesswork out of managing modern infrastructure. For developers used to Docker Compose, it offers a natural bridge to Kubernetes and cloud services — with visibility and collaboration built in.
| Capability | How It Helps You |
|---|---|
| Import and Deploy Compose Apps | Move from Compose, Helm, or Kustomize to Kubernetes in minutes. |
| Visual Designer | Understand your architecture through connected, interactive diagrams. |
| Design Catalog | Use ready-made templates and proven infrastructure patterns. |
| Terminal Integration | Debug directly from the Kanvas UI, without switching tools. |
| Sharable Views | Collaborate on live infrastructure with your team. |
| Multi-Environment Management | Operate across local, staging, and cloud clusters from one dashboard. |
Kanvas brings visual design and real-time operations directly into Docker Desktop. Import your Compose files, Kubernetes Manifests, Helm Charts, and Kustomize files to explore the catalog of ready-to-use architectures, and deploy to Kubernetes in minutes — no YAML wrangling required.
Designs can also be exported in a variety of formats, including as OCI-compliant images and shared through registries like Docker Hub, GitHub Container Registry, or AWS ECR — keeping your infrastructure as design versioned and portable.
Install the Kanvas Extension from Docker Hub and start designing your infrastructure today.
- Collabnix
- What Are Pods in Kubernetes? A Complete Guide with Examples. Kubernetes has become the de facto standard for container orchestration, and at its core lies a fundamental building block: the Pod. Whether you’re just getting started with Kubernetes or looking to deepen your understanding, mastering Pods is essential for deploying and managing containerized applications effectively. In this comprehensive guide, we’ll explore what Pods are, how […]
- Collabnix
- Kubecost 3.0 is Here: A Major Architectural Shift for Kubernetes Cost Management. Kubernetes cost management just got a serious upgrade. The Kubecost Helm Chart repository, which provides the templates for developing this enterprise-grade application to monitor and manage Kubernetes spend, has released version 3.0, introducing fundamental changes aimed at improving performance, granularity, and efficiency. If you are currently running Kubecost 2.x, […]
- Collabnix
- What is Kagent and Why Should DevOps Engineers Care? In the rapidly evolving landscape of cloud-native infrastructure, Kagent emerges as the first open-source agentic AI framework purpose-built for Kubernetes environments. Developed by Solo.io and contributed to the Cloud Native Computing Foundation (CNCF), Kagent represents a paradigm shift from traditional automation to autonomous, reasoning-capable systems that can independently diagnose, troubleshoot, and resolve complex operational challenges […]
- Collabnix
- When to Switch from Serverless Architecture to Kubernetes. The Serverless vs Kubernetes debate isn’t about which technology is superior—it’s about finding the right tool for your specific problem. As applications mature and scale, many teams face a critical question: when does it make sense to migrate from serverless functions to a Kubernetes-based architecture? Let’s cut through the […]
- Collabnix
- Running AI Workloads on Kubernetes in 2025. If you’ve been keeping an eye on the cloud-native world, you’ve probably noticed how AI is shaking things up big time. As we roll into late 2025, one of the hottest trends in Kubernetes is its tight integration with AI workloads. We’re talking about everything from training massive […]
- Collabnix
- Hands-On Kubeflow Tutorial: Deploy Your First ML Pipeline on Kubernetes. Kubeflow is no longer “nice-to-have” — it’s the MLOps engine powering 90% of production AI on Kubernetes. But most tutorials stop at “Hello World.” We don’t. In 45 minutes, you’ll: Step 1: Set Up Local Kubernetes + Kubeflow. Verify: Access UI at http://localhost:8080 via port-forward. Step 2: Write the ML […]
- Docker
Docker Desktop 4.50: Indispensable for Daily Development
Docker Desktop 4.50 represents a major leap forward in how development teams build, secure, and ship software. Across the last several releases, we’ve delivered meaningful improvements that directly address the challenges you face every day: faster debugging workflows, enterprise-grade security controls that don’t get in your way, and seamless AI integration that makes modern development accessible to every team member.
Whether you’re debugging a build failure at 2 AM, managing security policies across distributed teams, or leveraging AI capabilities to build your applications, Docker Desktop delivers clear, real-world value that keeps your workflows moving and your infrastructure secure.
Accelerating Daily Development: Productivity and Control for Every Developer
Modern development teams face mounting pressures: complex multi-service applications, frequent context switching between tools, inconsistent local environments, and the constant need to balance productivity with security and governance requirements. For principal engineers managing these challenges, the friction of daily development workflows can significantly impact team velocity and code quality.
Docker Desktop addresses these challenges head-on by delivering seamless experiences that eliminate friction and giving organizations the control necessary to maintain security and compliance without slowing teams down.
Seamless Developer Experiences
Docker Debug is now free for all users, removing barriers to troubleshooting and making it easier for every developer on your team to diagnose issues quickly. The enhanced IDE integration goes deeper than ever before: the Dockerfile debugger in the VSCode Extension enables developers to step through build processes directly within their familiar editing environment, reducing the cognitive overhead of switching between tools. Whether you’re using VSCode, Cursor, or other popular editors, Docker Desktop integrates naturally into your existing workflow. For Windows-based enterprises, Docker Desktop’s ongoing engineering investments are delivering significant stability improvements with WSL2 integration, ensuring consistent performance for development teams at scale.
Getting applications from local development to production environments requires reducing the gap between how developers work locally and how applications run at scale. Compose to Kubernetes capabilities enable teams to translate local multi-service applications into production-ready Kubernetes deployments, while cagent provides a toolkit for running and developing agents that simplifies the development process. Whether you’re orchestrating containerized microservices or developing agentic AI workflows, Docker Desktop accelerates the path from experimentation to production deployment.
Enterprise-Level Control and Governance
For organizations requiring centralized management, Docker Desktop delivers enterprise-grade capabilities that maintain security without sacrificing developer autonomy. Administrators can set proxy settings via macOS configuration profiles, and can specify PAC files and embedded PAC scripts with installer flags on macOS and Windows, ensuring corporate network policies are enforced automatically at deployment time, without manual developer configuration, and further extending enterprise policy enforcement.
A faster release cadence with continuous updates ensures every developer runs the latest stable version with critical security patches, eliminating the traditional tension between IT requirements and developer productivity. The Kubernetes Dashboard is now part of the left navigation, making it easier to find and use.
Kind (k8s) Enterprise Support brings production-grade Kubernetes tooling to local development, enabling teams to test complex orchestration scenarios before deployment.
Figure 1: K8 Settings
Together, these capabilities build on Docker Desktop’s position as the foundation for modern development, adding enterprise-grade management that scales with your organization’s needs. You get the visibility and control that enterprise architecture teams require while preserving the speed and flexibility that keeps developers productive.
Securing Container Workloads: Enterprise-Grade Protection Without Sacrificing Speed
As containerized applications move from development to production and AI workloads proliferate across enterprises, security teams face a critical challenge: how do you enforce rigorous security controls without creating bottlenecks that slow development velocity? Traditional approaches often force organizations to choose between security and speed, but that’s a false choice that puts both innovation and infrastructure at risk.
Docker Desktop’s recent releases address this tension directly, delivering enterprise-grade security controls that operate transparently within developer workflows. These aren’t afterthought features; they’re foundational protections designed to give security and platform teams confidence at scale while keeping developers productive.
Granular Control Over Container Behavior
Enforce Local Port Bindings prevents services running in Docker Desktop from being exposed across the local network, ensuring developers maintain network isolation during local development while retaining full functionality. For teams in regulated industries where network segmentation requirements extend to development environments, this capability helps maintain compliance standards without disrupting developer workflows.
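Conceptually, the setting enforces what you can also express by hand in a Compose file: publishing ports on the loopback interface only, so they are unreachable from the rest of the local network. A hand-written illustration (this shows the effect, not the policy mechanism itself; the image name is assumed):

```yaml
# Loopback-only publishing in Compose: "127.0.0.1:8080:8080" binds only on
# localhost, whereas the short form "8080:8080" binds on all interfaces.
services:
  petclinic:
    image: petclinic:latest      # illustrative image name
    ports:
      - "127.0.0.1:8080:8080"    # reachable only from this machine
```

The admin-enforced setting removes the need for each developer to remember the `127.0.0.1:` prefix.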
Building on Secure Foundations
These runtime protections work in tandem with secure container foundations. Docker’s new Hardened Images are secure, minimal, production-ready container images maintained by Docker, with near-zero CVEs and enterprise SLA backing. Recent updates introduced unlimited catalog pricing and added Helm charts to the catalog. We also outlined Docker’s five pillars for Software Supply Chain Security, delivering transparency and eliminating the endless CVE remediation cycle. While Hardened Images are available as a separate add-on, they’re purpose-built to extend the secure-by-default foundation that Docker Desktop provides, giving teams a comprehensive approach to container security from development through production.
Seamless Enterprise Policy Integrations
The Docker CLI now gracefully handles certificates issued by non-conforming certificate authorities (CAs) that use negative serial numbers. While the X.509 standard specifies that certificate serial numbers must be positive, some enterprise PKI systems still produce certificates that violate this rule. Previously, organizations had to choose between adhering to their CA configuration and maintaining Docker compatibility, a frustrating trade-off that often led to insecure workarounds. Now, Docker Desktop works seamlessly with enterprise certificate infrastructure, ensuring developers can authenticate to private registries without security teams compromising their PKI standards.
These improvements reflect Docker’s commitment to being secure by default. Rather than treating security as a feature developers must remember to enable, Docker Desktop builds protection into the platform itself, giving enterprises the confidence to scale container adoption while maintaining the developer experience that drives innovation.
Unlocking AI Development: Making the Model Context Protocol (MCP) Accessible for Every Developer
As AI-native development becomes central to modern software engineering, developers face a critical challenge: integrating AI capabilities into their workflows shouldn’t require extensive configuration knowledge or create friction that slows teams down. The Model Context Protocol (MCP) offers powerful capabilities for connecting AI agents to development tools and data sources, but accessing and managing these integrations has historically been complex, creating barriers to adoption, especially for teams with varying technical expertise.
Docker is addressing these challenges directly by making MCP integration seamless and secure within Docker Desktop.
Guided Onboarding Through Learning Center and MCP Toolkit Walkthroughs and Improved MCP Server Discovery
Understanding that accessibility drives adoption, Docker has introduced a redesigned onboarding experience through the Learning Center. The new MCP Toolkit Walkthroughs guide teams through complex setup processes step-by-step, ensuring that engineers of all skill levels can confidently adopt AI-powered workflows. Further, Docker’s MCP Server Discovery feature simplifies discovery by enabling developers to search, filter, and sort available MCP servers efficiently. By eliminating the knowledge barriers and frictions around discovery, these improvements accelerate time to productivity and help organizations scale AI development practices across their teams.
Expanded Catalog: 270+ MCP Servers and Growing
The Docker MCP Catalog now includes over 270 MCP servers, with support for more than 60 remote servers. We’ve also added one-click connections for popular clients like Claude Code and Codex, making it easier than ever to supercharge your AI coding agents with powerful MCP tools. Getting started takes just a few clicks.
Remote MCP Server Support with Built-In OAuth
Connecting to MCP servers has traditionally meant dealing with manual tokens, fragile config files, and scattered credential management. It’s frustrating, especially for developers new to these workflows, who often don’t know where to find the right credentials in third-party tools. With the latest update to the Docker MCP Toolkit, developers can now securely connect to 60+ remote MCP servers, including Notion and Linear, using built-in OAuth support. This update goes beyond convenience; it lays the foundation for a more connected, intelligent, and automated developer experience, all within Docker Desktop. Read more about connecting to remote MCP servers.
Figure 2: Docker MCP Toolkit now supports remote MCP Servers with OAuth built-in
Smarter, More Efficient, and More Capable Agents with Dynamic MCPs
In this release, we’re introducing dynamic MCPs, a major step forward in enabling AI agents to discover, configure, and compose tools autonomously. Previously, integrating MCP servers required manual setup and static configurations. Now, with new features like Smart Search and Tool Composition, agents can search the MCP Catalog, pull only the tools they need, and even generate code to compose multi-tool workflows, all within a secure, sandboxed environment. These enhancements not only increase agent autonomy but also improve performance by reducing token usage and minimizing context bloat. Ultimately, this leads to less context switching and more focused time for developers. Read more about dynamic MCPs.
Together, these advancements represent Docker’s commitment to making AI-native development accessible and practical for development teams of any size.
Conclusion: Committed to Your Development Success
The innovations across Docker Desktop 4.45 through 4.50 reinforce our commitment to being the development solution teams rely on every day, for every workflow, at any scale.
We’ve made daily development faster and more integrated, with free debugging tools, native IDE support, and enterprise governance that actually works. We’ve strengthened security with controls that protect your infrastructure without creating bottlenecks. And we’ve made AI development accessible, turning complex integrations into guided experiences that accelerate your team’s capabilities. The impact is measurable: independent research from theCUBE found that Docker Desktop users achieve 50% faster build times and reclaim 10-40+ hours per developer each month, time that goes directly back into innovation.
This is Docker Desktop operating as your indispensable foundation: giving developers the tools they need to stay productive, giving security teams the controls they need to stay protected, and giving organizations the confidence they need to innovate at scale.
As we continue our accelerated release cadence, expect Docker to keep delivering the features that matter most to how you build, ship, and run modern applications. We’re committed to being the solution you can count on today and as your needs evolve.
Upgrade to the latest Docker Desktop now →
Learn more
- Subscribe to the Docker Navigator Newsletter
- Read theCUBE research report
- Explore the MCP Catalog: Discover containerized, security-hardened MCP servers
- Explore cagent and give it a star to follow along as it evolves
- New to Docker? Create an account.
- Have questions? The Docker community is here to help.
-
Collabnix
- Kubernetes Operators for ML: Complete CRD Implementation Guide
Master Kubernetes Operators for ML workloads. Complete guide to Custom Resource Definitions, controller implementation, and best practices with code examples.
-
Collabnix
- Multi-Agent Orchestration: Patterns and Best Practices for 2024
Master multi-agent orchestration with proven patterns, code examples, and best practices. Learn orchestration frameworks, deployment strategies, and troubleshooting.
-
Collabnix
- Fine-Tuning Open Source LLMs: Complete Infrastructure Guide 2024
Master LLM fine-tuning infrastructure with Kubernetes, GPU optimization, and distributed training. Includes YAML configs, troubleshooting, and cost optimization.
-
Collabnix
- Building LLM Evaluation Pipelines on Kubernetes: A Complete Guide
Learn to build production-grade LLM evaluation pipelines on Kubernetes with practical YAML configs, code examples, and best practices for scalable AI/ML workflows.
-
Collabnix
- Claude and Autonomous Agents: Practical Implementation Guide
Learn to build production-ready autonomous agents with Claude AI. Complete guide with code examples, Kubernetes deployment, and best practices for DevOps.
-
Collabnix
- Ollama API Integration: Building Production-Ready LLM Applications
Learn to build production-ready LLM applications with Ollama API. Complete guide with Python examples, Kubernetes deployment, and performance optimization tips.
-
Collabnix
- Autoscaling AI Workloads: HPA and KEDA for ML Applications
Master autoscaling for AI/ML workloads on Kubernetes using HPA and KEDA. Complete guide with YAML configs, code examples, and production best practices.