Docker Desktop 4.42: Native IPv6, Built-In MCP, and Better Model Packaging
Docker Desktop 4.42 introduces powerful new capabilities that enhance network flexibility, improve security, and deepen AI toolchain integration, all while reducing setup friction. With native IPv6 support, a fully integrated MCP Toolkit, and major upgrades to Docker Model Runner and our AI agent Gordon, this release continues our commitment to helping developers move faster, ship smarter, and build securely across any environment. Whether you’re managing enterprise-grade networks or experimenting with agentic workflows, Docker Desktop 4.42 brings the tools you need right into your development workflows.

IPv6 support
Docker Desktop now provides IPv6 networking capabilities with customization options to better support diverse network environments. You can now choose between dual IPv4/IPv6 (default), IPv4-only, or IPv6-only networking modes to align with your organization’s network requirements. The new intelligent DNS resolution behavior automatically detects your host’s network stack and filters unsupported record types, preventing connectivity timeouts in IPv4-only or IPv6-only environments.
These IPv6 settings are available in the Docker Desktop Settings > Resources > Network section and can be enforced across teams using Settings Management, making Docker Desktop more reliable in complex enterprise network configurations, including IPv6-only deployments.

Figure 1: Docker Desktop IPv6 settings
Docker MCP Toolkit integrated into Docker Desktop
Last month, we launched the Docker MCP Catalog and Toolkit to help developers easily discover MCP servers and securely connect them to their favorite clients and agentic apps. We’re humbled by the incredible support from the community. User growth is up by over 50%, and we’ve crossed 1 million pulls! Now, we’re excited to share that the MCP Toolkit is built right into Docker Desktop, no separate extension required.
You can now access more than 100 MCP servers, including GitHub, MongoDB, HashiCorp, and more, directly from Docker Desktop – just enable the servers you need, configure them, and connect them to clients like Claude Desktop, Cursor, Continue.dev, or Docker’s AI agent Gordon.
Unlike typical setups that run MCP servers via npx or uvx processes with broad access to the host system, Docker Desktop runs these servers inside isolated containers with well-defined security boundaries. All container images are cryptographically signed, with proper isolation of secrets and configuration data.

Figure 2: Docker MCP Toolkit is now integrated natively into Docker Desktop
To meet developers where they are, we’re bringing Docker MCP support to the CLI, using the same command structure you’re already familiar with. With the new docker mcp commands, you can launch, configure, and manage MCP servers directly from the terminal. The CLI plugin offers comprehensive functionality, including catalog management, client connection setup, and secret management.

Figure 3: Docker MCP CLI commands.
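For example, you might explore the catalog and wire up a client straight from the terminal. The subcommands below are illustrative assumptions – run docker mcp --help to see the exact commands available in your version:

# List what's available in the MCP catalog (illustrative subcommands)
docker mcp catalog ls

# Enable a server and connect a client such as Claude Desktop
docker mcp server enable github-official
docker mcp client connect claude-desktop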
Docker AI Agent Gordon Now Supports MCP Toolkit Integration
In this release, we’ve upgraded Gordon, Docker’s AI agent, with direct integration to the MCP Toolkit in Docker Desktop. To enable it, open Gordon, click the “Tools” button, and toggle on the “MCP Toolkit” option. Once activated, the MCP Toolkit tab will display tools available from any MCP servers you’ve configured.

Figure 4: Docker’s AI Agent Gordon now integrates with Docker’s MCP Toolkit, bringing 100+ MCP servers
This integration gives you immediate access to 100+ MCP servers with no extra setup, letting you experiment with AI capabilities directly in your Docker workflow. Gordon now acts as a bridge between Docker’s native tooling and the broader AI ecosystem, letting you leverage specialized tools for everything from screenshot capture to data analysis and API interactions – all from a consistent, unified interface.

Figure 5: Docker’s AI Agent Gordon uses the GitHub MCP server to pull issues and suggest solutions.
Finally, we’ve also improved the Dockerize feature with expanded support for Java, Kotlin, Gradle, and Maven projects. These improvements make it easier to containerize a wider range of applications with minimal configuration. With expanded containerization capabilities and integrated access to the MCP Toolkit, Gordon is more powerful than ever. It streamlines container workflows, reduces repetitive tasks, and gives you access to specialized tools, so you can stay focused on building, shipping, and running your applications efficiently.
Docker Model Runner adds Qualcomm support, Docker Engine Integration, and UX Upgrades
Staying true to our philosophy of giving developers more flexibility and meeting them where they are, the latest version of Docker Model Runner adds broader OS support, deeper integration with popular Docker tools, and improvements in both performance and usability.
In addition to supporting Apple Silicon and Windows systems with NVIDIA GPUs, Docker Model Runner now works on Windows devices with Qualcomm chipsets. Under the hood, we’ve upgraded our inference engine to use the latest version of llama.cpp, bringing significantly enhanced tool-calling capabilities to your AI applications. Docker Model Runner can now also be installed directly in Docker Engine Community Edition across multiple Linux distributions supported by Docker Engine. This integration is particularly valuable for developers looking to incorporate AI capabilities into their CI/CD pipelines and automated testing workflows. To get started, check out our documentation for the setup guide.
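As a rough sketch of what that looks like on a Linux host (the package name and model name below are assumptions based on the setup guide, so adjust for your distribution):

# Install the Model Runner plugin on a Debian/Ubuntu host (assumed package name; see the setup guide)
sudo apt-get update && sudo apt-get install -y docker-model-plugin

# Pull a small model from Docker Hub's AI catalog and send it a quick prompt
docker model pull ai/smollm2
docker model run ai/smollm2 "Say hello from Docker Model Runner."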
Get Up and Running with Models Faster
The Docker Model Runner user experience has been upgraded with expanded GUI functionality in Docker Desktop. All of these UI enhancements are designed to help you get started with Model Runner quickly and build applications faster. A dedicated interface now includes three new tabs that simplify model discovery and management and streamline troubleshooting workflows. Additionally, Docker Desktop’s updated GUI introduces a more intuitive onboarding experience with streamlined “two-click” actions.
After clicking on the Model tab, you’ll see three new sub-tabs. The first, labeled “Local,” displays a set of models in various sizes that you can quickly pull. Once a model is pulled, you can launch a chat interface to test and experiment with it immediately.

Figure 6: Access a set of models of various sizes to get started quickly in the Models menu of Docker Desktop
The second tab, “Docker Hub,” offers a comprehensive view for browsing and pulling models from Docker Hub’s AI Catalog, making it easy to get started directly within Docker Desktop, without switching contexts.

Figure 7: A shortcut to the Model catalog from Docker Hub in the Models menu of Docker Desktop
The third tab, “Logs,” offers real-time access to the inference engine’s log tail, giving developers immediate visibility into model execution status and debugging information directly within the Docker Desktop interface.

Figure 8: Gain visibility into model execution status and debugging information in Docker Desktop
Model Packaging Made Simple via CLI
The most significant enhancement to the Docker Model CLI is the introduction of the docker model package command. This new command enables developers to package their models from GGUF format into OCI-compliant artifacts, fundamentally transforming how AI models are distributed and shared. It enables seamless publishing to both public and private OCI-compatible registries, such as Docker Hub, and establishes a standardized, secure workflow for model distribution using the same trusted Docker tools developers already rely on. See our docs for more details.
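A minimal sketch of the packaging flow is shown below; the flag names are assumptions for illustration, so check the docker model package reference for the exact syntax:

# Package a local GGUF file as an OCI artifact and push it to a repository you control (illustrative flags)
docker model package --gguf ./my-model.gguf --push myorg/my-model:latest

# Anyone with access can then pull and run it like any other model
docker model pull myorg/my-model:latest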
Conclusion
From intelligent networking enhancements to seamless AI integrations, Docker Desktop 4.42 makes it easier than ever to build with confidence. With native support for IPv6, in-app access to 100+ MCP servers, and expanded platform compatibility for Docker Model Runner, this release is all about meeting developers where they are and equipping them with the tools to take their work further. Update to the latest version today and unlock everything Docker Desktop 4.42 has to offer.
Learn more
- Authenticate and update today to receive your subscription level’s newest Docker Desktop features.
- Subscribe to the Docker Navigator Newsletter.
- Learn about our sign-in enforcement options.
- New to Docker? Create an account.
- Have questions? The Docker community is here to help.
Docker Desktop 4.41: Docker Model Runner supports Windows, Compose, and Testcontainers integrations, Docker Desktop on the Microsoft Store
Big things are happening in Docker Desktop 4.41! Whether you’re building the next AI breakthrough or managing development environments at scale, this release is packed with tools to help you move faster and collaborate smarter. From bringing Docker Model Runner to Windows (with NVIDIA GPU acceleration!), Compose and Testcontainers, to new ways to manage models in Docker Desktop, we’re making AI development more accessible than ever. Plus, we’ve got fresh updates for your favorite workflows — like a new Docker DX Extension for Visual Studio Code, a speed boost for Mac users, and even a new location for Docker Desktop on the Microsoft Store. Also, we’re enabling ACH transfer as a payment option for self-serve customers. Let’s dive into what’s new!

Docker Model Runner now supports Windows, Compose & Testcontainers
This release brings Docker Model Runner to Windows users with NVIDIA GPU support. We’ve also introduced improvements that make it easier to manage, push, and share models on Docker Hub and integrate with familiar tools like Docker Compose and Testcontainers. Docker Model Runner works with Docker Compose projects for orchestrating model pulls and injecting model runner services, and Testcontainers via its libraries. These updates continue our focus on helping developers build AI applications faster using existing tools and workflows.
In addition to CLI support for managing models, Docker Desktop now includes a dedicated “Models” section in the GUI. This gives developers more flexibility to browse, run, and manage models visually, right alongside their containers, volumes, and images.

Figure 1: Easily browse, run, and manage models from Docker Desktop
Further extending the developer experience, you can now push models directly to Docker Hub, just like you would with container images. This creates a consistent, unified workflow for storing, sharing, and collaborating on models across teams. With models treated as first-class artifacts, developers can version, distribute, and deploy them using the same trusted Docker tooling they already use for containers — no extra infrastructure or custom registries required.
docker model push <model>
The Docker Compose integration makes it easy to define, configure, and run AI applications alongside traditional microservices within a single Compose file. This removes the need for separate tools or custom configurations, so teams can treat models like any other service in their dev environment.

Figure 2: Using Docker Compose to declare services, including running AI models
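As a rough sketch of what the Compose side can look like (the provider type and option names below are assumptions that may differ between Compose versions, so treat this as illustrative and check the Compose documentation):

services:
  web:
    build: .              # your application service
    depends_on:
      - llm

  llm:
    # A Compose "provider" service delegating to Docker Model Runner;
    # the type/options shown here are illustrative.
    provider:
      type: model
      options:
        model: ai/gemma3  # model name from Docker Hub's AI catalog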
Similarly, the Testcontainers integration extends testing to AI models, with initial support for Java and Go and more languages on the way. This allows developers to run applications and create automated tests using AI services powered by Docker Model Runner. By enabling full end-to-end testing with Large Language Models, teams can confidently validate application logic and integration code and drive high-quality releases.
String modelName = "ai/gemma3";

DockerModelRunnerContainer modelRunnerContainer = new DockerModelRunnerContainer()
    .withModel(modelName);
modelRunnerContainer.start();

OpenAiChatModel model = OpenAiChatModel.builder()
    .baseUrl(modelRunnerContainer.getOpenAIEndpoint())
    .modelName(modelName)
    .logRequests(true)
    .logResponses(true)
    .build();

String answer = model.chat("Give me a fact about Whales.");
System.out.println(answer);
Docker DX Extension for Visual Studio Code: Catch issues early, code with confidence
The Docker DX Extension is now live on the Visual Studio Marketplace. This extension streamlines your container development workflow with rich editing, linting features, and built-in vulnerability scanning. You’ll get inline warnings and best-practice recommendations for your Dockerfiles, powered by Build Check — a feature we introduced last year.
It also flags known vulnerabilities in container image references, helping you catch issues early in the dev cycle. For Bake files, it offers completion, variable navigation, and inline suggestions based on your Dockerfile stages. And for those managing complex Docker Compose setups, an outline view makes it easier to navigate and understand services at a glance.

Figure 3: The Docker DX Extension for Visual Studio Code provides actionable recommendations for fixing vulnerabilities and optimizing Dockerfiles
Read more about this in our announcement blog and GitHub repo. Get started today by installing Docker DX from the Visual Studio Marketplace.
macOS QEMU virtualization option deprecation
The QEMU virtualization option in Docker Desktop for Mac will be deprecated on July 14, 2025.
With the new Apple Virtualization Framework, you’ll experience improved performance, stability, and compatibility with macOS updates as well as tighter integration with Apple Silicon architecture.
What this means for you:
- If you’re using QEMU as your virtualization backend on macOS, you’ll need to switch to either Apple Virtualization Framework (default) or Docker VMM (beta) options.
- This does NOT affect QEMU’s role in emulating non-native architectures for multi-platform builds.
- Your multi-architecture builds will continue to work as before.
For complete details, please see our official announcement.
Introducing Docker Desktop in the Microsoft Store
Docker Desktop is now available for download from the Microsoft Store! We’re rolling out an EXE-based installer for Docker Desktop on Windows. This new distribution channel provides an enhanced installation and update experience for Windows users while simplifying deployment management for IT administrators across enterprise environments.
Key benefits
For developers:
- Automatic Updates: The Microsoft Store handles all update processes automatically, ensuring you’re always running the latest version without manual intervention.
- Streamlined Installation: Experience a more reliable setup process with fewer startup errors.
- Simplified Management: Manage Docker Desktop alongside your other applications in one familiar interface.
For IT admins:
- Native Intune MDM Integration: Deploy Docker Desktop across your organization with Microsoft’s native management tools.
- Centralized Deployment Control: Roll out Docker Desktop more easily through the Microsoft Store’s enterprise distribution channels.
- Automatic Updates Regardless of Security Settings: Updates are handled automatically by the Microsoft Store infrastructure, even in organizations where users don’t have direct store access.
- Familiar Process: The update mechanism maps to the winget command, providing consistency with other enterprise software management tools.
This new distribution option represents our commitment to improving the Docker experience for Windows users while providing enterprise IT teams with the management capabilities they need.
Unlock greater flexibility: Enable ACH transfer as a payment option for self-serve customers
We’re focused on making it easier for teams to scale, grow, and innovate, all on their own terms. That’s why we’re excited to announce an upgrade to the self-serve purchasing experience: customers can pay via ACH transfer starting on April 30, 2025.
Historically, self-serve purchases were limited to credit card payments, forcing many customers who could not use credit cards into manual sales processes, even for small seat expansions. With the introduction of an ACH transfer payment option, customers can choose the payment method that works best for their business. Fewer delays and less unnecessary friction.
This payment option upgrade empowers customers to:
- Purchase more independently without engaging sales
- Choose between credit card or ACH transfer with a verified bank account
By empowering enterprises and developers, we’re freeing up your time, and ours, to focus on what matters most: building, scaling, and succeeding with Docker.
Visit our documentation to explore the new payment options, or log in to your Docker account to get started today!
Wrapping up
With Docker Desktop 4.41, we’re continuing to meet developers where they are — making it easier to build, test, and ship innovative apps, no matter your stack or setup. Whether you’re pushing AI models to Docker Hub, catching issues early with the Docker DX Extension, or enjoying faster virtualization on macOS, these updates are all about helping you do your best work with the tools you already know and love. We can’t wait to see what you build next!
Learn more
- Authenticate and update today to receive your subscription level’s newest Docker Desktop features.
- Subscribe to the Docker Navigator Newsletter.
- Learn about our sign-in enforcement options.
- New to Docker? Create an account.
- Have questions? The Docker community is here to help.
8 Ways to Empower Engineering Teams to Balance Productivity, Security, and Innovation
This post was contributed by Lance Haig, a solutions engineer at Docker.
In today’s fast-paced development environments, balancing productivity with security while rapidly innovating is a constant juggle for senior leaders. Slow feedback loops, inconsistent environments, and cumbersome tooling can derail progress. As a solutions engineer at Docker, I’ve learned from my conversations with industry leaders that a key focus for senior leaders is on creating processes and providing tools that let developers move faster without compromising quality or security.
Let’s explore how Docker’s suite of products and Docker Business empowers industry leaders and their development teams to innovate faster, stay secure, and deliver impactful results.
1. Create a foundation for reliable workflows
A recurring pain point I’ve heard from senior leaders is the delay between code commits and feedback. One leader described how their team’s feedback loops stretched to eight hours, causing delays, frustration, and escalating costs.
Optimizing feedback cycles often involves localizing testing environments and offloading heavy build tasks. Teams leveraging containerized test environments — like Testcontainers Cloud — reduce this feedback loop to minutes, accelerating developer output. Similarly, offloading complex builds to managed cloud services ensures infrastructure constraints don’t block developers. The time saved here is directly reinvested in faster iteration cycles.
Incorporating Docker’s suite of products can significantly enhance development efficiency by reducing feedback loops. For instance, The Warehouse Group, New Zealand’s largest retail chain, transformed its development process by adopting Docker. This shift enabled developers to test applications locally, decreasing feedback loops from days to minutes. Consequently, deployments that previously took weeks were streamlined to occur within an hour of code submission.
2. Shorten feedback cycles to drive results
Inconsistent development environments continue to plague engineering organizations. These mismatches lead to wasted time troubleshooting “works-on-my-machine” errors or inefficiencies across CI/CD pipelines. Organizations achieve consistent environments across local, staging, and production setups by implementing uniform tooling, such as Docker Desktop.
For senior leaders, the impact isn’t just technical: predictable workflows simplify onboarding, reduce new hires’ time to productivity, and establish an engineering culture focused on output rather than firefighting.
For example, Ataccama, a data management company, leveraged Docker to expedite its deployment process. With containerized applications, Ataccama reduced application deployment lead times by 75%, achieving a 50% faster transition from development to production. By reducing setup time and simplifying environment configuration, Docker allows the team to spin up new containers instantly and shift their focus from managing infrastructure to delivering value.
3. Empower teams to collaborate in distributed workflows
Today’s hybrid and remote workforces make developer collaboration more complex. Secure, pre-configured environments help eliminate blockers when working across teams. Leaders who adopt centralized, standardized configurations — even in zero-trust environments — reduce setup time and help teams remain focused.
Docker Build Cloud further simplifies collaboration in distributed workflows by enabling developers to offload resource-intensive builds to a secure, managed cloud environment. Teams can leverage parallel builds, shared caching, and multi-architecture support to streamline workflows, ensuring that builds are consistent and fast across team members regardless of their location or platform. By eliminating the need for complex local build setups, Docker Build Cloud allows developers to focus on delivering high-quality code, not managing infrastructure.
Beyond tools, fostering collaboration requires a mix of practices: sharing containerized services, automating repetitive tasks, and enabling quick rollbacks. The right combination allows engineering teams to align better, focus on goals, and deliver outcomes quickly.
Empowering engineering teams with streamlined workflows and collaborative tools is only part of the equation. Leaders must also evaluate how these efficiencies translate into tangible cost savings, ensuring their investments drive measurable business value.
To learn more about how Docker simplifies the complex, read From Legacy to Cloud-Native: How Docker Simplifies Complexity and Boosts Developer Productivity.
4. Reduce costs
Every organization feels pressured to manage budgets effectively while delivering on demanding expectations. However, leaders can realize cost savings in unexpected areas, including hiring, attrition, and infrastructure optimization, by adopting consumption-based pricing models, streamlining operations, and leveraging modern tooling.
Easy access to all Docker products provides flexibility and scalability
Updated Docker plans make it easier for development teams to access everything they need under one subscription. Consumption is included for each new product, and more can be added as needed. This allows organizations to scale resources as their needs evolve and effectively manage their budgets.
Cost savings through streamlined operations
Organizations adopting Docker Business have reported significant reductions in infrastructure costs. For instance, a leading beauty company achieved a 25% reduction in infrastructure expenses by transitioning to a container-first development approach with Docker.
Bitso, a leading financial services company powered by cryptocurrency, switched to Docker Business from an alternative solution and reduced onboarding time from two weeks to a few hours per engineer, saving an estimated 7,700 hours over eight months while scaling the team. Returning to Docker after spending almost two years with the alternative open-source solution proved more cost-effective, decreasing the time spent onboarding, troubleshooting, and debugging. Further, after transitioning back to Docker, Bitso has experienced zero new support tickets related to Docker, significantly reducing the platform support burden.
Read the Bitso case study to learn why Bitso returned to Docker Business.
Reducing infrastructure costs with modern tooling
Organizations that adopt Docker’s modern tooling realize significant infrastructure cost savings by optimizing resource usage, reducing operational overhead, and eliminating inefficiencies tied to legacy processes.
By using Docker Build Cloud to offload resource-intensive builds to a managed cloud service with a shared cache, teams can achieve builds up to 39 times faster, saving approximately one hour per day per developer. For example, one customer told us their overall build times improved considerably through the shared cache feature: builds that previously took 15-20 minutes on a local machine now finish in 110 seconds with Docker Build Cloud – a massive improvement.
Check out our calculator to estimate your savings with Build Cloud.
5. Retain talent through frictionless environments
High developer turnover is expensive and often linked to frustration with outdated or inefficient tools. I’ve heard countless examples of developers leaving not because of the work but due to the processes and tooling surrounding it. Providing modern, efficient environments that allow experimentation while safeguarding guardrails improves satisfaction and retention.
Year after year, developers rank Docker as their favorite developer tool. For example, more than 65,000 developers participated in Stack Overflow’s 2024 Developer Survey, which recognized Docker as the most-used and most-desired developer tool for the second consecutive year, and as the most-admired developer tool.
Providing modern, efficient environments with Docker tools can enhance developer satisfaction and retention. While specific metrics vary, streamlined workflows and reduced friction are commonly cited as factors that improve team morale and reduce turnover. Retaining experienced developers not only preserves institutional knowledge but also reduces the financial burden of hiring and onboarding replacements.
6. Efficiently manage infrastructure
Consolidating development and operational tooling reduces redundancy and lowers overall IT spend. Organizations that migrate to standardized platforms see a decrease in toolchain maintenance costs and fewer internal support tickets. Simplified workflows mean IT and DevOps teams spend less time managing environments and more time delivering strategic value.
Some leaders, however, attempt to build rather than buy solutions for developer workflows, seeing it as cost-saving. This strategy carries risks: reliance on a single person or small team to maintain open-source tooling can result in technical debt, escalating costs, and subpar security. By contrast, platforms like Docker Business offer comprehensive protection and support, reducing long-term risks.
Cost management and operational efficiency go hand-in-hand with another top priority: security. As development environments grow more sophisticated, ensuring airtight security becomes critical — not just for protecting assets but also for maintaining business continuity and customer trust.
7. Secure developer environments
Security remains a top priority for all senior leaders. As organizations transition to zero-trust architectures, the role of developer workstations within this model grows. Developer systems, while powerful, are not exempt from being targets for potential vulnerabilities. Securing developer environments without stifling productivity is an ongoing leadership challenge.
Tightening endpoint security without reducing autonomy
Endpoint security starts with visibility, and Docker makes it seamless. With Image Access Management, Docker ensures that only trusted and compliant images are used throughout your development lifecycle, reducing exposure to vulnerabilities. However, these solutions are only effective if they don’t create bottlenecks for developers.
Recently, a business leader told me that taking over a team without visibility into developer environments and security revealed significant risks. Developers were operating without clear controls, exposing the organization to potential vulnerabilities and inefficiencies. By implementing better security practices and centralized oversight, the leaders improved visibility and reduced operational risks, enabling a more secure and productive environment for developer teams. This shift also addressed compliance concerns by ensuring the organization could effectively meet regulatory requirements and demonstrate policy adherence.
Securing the software supply chain
From trusted content repositories to real-time SBOM insights, securing dependencies is critical for reducing attack surfaces. In conversations with security-focused leaders, the message is clear: Supply chain vulnerabilities are both a priority and a pain point. Leaders are finding success when embedding security directly into developer workflows rather than adding it as a reactive step. Tools like Docker Scout provide real-time visibility into vulnerabilities within your software supply chain, enabling teams to address risks before they escalate.
Securing developer environments strengthens the foundation of your engineering workflows. But for many industries, these efforts must also align with compliance requirements, where visibility and control over processes can mean the difference between growth and risk.
Improving compliance
Compliance may feel like an operational requirement, but for senior leadership, it’s a strategic asset. In regulated industries, compliance enables growth. In less regulated sectors, it builds customer trust. Regardless of the driver, visibility and control are the cornerstones of effective compliance.
Proactive compliance, not reactive audits
Audits shouldn’t feel like fire drills. Proactive compliance ensures teams stay ahead of risks and disruptions. With the right processes in place — automated logging, integrated open-source software license checks, and clear policy enforcement — audit readiness becomes a part of daily operations. This proactive approach ensures teams stay ahead of compliance risks while reducing unnecessary disruptions.
While compliance ensures a stable and trusted operational baseline, innovation drives competitive advantage. Forward-thinking leaders understand that fostering creativity within a secure and compliant framework is the key to sustained growth.
8. Accelerating innovation
Every senior leader seeks to balance operational excellence with fostering innovation. Enabling engineers to move fast requires addressing two critical tensions: reducing barriers to experimentation and providing guardrails that maintain focus.
Building a culture of safe experimentation
Experimentation thrives in environments where developers feel supported and unencumbered. By establishing trusted guardrails — such as pre-approved images and automated rollbacks — teams gain the confidence to test bold ideas without introducing unnecessary risks.
From MVP to market quickly
Reducing friction in prototyping accelerates the time-to-market for Minimum Viable Products (MVPs). Leaders prioritizing local testing environments and streamlined approval processes create conditions where engineering creativity translates directly into a competitive advantage.
Innovation is no longer just about moving fast; it’s about moving deliberately. Senior leaders must champion the tools, practices, and environments that unlock their teams’ full potential.
Unlock the full potential of your teams
As a senior leader, you have a unique position to balance productivity, security, and innovation within your teams. Reflect on your current workflows and ask: Are your developers empowered with the right tools to innovate securely and efficiently? How does your organization approach compliance and risk management without stifling creativity?
Tools like Docker Business can be a strategic enabler, helping you address these challenges while maintaining focus on your goals.
Learn more
- Docker Scout: Integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and actionable recommendations to address issues before they reach production.
- Docker Health Scores: A security grading system for container images that offers teams clear insights into their image security posture.
- Docker Hub: Access trusted, verified content, including Docker Official Images (DOI), to build secure and compliant software applications.
- Docker Official Images (DOI): A curated set of high-quality images that provide a secure foundation for containerized applications.
- Image Access Management (IAM): Enforce image-sharing policies and restrict access to sensitive components, ensuring only trusted team members access critical assets.
- Hardened Docker Desktop: A tamper-proof, enterprise-grade development environment that aligns with security standards to minimize risks from local development.
Shift-Left Testing with Testcontainers: Catching Bugs Early with Local Integration Tests
Modern software development emphasizes speed and agility, making efficient testing crucial. DORA research reveals that elite teams thrive with both high performance and reliability. They achieve 127x faster lead times, 182x more deployments per year, 8x lower change failure rates, and, most impressively, 2,293x faster recovery times after incidents. The secret sauce is that they “shift left.”
Shift-Left is a practice that moves integration activities like testing and security earlier in the development cycle, allowing teams to detect and fix issues before they reach production. By incorporating local and integration tests early, developers can prevent costly late-stage defects, accelerate development, and improve software quality.
In this article, you’ll learn how integration tests can help you catch defects earlier in the development inner loop and how Testcontainers can make them feel as lightweight and easy as unit tests. Finally, we’ll break down the impact that shifting integration tests left has on development velocity and lead time for changes, according to DORA metrics.
Real-world example: Case sensitivity bug in user registration
In a traditional workflow, integration and E2E tests are often executed in the outer loop of the development cycle, leading to delayed bug detection and expensive fixes. For example, if you are building a user registration service where users enter their email addresses, you must ensure that the emails are case-insensitive and not duplicated when stored.
If case sensitivity is not handled properly and is assumed to be managed by the database, testing a scenario where users can register with duplicate emails differing only in letter case would only occur during E2E tests or manual checks. At that stage, it’s too late in the SDLC and can result in costly fixes.
By shifting testing earlier and enabling developers to spin up real services locally — such as databases, message brokers, cloud emulators, or other microservices — the testing process becomes significantly faster. This allows developers to detect and resolve defects sooner, preventing expensive late-stage fixes.
Let’s dive deep into this example scenario and how different types of tests would handle it.
Scenario
A new developer is implementing a user registration service and preparing for production deployment.
Code Example of the registerUser method
async registerUser(email: string, username: string): Promise<User> {
  const existingUser = await this.userRepository.findOne({
    where: { email: email }
  });
  if (existingUser) {
    throw new Error("Email already exists");
  }
  ...
}
The Bug
The registerUser method doesn’t handle case sensitivity properly and relies on the database or the UI framework to handle case insensitivity by default. So, in practice, users can register duplicate emails that differ only in letter case (e.g., user@example.com and USER@example.com).
Impact
- Authentication issues arise because email case mismatches cause login failures.
- Security vulnerabilities appear due to duplicate user identities.
- Data inconsistencies complicate user identity management.
Testing method 1: Unit tests.
These tests only validate the code itself, so email case sensitivity verification relies on the database where SQL queries are executed. Since unit tests don’t run against a real database, they can’t catch issues like case sensitivity.
Testing method 2: End-to-end test or manual checks.
These verifications will only catch the issue after the code is deployed to a staging environment. While automation can help, detecting issues this late in the development cycle delays feedback to developers and makes fixes more time-consuming and costly.
Testing method 3: Using mocks to simulate database interactions with Unit Tests.
One approach that could work and allow us to iterate quickly would be to mock the database layer and define a mock repository that responds with the error. Then, we could write a unit test that executes really fast:
test('should prevent registration with same email in different case', async () => {
  const userService = new UserRegistrationService(new MockRepository());

  await userService.registerUser({ email: 'user@example.com', password: 'password123' });

  await expect(userService.registerUser({ email: 'USER@example.com', password: 'password123' }))
    .rejects.toThrow('Email already exists');
});
In the above example, the User service is created with a mock repository that holds an in-memory representation of the database, i.e., a map of users. This mock repository will detect when the same email is registered twice, probably using the email as a case-insensitive key, and return the expected error.
Here, we have to code the validation logic in the mock, replicating what the User service or the database should do. Whenever the user’s validation needs a change, e.g. not including special characters, we have to change the mock too. Otherwise, our tests will assert against an outdated state of the validations. If the usage of mocks is spread across the entire codebase, this maintenance could be very hard to do.
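For illustration, such a mock repository (hypothetical, written to match the description above) might look like this:

// Hypothetical in-memory stand-in for the real user repository
class MockRepository {
  private usersByEmail = new Map<string, { email: string; password: string }>();

  async findOne({ where }: { where: { email: string } }) {
    // Replicate the case-insensitive behavior we expect from the real database
    return this.usersByEmail.get(where.email.toLowerCase()) ?? null;
  }

  async save(user: { email: string; password: string }) {
    this.usersByEmail.set(user.email.toLowerCase(), user);
    return user;
  }
}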
To avoid that, we should consider integration tests with real representations of the services we depend on. In the above example, using the real database repository is much better than mocks because it gives us more confidence in what we are testing.
Testing method 4: Shift-left local integration tests with Testcontainers
Instead of using mocks, or waiting for staging to run the integration or E2E tests, we can detect the issue earlier. This is achieved by enabling developers to run the integration tests for the project locally in the developer’s inner loop, using Testcontainers with a real PostgreSQL database.
Benefits
- Time Savings: Tests run in seconds, catching the bug early.
- More Realistic Testing: Uses an actual database instead of mocks.
- Confidence in Production Readiness: Ensures business-critical logic behaves as expected.
Example integration test
First, let’s set up a PostgreSQL container using the Testcontainers library and create a userRepository to connect to this PostgreSQL instance:
let userService: UserRegistrationService;

beforeAll(async () => {
  container = await new PostgreSqlContainer("postgres:16")
    .start();

  dataSource = new DataSource({
    type: "postgres",
    host: container.getHost(),
    port: container.getMappedPort(5432),
    username: container.getUsername(),
    password: container.getPassword(),
    database: container.getDatabase(),
    entities: [User],
    synchronize: true,
    logging: true,
    connectTimeoutMS: 5000
  });
  await dataSource.initialize();

  const userRepository = dataSource.getRepository(User);
  userService = new UserRegistrationService(userRepository);
}, 30000);
Now, with initialized userService, we can use the registerUser method to test user registration with the real PostgreSQL instance:
test('should prevent registration with same email in different case', async () => {
  await userService.registerUser({ email: 'user@example.com', password: 'password123' });

  await expect(userService.registerUser({ email: 'USER@example.com', password: 'password123' }))
    .rejects.toThrow('Email already exists');
});
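For completeness, a matching teardown (a small sketch assuming the container and dataSource variables created in the setup above) keeps test runs from leaking resources:

afterAll(async () => {
  // Close the TypeORM connection, then stop the throwaway PostgreSQL container
  await dataSource.destroy();
  await container.stop();
});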
Why This Works
- Uses a real PostgreSQL database via Testcontainers
- Validates case-insensitive email uniqueness
- Verifies email storage format
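And once the failing test has exposed the bug, the fix itself is small. One possible shape – a sketch under the assumption of a TypeORM-style repository, not the article’s exact implementation – is to normalize the email before lookup and storage:

async registerUser(email: string, username: string): Promise<User> {
  // Normalize so 'user@example.com' and 'USER@example.com' collide as intended
  const normalizedEmail = email.trim().toLowerCase();

  const existingUser = await this.userRepository.findOne({
    where: { email: normalizedEmail }
  });
  if (existingUser) {
    throw new Error("Email already exists");
  }

  const user = this.userRepository.create({ email: normalizedEmail, username });
  return this.userRepository.save(user);
}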
How Testcontainers helps
Testcontainers modules provide preconfigured implementations for the most popular technologies, making it easier than ever to write robust tests. Whether your application relies on databases, message brokers, cloud services like AWS (via LocalStack), or other microservices, Testcontainers has a module to streamline your testing workflow.
With Testcontainers, you can also mock and simulate service-level interactions or use contract tests to verify how your services interact with others. Combining this approach with local testing against real dependencies, Testcontainers provides a comprehensive solution for local integration testing and eliminates the need for shared integration testing environments, which are often difficult and costly to set up and manage. To run Testcontainers tests, you need a Docker context to spin up containers. Docker Desktop ensures seamless compatibility with Testcontainers for local testing.
Testcontainers Cloud: Scalable Testing for High-Performing Teams
Testcontainers is a great solution to enable integration testing with real dependencies locally. If you want to take testing a step further — scaling Testcontainers usage across teams, monitoring images used for testing, or seamlessly running Testcontainers tests in CI — you should consider using Testcontainers Cloud. It provides ephemeral environments without the overhead of managing dedicated test infrastructure. Using Testcontainers Cloud locally and in CI ensures consistent testing outcomes, giving you greater confidence in your code changes. Additionally, Testcontainers Cloud allows you to seamlessly run integration tests in CI across multiple pipelines, helping to maintain high-quality standards at scale. Finally, Testcontainers Cloud is more secure and ideal for teams and enterprises who have more stringent requirements for containers’ security mechanisms.
Measuring the business impact of shift-left testing
As we have seen, shift-left testing with Testcontainers significantly improves defect detection rate and time and reduces context switching for developers. Let’s take the example above and compare different production deployment workflows and how early-stage testing would impact developer productivity.
Traditional workflow (shared integration environment)
Process breakdown:
The traditional workflow comprises writing feature code, running unit tests locally, committing changes, and creating pull requests for the verification flow in the outer loop. If a bug is detected in the outer loop, developers have to go back to their IDE and repeat the process of running the unit test locally and other steps to verify the fix.

Figure 1: Workflow of a traditional shared integration environment broken down by time taken for each step.
Lead Time for Changes (LTC): It takes at least 1 to 2 hours to discover and fix the bug (more depending on CI/CD load and established practices). In the best-case scenario, it would take approximately 2 hours from code commit to production deployment. In the worst-case scenario, it may take several hours or even days if multiple iterations are required.
Deployment Frequency (DF) Impact: Since fixing a pipeline failure can take around 2 hours and there’s a daily time constraint (8-hour workday), you can realistically deploy only 3 to 4 times per day. If multiple failures occur, deployment frequency can drop further.
Additional associated costs: Pipeline workers’ runtime minutes and Shared Integration Environment maintenance costs.
Developer Context Switching: Since bug detection occurs about 30 minutes after the code commit, developers lose focus. This leads to increased cognitive load as they constantly context switch, debug, and then context switch again.
Shift-left workflow (local integration testing with Testcontainers)
Process breakdown:
The shift-left workflow is much simpler and starts with writing code and running unit tests. Instead of running integration tests in the outer loop, developers can run them locally in the inner loop to troubleshoot and fix issues. The changes are verified again before proceeding to the next steps and the outer loop.

Figure 2: Shift-Left Local Integration Testing with Testcontainers workflow broken down by time taken for each step. The feedback loop is much faster and saves developers time and headaches downstream.
Lead Time for Changes (LTC): It takes less than 20 minutes to discover and fix the bug in the developers’ inner loop. Therefore, local integration testing enables at least 65% faster defect identification than testing on a Shared Integration Environment.
Deployment Frequency (DF) Impact: Since the defect was identified and fixed locally within 20 minutes, the pipeline would run to production, allowing for 10 or more deployments daily.
Additional associated costs: 5 Testcontainers Cloud minutes are consumed.
Developer Context Switching: No context switching for the developer, as tests running locally provide immediate feedback on code changes and let the developer stay focused within the IDE and in the inner loop.
Key Takeaways
| Metric | Traditional Workflow (Shared Integration Environment) | Shift-Left Workflow (Local Integration Testing with Testcontainers) | Improvements and further references |
|---|---|---|---|
| Faster Lead Time for Changes (LTC) | Code changes validated in hours or days. Developers wait for shared CI/CD environments. | Code changes validated in minutes. Testing is immediate and local. | >65% faster Lead Time for Changes (LTC) – Microsoft reduced lead time from days to hours by adopting shift-left practices. |
| Higher Deployment Frequency (DF) | Deployment happens daily, weekly, or even monthly due to slow validation cycles. | Continuous testing allows multiple deployments per day. | 2x higher Deployment Frequency – the 2024 DORA Report shows shift-left practices more than double deployment frequency. Elite teams deploy 182x more often. |
| Lower Change Failure Rate (CFR) | Bugs that escape into production can lead to costly rollbacks and emergency fixes. | More bugs are caught earlier in CI/CD, reducing production failures. | Lower Change Failure Rate – IBM’s Systems Sciences Institute estimates defects found in production cost 15x more to fix than those caught early. |
| Faster Mean Time to Recovery (MTTR) | Fixes take hours, days, or weeks due to complex debugging in shared environments. | Rapid bug resolution with local testing. Fixes verified in minutes. | Faster MTTR – DORA’s elite performers restore service in less than one hour, compared to weeks to a month for low performers. |
| Cost Savings | Expensive shared environments, slow pipeline runs, high maintenance costs. | Eliminates costly test environments, reducing infrastructure expenses. | Significant cost savings – ThoughtWorks Technology Radar highlights shared integration environments as fragile and expensive. |
Table 1: Summary of key metric improvements from a shift-left workflow with local testing using Testcontainers
Conclusion
Shift-left testing improves software quality by catching issues earlier, reducing debugging effort, enhancing system stability, and overall increasing developer productivity. As we’ve seen, traditional workflows relying on shared integration environments introduce inefficiencies, increasing lead time for changes, deployment delays, and cognitive load due to frequent context switching. In contrast, by introducing Testcontainers for local integration testing, developers can achieve:
- Faster feedback loops – Bugs are identified and resolved within minutes, preventing delays.
- More reliable application behavior – Testing in realistic environments ensures confidence in releases.
- Reduced reliance on expensive staging environments – Minimizing shared infrastructure cuts costs and streamlines the CI/CD process.
- Better developer flow state – Easily setting up local test scenarios and re-running them fast for debugging helps developers stay focused on innovation.
Testcontainers provides an easy and efficient way to test locally and catch expensive issues earlier. To scale across teams, developers can consider using Docker Desktop and Testcontainers Cloud to run unit and integration tests locally, in the CI, or ephemeral environments without the complexity of maintaining dedicated test infrastructure. Learn more about Testcontainers and Testcontainers Cloud in our docs.
Further Reading
- Sign up for a Testcontainers Cloud account.
- Follow the guide: Mastering Testcontainers Cloud by Docker: streamlining integration testing with containers
- Connect on the Testcontainers Slack.
- Get started with the Testcontainers guide.
- Learn about Testcontainers best practices.
- Learn about Spring Boot Application Testing and Development with Testcontainers
- Subscribe to the Docker Newsletter.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
How to Dockerize a Django App: Step-by-Step Guide for Beginners
One of the best ways to make sure your web apps work well in different environments is to containerize them. Containers let you work in a more controlled way, which makes development and deployment easier. This guide will show you how to containerize a Django web app with Docker and explain why it’s a good idea.
We will walk through creating a Docker container for your Django application. Docker gives you a standardized environment, which makes it easier to get up and running and more productive. This tutorial is aimed at those new to Docker who already have some experience with Django. Let’s get started!

Why containerize your Django application?
Django apps can be put into containers to help you work more productively and consistently. Here are the main reasons why you should use Docker for your Django project:
- Creates a stable environment: Containers provide a stable environment with all dependencies installed, so you don’t have to worry about “it works on my machine” problems. This ensures that you can reproduce the app and use it on any system or server. Docker makes it simple to set up local environments for development, testing, and production.
- Ensures reproducibility and portability: A Dockerized app bundles all the environment variables, dependencies, and configurations, so it always runs the same way. This makes it easier to deploy, especially when you’re moving apps between environments.
- Facilitates collaboration between developers: Docker lets your team work in the same environment, so there’s less chance of conflicts from different setups. Shared Docker images make it simple for your team to get started with fewer setup requirements.
- Speeds up deployment processes: Docker makes it easier for developers to get started with a new project quickly. It removes the hassle of setting up development environments and ensures everyone is working in the same place, which makes it easier to merge changes from different developers.
Getting started with Django and Docker
Setting up a Django app in Docker is straightforward. You don’t need to do much more than add in the basic Django project files.
Tools you’ll need
To follow this guide, make sure you first:
- Install Docker Desktop and Docker Compose on your machine.
- Use a Docker Hub account to store and access Docker images.
- Make sure Django is installed on your system.
If you need help with the installation, you can find detailed instructions on the Docker and Django websites.
How to Dockerize your Django project
The following six steps include code snippets to guide you through the process.
Step 1: Set up your Django project
1. Initialize a Django project.
If you don’t have a Django project set up yet, you can create one with the following commands:
django-admin startproject my_docker_django_app
cd my_docker_django_app
2. Create a requirements.txt file.
In your project, create a requirements.txt file to store dependencies:
pip freeze > requirements.txt
3. Update key environment settings.
You need to change some sections in the settings.py file to enable them to be set using environment variables when the container is started. This allows you to change these settings depending on the environment you are working in.
# The secret key
SECRET_KEY = os.environ.get("SECRET_KEY")

DEBUG = bool(os.environ.get("DEBUG", default=0))

ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS","127.0.0.1").split(",")
Step 2: Create a Dockerfile
A Dockerfile is a script that tells Docker how to build your Docker image. Put it in the root directory of your Django project. Here’s a basic Dockerfile setup for Django:
# Use the official Python runtime image
FROM python:3.13

# Create the app directory
RUN mkdir /app

# Set the working directory inside the container
WORKDIR /app

# Set environment variables
# Prevents Python from writing pyc files to disk
ENV PYTHONDONTWRITEBYTECODE=1
# Prevents Python from buffering stdout and stderr
ENV PYTHONUNBUFFERED=1

# Upgrade pip
RUN pip install --upgrade pip

# Copy the Django project and install dependencies
COPY requirements.txt /app/

# Run this command to install all dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the Django project to the container
COPY . /app/

# Expose the Django port
EXPOSE 8000

# Run Django's development server
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Each line in the Dockerfile serves a specific purpose:
- FROM: Selects the image with the Python version you need.
- WORKDIR: Sets the working directory of the application within the container.
- ENV: Sets the environment variables needed to build the application.
- RUN and COPY commands: Install dependencies and copy project files.
- EXPOSE and CMD: Expose the Django server port and define the startup command.
You can build the Django Docker container with the following command:
docker build -t django-docker .
To see your image, you can run:
docker image list
The result will look something like this:
REPOSITORY      TAG       IMAGE ID       CREATED          SIZE
django-docker   latest    ace73d650ac6   20 seconds ago   1.55GB
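If you want to try the image before the production-oriented improvements below, you can run it directly. A quick sketch – the environment variable values here are placeholders that mirror the settings.py changes from Step 1:

docker run --rm -p 8000:8000 \
  -e SECRET_KEY=dev-secret \
  -e DEBUG=1 \
  -e DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1 \
  django-docker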
Although this is a great start in containerizing the application, you’ll need to make a number of improvements to get it ready for production.
- The CMD, which runs manage.py runserver, is only meant for development purposes and should be changed for a WSGI server.
- Reduce the size of the image by using a smaller base image.
- Optimize the image by using a multistage build process.
Let’s get started with these improvements.
Update requirements.txt
Make sure to add gunicorn to your requirements.txt. It should look like this:
asgiref==3.8.1
Django==5.1.3
sqlparse==0.5.2
gunicorn==23.0.0
psycopg2-binary==2.9.10
Make improvements to the Dockerfile
The Dockerfile below has changes that solve the three items on the list. The changes to the file are as follows:
- Updated the FROM python:3.13 image to FROM python:3.13-slim. This change reduces the size of the image considerably, as the image now only contains what is needed to run the application.
- Added a multi-stage build process to the Dockerfile. When you build applications, there are usually many files left on the file system that are only needed during build time and are not needed once the application is built and running. By adding a build stage, you use one image to build the application and then move the built files to the second image, leaving only the built code. Read more about multi-stage builds in the documentation.
- Added the Gunicorn WSGI server to enable a production-ready deployment of the application.
# Stage 1: Base build stage
FROM python:3.13-slim AS builder

# Create the app directory
RUN mkdir /app

# Set the working directory
WORKDIR /app

# Set environment variables to optimize Python
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Upgrade pip and install dependencies
RUN pip install --upgrade pip

# Copy the requirements file first (better caching)
COPY requirements.txt /app/

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Stage 2: Production stage
FROM python:3.13-slim

RUN useradd -m -r appuser && \
    mkdir /app && \
    chown -R appuser /app

# Copy the Python dependencies from the builder stage
COPY --from=builder /usr/local/lib/python3.13/site-packages/ /usr/local/lib/python3.13/site-packages/
COPY --from=builder /usr/local/bin/ /usr/local/bin/

# Set the working directory
WORKDIR /app

# Copy application code
COPY --chown=appuser:appuser . .

# Set environment variables to optimize Python
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Switch to non-root user
USER appuser

# Expose the application port
EXPOSE 8000

# Start the application using Gunicorn
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "my_docker_django_app.wsgi:application"]
Build the Docker container image again.
docker build -t django-docker .
After making these changes, we can run docker image list again:

REPOSITORY      TAG       IMAGE ID       CREATED         SIZE
django-docker   latest    3c62f2376c2c   6 seconds ago   299MB
You can see a significant improvement in the size of the container.
The size was reduced from 1.6 GB to 299 MB, which leads to a faster deployment process when images are downloaded and lower storage costs when storing images.
You could also use the docker init command to generate the Dockerfile and compose.yml file for your application to get you started.
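For reference, docker init is interactive; run it from the project root and answer its prompts (the exact questions vary by Docker Desktop version):

docker init
# Prompts for the application platform (e.g., Python), version, port, and start command,
# then generates a Dockerfile, .dockerignore, and a compose.yaml you can adapt to the examples above.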
Step 3: Configure the Docker Compose file
A compose.yml file allows you to manage multi-container applications. Here, we’ll define both a Django container and a PostgreSQL database container.

The compose file makes use of an environment file called .env, which will make it easy to keep the settings separate from the application code. The environment variables listed here are standard for most applications:
services:
  db:
    image: postgres:17
    environment:
      POSTGRES_DB: ${DATABASE_NAME}
      POSTGRES_USER: ${DATABASE_USERNAME}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file:
      - .env
  django-web:
    build: .
    container_name: django-docker
    ports:
      - "8000:8000"
    depends_on:
      - db
    environment:
      DJANGO_SECRET_KEY: ${DJANGO_SECRET_KEY}
      DEBUG: ${DEBUG}
      DJANGO_LOGLEVEL: ${DJANGO_LOGLEVEL}
      DJANGO_ALLOWED_HOSTS: ${DJANGO_ALLOWED_HOSTS}
      DATABASE_ENGINE: ${DATABASE_ENGINE}
      DATABASE_NAME: ${DATABASE_NAME}
      DATABASE_USERNAME: ${DATABASE_USERNAME}
      DATABASE_PASSWORD: ${DATABASE_PASSWORD}
      DATABASE_HOST: ${DATABASE_HOST}
      DATABASE_PORT: ${DATABASE_PORT}
    env_file:
      - .env
volumes:
  postgres_data:
And the example .env file:

DJANGO_SECRET_KEY=your_secret_key
DEBUG=True
DJANGO_LOGLEVEL=info
DJANGO_ALLOWED_HOSTS=localhost
DATABASE_ENGINE=postgresql_psycopg2
DATABASE_NAME=dockerdjango
DATABASE_USERNAME=dbuser
DATABASE_PASSWORD=dbpassword
DATABASE_HOST=db
DATABASE_PORT=5432
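If you need a value for DJANGO_SECRET_KEY, Django ships a helper you can call from any environment where Django is installed; copy its output into your .env file and keep that file out of version control:

python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"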
Step 4: Update Django settings and configuration files
1. Configure database settings.
Update settings.py to use PostgreSQL (make sure os is imported at the top of the file):

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.{}'.format(
            os.getenv('DATABASE_ENGINE', 'sqlite3')
        ),
        'NAME': os.getenv('DATABASE_NAME', 'polls'),
        'USER': os.getenv('DATABASE_USERNAME', 'myprojectuser'),
        'PASSWORD': os.getenv('DATABASE_PASSWORD', 'password'),
        'HOST': os.getenv('DATABASE_HOST', '127.0.0.1'),
        'PORT': os.getenv('DATABASE_PORT', 5432),
    }
}
2. Set ALLOWED_HOSTS to read from environment files.
In settings.py, set ALLOWED_HOSTS to:

# 'DJANGO_ALLOWED_HOSTS' should be a single string of hosts separated by commas.
# For example: 'DJANGO_ALLOWED_HOSTS=localhost,127.0.0.1,[::1]'
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "127.0.0.1").split(",")
3. Set the SECRET_KEY to read from environment files.
In settings.py, set SECRET_KEY to:

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY")
4. Set DEBUG to read from environment files.
In settings.py, set DEBUG to:

# SECURITY WARNING: don't run with debug turned on in production!
# Parse the flag explicitly; bool() on a non-empty string such as "False" would always be True.
DEBUG = os.environ.get("DEBUG", "False").lower() in ("true", "1", "yes")
Step 5: Build and run your new Django project
To build and start your containers, run:
docker compose up --build
This command will download any necessary Docker images, build the project, and start the containers. Once complete, your Django application should be accessible at http://localhost:8000.
Step 6: Test and access your application
Once the app is running, you can test it by navigating to http://localhost:8000. You should see Django’s welcome page, indicating that your app is up and running. To verify the database connection, try running a migration:
docker compose run django-web python manage.py migrate
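If you also want access to the Django admin, you can create a superuser with the standard management command, using the same service name as in the compose file above:

docker compose run django-web python manage.py createsuperuser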
Troubleshooting common issues with Docker and Django
Here are some common issues you might encounter and how to solve them:
- Database connection errors: If Django can’t connect to PostgreSQL, verify that your database service name matches in compose.yml and settings.py.
- File synchronization issues: Use the volumes directive in compose.yml to sync changes from your local files to the container (see the sketch after this list).
- Container restart loops or crashes: Use docker compose logs to inspect container errors and determine the cause of the crash.
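As a rough sketch of the file synchronization tip above, a development-only bind mount on the Django service could look like this (don’t ship this to production, since it overrides the code baked into the image):

services:
  django-web:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - .:/app   # mirror local code changes into the container (development only)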
Optimizing your Django web application
To improve your Django Docker setup, consider these optimization tips:
- Automate and secure builds: Use Docker’s multi-stage builds to create leaner images, removing unnecessary files and packages for a more secure and efficient build.
- Optimize database access: Configure database pooling and caching to reduce connection time and boost performance (see the sketch after this list).
- Efficient dependency management: Regularly update and audit dependencies listed in requirements.txt to ensure efficiency and security.
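As one illustration of the database tip above, Django supports persistent connections via CONN_MAX_AGE and a pluggable cache framework; the values below are placeholders to tune for your workload:

# settings.py (illustrative values)
DATABASES['default']['CONN_MAX_AGE'] = 60  # reuse database connections for up to 60 seconds

CACHES = {
    'default': {
        # Local-memory cache for a single container; swap in Redis or Memcached for production
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
    }
}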
Take the next step with Docker and Django
Containerizing your Django application with Docker is an effective way to simplify development, ensure consistency across environments, and streamline deployments. By following the steps outlined in this guide, you’ve learned how to set up a Dockerized Django app, optimize your Dockerfile for production, and configure Docker Compose for multi-container setups.
Docker not only helps reduce “it works on my machine” issues but also fosters better collaboration within development teams by standardizing environments. Whether you’re deploying a small project or scaling up for enterprise use, Docker equips you with the tools to build, test, and deploy reliably.
Ready to take the next step? Explore Docker’s powerful tools, like Docker Hub and Docker Scout, to enhance your containerized applications with scalable storage, governance, and continuous security insights.
Learn more
- Subscribe to the Docker Newsletter.
- Learn more about Docker commands, Docker Compose, and security in the Docker Docs.
- Find Dockerized Django projects for inspiration and guidance in GitHub.
- Discover Docker plugins that improve performance, logging, and security.
- Get the latest release of Docker Desktop.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
Mastering Peak Software Development Efficiency with Docker
In modern software development, businesses are searching for smarter ways to streamline workflows and deliver value faster. For developers, this means tackling challenges like collaboration and security head-on, while driving efficiency that contributes directly to business performance. But how do you address potential roadblocks before they become costly issues in production? The answer lies in optimizing the development inner loop — a core focus for the future of app development.
By identifying and resolving inefficiencies early in the development lifecycle, software development teams can overcome common engineering challenges such as slow dev cycles, spiraling infrastructure costs, and scaling challenges. With Docker’s integrated suite of development tools, developers can achieve new levels of engineering efficiency, creating high-quality software while delivering real business impact.
Let’s explore how Docker is transforming the development process, reducing operational overhead, and empowering teams to innovate faster.

Speed up software development lifecycles: Faster gains with less effort
A fast software development lifecycle is a crucial aspect for delivering value to users, maintaining a competitive edge, and staying ahead of industry trends. To enable this, software developers need workflows that minimize friction and allow them to iterate quickly without sacrificing quality. That’s where Docker makes a difference. By streamlining workflows, eliminating bottlenecks, and automating repetitive tasks, Docker empowers developers to focus on high-impact work that drives results.
Consistency across development environments is critical for improving speed. That’s why Docker helps developers create consistent environments across local, test, and production systems. In fact, a recent study reported developers experiencing a 6% increase in productivity when leveraging Docker Business. This consistency eliminates guesswork, ensuring developers can concentrate on writing code and improving features rather than troubleshooting issues. With Docker, applications behave predictably across every stage of the development lifecycle.
Docker also accelerates development by significantly reducing time spent on iteration and setup. More specifically, organizations leveraging Docker Business achieved a three-month faster time-to-market for revenue-generating applications. Engineering teams can move swiftly through development stages, delivering new features and bug fixes faster. By improving efficiency and adapting to evolving needs, Docker enables development teams to stay agile and respond effectively to business priorities.
Improve scaling agility: Flexibility for every scenario
Scalability is another essential for businesses to meet fluctuating demands and seize opportunities. Whether handling a surge in user traffic or optimizing resources during quieter periods, the ability to scale applications and infrastructure efficiently is a critical advantage. Docker makes this possible by enabling teams to adapt with speed and flexibility.
Docker’s cloud-native approach allows software engineering teams to scale up or down with ease to meet changing requirements. This flexibility supports experimentation with cutting-edge technologies like AI, machine learning, and microservices without disrupting existing workflows. With this added agility, developers can explore new possibilities while maintaining focus on delivering value.
Whether responding to market changes or exploring the potential of emerging tools, Docker equips companies to stay agile and keep evolving, ensuring their development processes are always ready to meet the moment.
Optimize resource efficiency: Get the most out of what you’ve got
Maximizing resource efficiency is crucial for reducing costs and maintaining agility. By making the most of existing infrastructure, businesses can avoid unnecessary expenses and minimize cloud scaling costs, meaning more resources for innovation and growth. Docker empowers teams to achieve this level of efficiency through its lightweight, containerized approach.
Docker containers are designed to be resource-efficient, enabling multiple applications to run in isolated environments on the same system. Unlike traditional virtual machines, containers minimize overhead while maintaining performance, consolidating workloads, and lowering the operational costs of maintaining separate environments. For example, a leading beauty company reduced infrastructure costs by 25% using Docker’s enhanced CPU and memory efficiency. This streamlined approach ensures businesses can scale intelligently while keeping infrastructure lean and effective.
By containerizing applications, businesses can optimize their infrastructure, avoiding costly upgrades while getting more value from their current systems. It’s a smarter, more efficient way to ensure your resources are working at their peak, leaving no capacity underutilized.
Establish cost-effective scaling: Growth without growing pains
Similarly, scaling efficiently is essential for businesses to keep up with growing demands, introduce new features, or adopt emerging technologies. However, traditional scaling methods often come with high upfront costs and complex infrastructure changes. Docker offers a smarter alternative, enabling development teams to scale environments quickly and cost-effectively.
With a containerized model, infrastructure can be dynamically adjusted to match changing needs. Containers are lightweight and portable, making it easy to scale up for spikes in demand or add new capabilities without overhauling existing systems. This flexibility reduces financial strain, allowing businesses to grow sustainably while maximizing the use of cloud resources.
Docker ensures that scaling is responsive and budget-friendly, empowering teams to focus on innovation and delivery rather than infrastructure costs. It’s a practical solution to achieve growth without unnecessary complexity or expense.
Software engineering efficiency at your fingertips
The developer community consistently ranks Docker highly, including choosing it as the most-used and most-admired developer tool in Stack Overflow’s Developer Survey. With Docker’s suite of products, teams can reach a new level of efficient software development by streamlining the dev lifecycle, optimizing resources, and providing agile, cost-effective scaling solutions. By simplifying complex processes in the development inner loop, Docker enables businesses to deliver high-quality software faster while keeping operational costs in check. This allows developers to focus on what they do best: building innovative, impactful applications.
By removing complexity, accelerating development cycles, and maximizing resource usage, Docker helps businesses stay competitive and efficient. And ultimately, their teams can achieve more in less time — meeting market demands with efficiency and quality.
Ready to supercharge your development team’s performance? Download our white paper to see how Docker can help streamline your workflow, improve productivity, and deliver software that stands out in the market.
Learn more
- Find a Docker plan that’s right for you.
- Subscribe to the Docker Newsletter.
- Get the latest release of Docker Desktop.
- New to Docker? Get started.
The Model Context Protocol: Simplifying Building AI apps with Anthropic Claude Desktop and Docker
Anthropic recently unveiled the Model Context Protocol (MCP), a new standard for connecting AI assistants and models to reliable data and tools. However, packaging and distributing MCP servers is very challenging due to complex environment setups across multiple architectures and operating systems. Docker is the perfect solution for this — it allows developers to encapsulate their development environment into containers, ensuring consistency across all team members’ machines and making deployments consistent and predictable. In this blog post, we provide a few examples of using Docker to containerize Model Context Protocol (MCP) servers to simplify building AI applications.

What is Model Context Protocol (MCP)?
MCP (Model Context Protocol), a new protocol open-sourced by Anthropic, provides standardized interfaces for LLM applications to integrate with external data sources and tools. With MCP, your AI-powered applications can retrieve data from external sources, perform operations with third-party services, or even interact with local filesystems.
Among the use cases enabled by this protocol is the ability to expose custom tools to AI models. This provides key capabilities such as:
- Tool discovery: Helping LLMs identify tools available for execution
- Tool invocation: Enabling precise execution with the right context and arguments
Since its release, the developer community has been particularly energized. We asked David Soria Parra, Member of Technical Staff from Anthropic, why he felt MCP was having such an impact: “Our initial developer focus means that we’re no longer bound to one specific tool set. We are giving developers the power to build for their particular workflow.”
How does MCP work? What challenges exist?
MCP works by introducing the concept of MCP clients and MCP Servers — clients request resources and the servers handle the request and perform the requested action. MCP Clients are often embedded into LLM-based applications, such as the Claude Desktop App. The MCP Servers are launched by the client to then perform the desired work using any additional tools, languages, or processes needed to perform the work.
Examples of tools include filesystem access, GitHub and GitLab repo management, integrations with Slack, or retrieving or modifying state in Kubernetes clusters.

The goal of MCP servers is to provide reusable toolsets that can be shared across clients like Claude Desktop: write one set of tools and reuse them across many LLM-based applications. But packaging and distributing these servers is currently a challenge. Specifically:
- Environment conflicts: Installing MCP servers often requires specific versions of Node.js, Python, and other dependencies, which may conflict with existing installations on a user’s machine
- Lack of host isolation: MCP servers currently run on the host, granting access to all host files and resources
- Complex setup: MCP servers currently require users to download all of the code and configure the environment themselves, making adoption difficult
- Cross-platform challenges: Running the servers consistently across different architectures (e.g., x86 vs. ARM, Windows vs Mac) or operating systems introduces additional complexity
- Dependencies: Ensuring that server-specific runtime dependencies are encapsulated and distributed safely.
How does Docker help?
Docker solves these challenges by providing a standardized method and tooling to develop, package, and distribute applications, including MCP servers. By packaging these MCP servers as containers, the challenges of isolation or environment differences disappear. Users can simply run a container, rather than spend time installing dependencies and configuring the runtime.
Docker Desktop provides a development platform to build, test, and run these MCP servers. Docker Hub is the world’s largest repository of container images, making it the ideal choice to distribute containerized MCP servers. Docker Scout helps ensure images are kept secure and free of vulnerabilities. Docker Build Cloud helps you build images more quickly and reliably, especially when cross-platform builds are required.
The Docker suite of products brings benefits to both publishers and consumers — publishers can easily package and distribute their servers and consumers can easily download and run them with little to no configuration.
Again quoting David Soria Parra,
“Building an MCP server for ffmpeg would be a tremendously difficult undertaking without Docker. Docker is one of the most widely used packaging solutions for developers. The same way it solved the packaging problem for the cloud, it now has the potential to solve the packaging problem for rich AI agents”.

As we continue to explore how MCP allows us to connect to existing ecosystems of tools, we also envision MCP bridges to existing containerized tools.

Try it yourself with containerized Reference Servers
As part of publishing the specification, Anthropic published an initial set of reference servers. We have worked with the Anthropic team to create Docker images for these servers and make them available from the new Docker Hub mcp namespace.
Developers can try this out today using Claude Desktop as the MCP client and Docker Desktop to run any of the reference servers by updating your claude_desktop_config.json file.
The list of current servers documents how to update the claude_desktop_config.json to activate these MCP server docker containers on your local host.
Using Puppeteer to take and modify screenshots using Docker
This demo will use the Puppeteer MCP server to take a screenshot of a website and invert the colors using Claude Desktop and Docker Desktop. Doing this without a containerized environment requires quite a bit of setup, but is fairly trivial using containers.
- Update your claude_desktop_config.json file to include the following configuration:
For example, extending Claude Desktop to use puppeteer for browser automation and web scraping requires the following entry (which is fully documented here):
{ "mcpServers": { "puppeteer": { "command": "docker", "args": ["run", "-i", "--rm", "--init", "-e", "DOCKER_CONTAINER=true", "mcp/puppeteer"] } } }
- Restart Claude Desktop to apply the changed config file.
- Submit the following prompt using the Sonnet 3.5 model: “Take a screenshot of docs.docker.com and then invert the colors”
- Claude will run through several consent screens, ensuring that you’re okay running these new tools.
- After a brief moment, you’ll have your requested screenshot.
What happened? Claude planned out a series of tool calls, starting the puppeteer MCP server in a container, and then used the headless browser in that container to navigate to a site, grab a screenshot, invert the colors on the page, and then finally grab a screenshot of the altered page.
Figure 4: Running Dockerized Puppeteer in Claude Desktop to invert colors on https://docs.docker.com/
Next steps
There’s already a lot that developers can try with this first set of servers. For an educational glimpse into what’s possible with database containers, we recommend that you connect the sqlite server container, and run the sample prompt that it provides. It’s an eye-opening display of what’s already possible today. Plus, the demo is containerized!
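As a sketch, a claude_desktop_config.json entry for the SQLite reference server could look like the following; the image name (mcp/sqlite), the volume name, and the database path are assumptions here, so check the server’s listing in the Docker Hub mcp namespace for the exact arguments:

{
  "mcpServers": {
    "sqlite": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-v", "mcp-test:/mcp", "mcp/sqlite", "--db-path", "/mcp/test.db"]
    }
  }
}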
We’re busy adding more content to enable you to easily build and distribute your own MCP docker images. We are also encouraging and working closely with the community to package more Docker containers. Please reach out with questions in the discussion group.
Learn more
- Read the Docker Desktop release collection to see even more updates and innovation announcements.
- Subscribe to the Docker Navigator newsletter.
- Subscribe to the Docker Labs: GenAI newsletter.
- Discover the upgraded Docker plans.
- See what’s new in Docker Desktop.
Accelerate Your Docker Builds Using AWS CodeBuild and Docker Build Cloud
Containerized application development has revolutionized modern software delivery, but slow image builds in CI/CD pipelines can bring developer productivity to a halt. Even with AWS CodeBuild automating application testing and building, teams face challenges like resource constraints, inefficient caching, and complex multi-architecture builds that lead to delays, lower release frequency, and prolonged recovery times.
Enter Docker Build Cloud, a high-performance cloud service designed to streamline image builds, integrate seamlessly with AWS CodeBuild, and reduce build times dramatically. With Docker Build Cloud, you gain powerful cloud-based builders, shared caching, and native multi-architecture support — all while keeping your CI/CD pipelines efficient and your developers focused on delivering value faster.
In this post, we’ll explore how AWS CodeBuild combined with Docker Build Cloud tackles common bottlenecks, boosts build performance, and simplifies workflows, enabling teams to ship more quickly and reliably.

By using AWS CodeBuild, you can automate the build and testing of container applications, enabling the construction of efficient CI/CD workflows. AWS CodeBuild is also integrated with AWS Identity and Access Management (IAM), allowing detailed configuration of access permissions for build processes and control over AWS resources.
Container images built with AWS CodeBuild can be stored in Amazon Elastic Container Registry (Amazon ECR) and deployed to various AWS services, such as Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, or AWS Lambda (Figure 1). Additionally, these services can leverage AWS Graviton, which adopts Arm-based architectures, to improve price performance for compute workloads.

Challenges of container image builds with AWS CodeBuild
Regardless of the tool used, building container images in a CI pipeline often takes a significant amount of time. This can lead to the following issues:
- Reduced development productivity
- Lower release frequency
- Longer recovery time in case of failures
The main reasons why build times can be extended include:
1. Machines for building
Building container images requires substantial resources (CPU, RAM). If the machine specifications used in the CI pipeline are inadequate, build times can increase.
For simple container image builds, the impact may be minimal, but in cases of multi-stage builds or builds with many dependencies, the effect can be significant.
AWS CodeBuild allows changing instance types to improve these situations. However, such changes can apply to parts of the pipeline beyond container image builds, and they also increase costs.
Developers need to balance cost and build speed to optimize the pipeline.
2. Container image cache
In local development environments, Docker’s build cache can shorten rebuild times significantly by reusing previously built layers, avoiding redundant processing for unchanged parts of the Dockerfile. However, in cloud-based CI services, clean environments are used by default, so cache cannot be utilized, resulting in longer build times.
Although there are ways to use storage or container registries to leverage caching, these often are not employed because they introduce complexity in configuration and overhead from uploading and downloading cache data.
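If you do want cache reuse in a hosted CI environment, BuildKit can export and import build cache through a container registry; the sketch below uses a placeholder registry reference and requires a builder that supports cache export (for example, one created with the docker-container driver):

docker buildx build \
  --cache-from type=registry,ref=registry.example.com/myapp:buildcache \
  --cache-to type=registry,ref=registry.example.com/myapp:buildcache,mode=max \
  --push -t registry.example.com/myapp:latest .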
3. Multi-architecture builds (AMD64, Arm64)
To use Arm-based architectures like AWS Graviton in Amazon EKS or Amazon ECS, Arm64-compatible container image builds are required.
With changes in local environments, such as Apple Silicon, cases requiring multi-architecture support for AMD64 and Arm64 have increased. However, building images for different architectures (for example, building x86 on Arm, or vice versa) often requires emulation, which can further increase build times (Figure 2).
Although AWS CodeBuild provides both AMD64 and Arm64 instances, running them as separate pipelines is necessary, leading to more complex configurations and operations.

Accelerating container image builds with Docker Build Cloud
The Docker Build Cloud service executes the Docker image build process in the cloud, significantly reducing build time and improving developer productivity (Figure 3).

Particularly in CI pipelines, Docker Build Cloud enables faster container image builds without the need for significant changes or migrations to existing pipelines.
Docker Build Cloud includes the following features:
- High-performance cloud builders: Cloud builders equipped with 16 vCPUs and 32GB RAM are available. This allows for faster builds compared to local environments or resource-constrained CI services.
- Shared cache utilization: Cloud builders come with 200 GiB of shared cache, significantly reducing build times for subsequent builds. This cache is available without additional configuration, and Docker Build Cloud handles the cache maintenance for you.
- Multi-architecture support (AMD64, Arm64): Docker Build Cloud supports native multi-architecture builds with a single command. By specifying --platform linux/amd64,linux/arm64 in the docker buildx build command or using Bake (see the sketch after this list), images for both Arm64 and AMD64 can be built simultaneously. This approach eliminates the need to split the pipeline for different architectures.
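For reference, a minimal Bake definition for such a multi-architecture build might look like this (docker-bake.hcl, with placeholder image names):

# docker-bake.hcl
target "app" {
  context    = "."
  dockerfile = "Dockerfile"
  platforms  = ["linux/amd64", "linux/arm64"]
  tags       = ["myorg/myapp:latest"]
}

Run it with docker buildx bake app against the cloud builder.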
Architecture of AWS CodeBuild + Docker Build Cloud
Figure 4 shows an example of how to use Docker Build Cloud to accelerate container image builds in AWS CodeBuild:

- The AWS CodeBuild pipeline is triggered from a commit to the source code repository (AWS CodeCommit, GitHub, GitLab).
- Preparations for running Docker Build Cloud are made in AWS CodeBuild (Buildx installation, specifying Docker Build Cloud builders).
- Container images are built on Docker Build Cloud’s AMD64 and Arm64 cloud builders.
- The built AMD64 and Arm64 container images are pushed to Amazon ECR.
Setting up Docker Build Cloud
First, set up Docker Build Cloud. (Note that new Docker subscriptions already include a free tier for Docker Build Cloud.)
Then, log in with your Docker account and visit the Docker Build Cloud Dashboard to create new cloud builders.
Once the builder is successfully created, a guide is displayed for using it in local environments (Docker Desktop, CLI) or CI/CD environments (Figure 5).

Additionally, to use Docker Build Cloud from AWS CodeBuild, a Docker personal access token (PAT) is required. Store this token in AWS Secrets Manager for secure access.
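For example, you could store both values in a single secret with the AWS CLI; the secret name used here (dbc-credentials) is just a placeholder and must match the SECRETS_NAME referenced in the buildspec below:

aws secretsmanager create-secret \
  --name dbc-credentials \
  --secret-string '{"DOCKER_USER":"<your-docker-username>","DOCKER_PAT":"<your-personal-access-token>"}'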
Setting up the AWS CodeBuild pipeline
Next, set up the AWS CodeBuild pipeline. You should prepare an Amazon ECR repository to store the container images beforehand.
The following settings are used to create the AWS CodeBuild pipeline:
- AMD64 instance with 3GB memory and 2 vCPUs.
- Service role with permissions to push to Amazon ECR and access the Docker personal access token from AWS Secrets Manager.
The buildspec.yml file is configured as follows:
version: 0.2
env:
variables:
ARCH: amd64
ECR_REGISTRY: [ECR Registry]
ECR_REPOSITORY: [ECR Repository]
DOCKER_ORG: [Docker Organization]
secrets-manager:
DOCKER_USER: ${SECRETS_NAME}:DOCKER_USER
DOCKER_PAT: ${SECRETS_NAME}:DOCKER_PAT
phases:
install:
commands:
# Installing Buildx
- BUILDX_URL=$(curl -s https://raw.githubusercontent.com/docker/actions-toolkit/main/.github/buildx-lab-releases.json | jq -r ".latest.assets[] | select(endswith(\"linux-$ARCH\"))")
- mkdir -vp ~/.docker/cli-plugins/
- curl --silent -L --output ~/.docker/cli-plugins/docker-buildx $BUILDX_URL
- chmod a+x ~/.docker/cli-plugins/docker-buildx
pre_build:
commands:
# Logging in to Amazon ECR
- aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
# Logging in to Docker (Build Cloud)
- echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
# Specifying the cloud builder
- docker buildx create --use --driver cloud $DOCKER_ORG/demo
build:
commands:
# Image tag
- IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | head -c 7)
# Build container image & push to Amazon ECR
- docker buildx build --platform linux/amd64,linux/arm64 --push --tag "${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}" .
In the install phase, Buildx, which is necessary for using Docker Build Cloud, is installed.
Although Buildx may already be installed in AWS CodeBuild, it might be an unsupported version for Docker Build Cloud. Therefore, it is recommended to install the latest version.
In the pre_build phase, the following steps are performed:
- Log in to Amazon ECR.
- Log in to Docker (Build Cloud).
- Specify the cloud builder.
In the build phase, the image tag is specified, and the container image is built and pushed to Amazon ECR.
Instead of separating the build and push commands, using --push to directly push the image to Amazon ECR helps avoid unnecessary file transfers, contributing to faster builds.
Results comparison
To make a comparison, an AWS CodeBuild pipeline without Docker Build Cloud is created. The same instance type (AMD64, 3GB memory, 2vCPU) is used, and the build is limited to AMD64 container images.
Additionally, Docker login is used to avoid the pull rate limit imposed by Docker Hub.
version: 0.2
env:
variables:
ECR_REGISTRY: [ECR Registry]
ECR_REPOSITORY: [ECR Repository]
secrets-manager:
DOCKER_USER: ${SECRETS_NAME}:DOCKER_USER
DOCKER_PAT: ${SECRETS_NAME}:DOCKER_PAT
phases:
pre_build:
commands:
# Logging in to Amazon ECR
- aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REGISTRY
# Logging in to Docker
- echo "$DOCKER_PAT" | docker login --username $DOCKER_USER --password-stdin
build:
commands:
# Image tag
- IMAGE_TAG=$(echo ${CODEBUILD_RESOLVED_SOURCE_VERSION} | head -c 7)
# Build container image & push to Amazon ECR
- docker build --push --tag "${ECR_REGISTRY}/${ECR_REPOSITORY}:${IMAGE_TAG}" .
Figure 6 shows the result of the execution:

Figure 7 shows the execution result of the AWS CodeBuild pipeline using Docker Build Cloud:

The results may vary depending on the container images being built and the state of the cache, but it was possible to build container images much faster and achieve multi-architecture builds (AMD64 and Arm64) within a single pipeline.
Conclusion
Integrating Docker Build Cloud into a CI/CD pipeline using AWS CodeBuild can dramatically reduce build times and improve release frequency. This allows developers to maximize productivity while delivering value to users more quickly.
As mentioned previously, the new Docker subscription already includes a free tier for Docker Build Cloud. Take advantage of this opportunity to test how much faster you can build container images for your current projects.
Learn more
- Subscribe to the Docker Newsletter.
- Get the latest release of Docker Desktop.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
Let’s Get Containerized: Simplifying Complexity for Modern Businesses
Did you know that enterprise companies that implemented Docker saw a 126% return on investment (ROI) over three years? In today’s rapidly evolving business landscape, companies face relentless pressure to innovate while managing costs and complexity. Traditional software development methods often struggle to keep pace with technological advancements, leading to inconsistent environments, high operational costs, and slow deployment cycles. That’s where containerization comes in as a smart solution.

Rising technology costs are a concern
Businesses today are navigating a complex environment filled with evolving market demands and economic pressures. A recent survey revealed that 70% of executives expect economic conditions to worsen, driving concerns about inflation and cash flow. Another survey found that 50% of businesses have raised prices to combat rising costs, reflecting broader financial pressures. In this context, traditional software deployment methods often fall short, resulting in rigid, inconsistent environments that impede agility and delay feature releases.
As cloud services costs surge, expected to surpass $1 trillion in 2024, businesses face heightened financial and operational challenges. Outdated deployment methods struggle with modern applications’ complexity, leading to persistent issues and inefficiencies. This underscores the need for a more agile, cost-effective solution.
As the adoption of cloud and hybrid cloud environments accelerates, businesses need solutions that ensure seamless integration and portability across their entire IT ecosystem. Containers provide a key to achieving this, offering unmatched agility, scalability, and security. By embracing containers, organizations can create more adaptable, resilient, and future-proof software solutions.
The solution is a container-first approach
Containerization simplifies the development and deployment of applications by encapsulating them into self-contained units known as containers. Each container includes everything an application needs to run — its code, libraries, and dependencies — ensuring consistent performance across different environments, from development to production.
Similar to how shipping containers transformed the packaging and transport industry, containerization revolutionized development. Using containers, development teams can reduce errors, optimize resources, accelerate time to market, and more.
Key benefits of containerization
- Improved consistency: Containers guarantee that applications perform identically regardless of where they are deployed, eliminating the notorious “it works on my machine” problem.
- Cost efficiency: Containers reduce infrastructure costs by optimizing resource utilization. Unlike traditional virtual machines that require separate operating systems, containers share the same operating system (OS) kernel, leading to significant savings and better scalability.
- Faster time to market: Containers accelerate development and deployment cycles, allowing businesses to bring products and updates to market more quickly.
- Enhanced security: Containers provide isolation between applications, which helps manage vulnerabilities and prevent breaches from spreading, thereby enhancing overall security.
Seeing a true impact
A Forrester Consulting study found that enterprises using Docker experienced a three-month faster time to market for revenue-generating applications, along with notable gains in efficiency and speed. These organizations reduced their data center footprint, enhanced application delivery speeds, and saved on infrastructure costs, showcasing containerization’s tangible benefits.
For instance, Cloudflare, a company operating one of the world’s largest cloud networks, needed to address the complexities of managing a growing infrastructure and supporting over 1,000 developers. By adopting Docker’s containerization technology and leveraging innovations like manifest lists, Cloudflare successfully streamlined its development and deployment processes. Docker’s support for multi-architecture images and continuous improvements, such as IPv6 networking capabilities, allowed Cloudflare to manage complex application stacks more efficiently, ensuring consistency across diverse environments and enhancing overall agility.
Stepping into a brighter future
Containerization offers a powerful solution to modern business challenges, providing consistency, cost savings, and enhanced security. As companies face increasing complexity and market pressures, adopting a container-first approach can streamline development, improve operational efficiency, and maintain a competitive edge.
Ready to explore how containerization can drive operational excellence for your business? Our white paper Unlocking the Container: Enhancing Operational Performance through Containerization provides an in-depth analysis and actionable insights on leveraging containers to enhance your software development and deployment processes. Need containerization? Chat with us or explore more resources.
—
Are you navigating the ever-evolving world of developer tools and container technology? The Docker Newsletter is your essential resource, curated for Docker users like you. Keep your finger on the pulse of the Docker ecosystem. Subscribe now!
How to Dockerize a React App: A Step-by-Step Guide for Developers
If you’re anything like me, you love crafting sleek and responsive user interfaces with React. But, setting up consistent development environments and ensuring smooth deployments can also get complicated. That’s where Docker can help save the day.
As a Senior DevOps Engineer and Docker Captain, I’ve navigated the seas of containerization and witnessed firsthand how Docker can revolutionize your workflow. In this guide, I’ll share how you can dockerize a React app to streamline your development process, eliminate those pesky “it works on my machine” problems, and impress your colleagues with seamless deployments.
Let’s dive into the world of Docker and React!

Why containerize your React application?
You might be wondering, “Why should I bother containerizing my React app?” Great question! Containerization offers several compelling benefits that can elevate your development and deployment game, such as:
- Streamlined CI/CD pipelines: By packaging your React app into a Docker container, you create a consistent environment from development to production. This consistency simplifies continuous integration and continuous deployment (CI/CD) pipelines, reducing the risk of environment-specific issues during builds and deployments.
- Simplified dependency management: Docker encapsulates all your app’s dependencies within the container. This means you won’t have to deal with the infamous “works on my machine” dilemma anymore. Every team member and deployment environment uses the same setup, ensuring smooth collaboration.
- Better resource management: Containers are lightweight and efficient. Unlike virtual machines, Docker containers share the host system’s kernel, which means you can run more containers on the same hardware. This efficiency is crucial when scaling applications or managing resources in a production environment.
- Isolated environment without conflict: Docker provides isolated environments for your applications. This isolation prevents conflicts between different projects’ dependencies or configurations on the same machine. You can run multiple applications, each with its own set of dependencies, without them stepping on each other’s toes.
Getting started with React and Docker
Before we go further, let’s make sure you have everything you need to start containerizing your React app.
Tools you’ll need
- Docker Desktop: Download and install it from the official Docker website.
- Node.js and npm: Grab them from the Node.js official site.
- React app: Use an existing project or create a new one using create-react-app.
A quick introduction to Docker
Docker offers a comprehensive suite of enterprise-ready tools, cloud services, trusted content, and a collaborative community that helps streamline workflows and maximize development efficiency. The Docker productivity platform allows developers to package applications into containers — standardized units that include everything the software needs to run. Containers ensure that your application runs the same, regardless of where it’s deployed.
How to dockerize your React project
Now let’s get down to business. We’ll go through the process step by step and, by the end, you’ll have your React app running inside a Docker container.
Step 1: Set up the React app
If you already have a React app, you can skip this step. If not, let’s create one:
npx create-react-app my-react-app
cd my-react-app

This command initializes a new React application in a directory called my-react-app.
Step 2: Create a Dockerfile
In the root directory of your project, create a file named Dockerfile (no extension). This file will contain instructions for building your Docker image.
Dockerfile for development
For development purposes, you can create a simple Dockerfile:
# Use the latest LTS version of Node.js
FROM node:18-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of your application files
COPY . .

# Expose the port your app runs on
EXPOSE 3000

# Define the command to run your app
CMD ["npm", "start"]
What’s happening here?
- FROM node:18-alpine: We’re using the latest LTS version of Node.js based on Alpine Linux.
- WORKDIR /app: Sets the working directory inside the container.
- COPY package*.json ./: Copies package.json and package-lock.json to the working directory.
- RUN npm install: Installs the dependencies specified in package.json.
- COPY . .: Copies all the files from your local directory into the container.
- EXPOSE 3000: Exposes port 3000 on the container (React’s default port).
- CMD ["npm", "start"]: Tells Docker to run npm start when the container launches.
Production Dockerfile with multi-stage build
For a production-ready image, we’ll use a multi-stage build to optimize the image size and enhance security.
# Build Stage
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Production Stage
FROM nginx:stable-alpine AS production
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Explanation
- Build stage:
  - FROM node:18-alpine AS build: Uses Node.js 18 for building the app.
  - RUN npm run build: Builds the optimized production files.
- Production stage:
  - FROM nginx:stable-alpine AS production: Uses Nginx to serve static files.
  - COPY --from=build /app/build /usr/share/nginx/html: Copies the build output from the previous stage.
  - EXPOSE 80: Exposes port 80.
  - CMD ["nginx", "-g", "daemon off;"]: Runs Nginx in the foreground.
Benefits
- Smaller image size: The final image contains only the production build and Nginx.
- Enhanced security: Excludes development dependencies and Node.js runtime from the production image.
- Performance optimization: Nginx efficiently serves static files.
Step 3: Create a .dockerignore file
Just like .gitignore helps Git ignore certain files, .dockerignore tells Docker which files or directories to exclude when building the image. Create a .dockerignore file in your project’s root directory:

node_modules
npm-debug.log
Dockerfile
.dockerignore
.git
.gitignore
.env
Excluding unnecessary files reduces the image size and speeds up the build process.
Step 4: Build and run your dockerized React app
Navigate to your project’s root directory and run:
docker build -t my-react-app .
This command tags the image with the name my-react-app and specifies the build context (current directory). By default, this will build the final production stage from your multi-stage Dockerfile, resulting in a smaller, optimized image.
If you have multiple stages in your Dockerfile and need to target a specific build stage (such as the build stage), you can use the --target option. For example:
docker build -t my-react-app-dev --target build .
Note: Building with --target build creates a larger image because it includes the build tools and dependencies needed to compile your React app. The production image (built with --target production), on the other hand, is much smaller because it only contains the final build files.
Running the Docker container
For the development image:
docker run -p 3000:3000 my-react-app-dev
For the production image:
docker run -p 80:80 my-react-app
Accessing your application
Next, open your browser and go to:
- http://localhost:3000 (for development)
- http://localhost (for production)
You should see your React app running inside a Docker container.
Step 5: Use Docker Compose for multi-container setups
Here’s an example of how a React frontend app can be configured as a service using Docker Compose.
Create a compose.yml file:
services:
web:
build: .
ports:
- "3000:3000"
volumes:
- .:/app
- ./node_modules:/app/node_modules
environment:
NODE_ENV: development
stdin_open: true
tty: true
command: npm start
Explanation
- services: Defines a list of services (containers).
- web: The name of our service.
- build: .: Builds the Dockerfile in the current directory.
- ports: Maps port 3000 on the container to port 3000 on the host.
- volumes: Mounts the current directory and node_modules for hot-reloading.
- environment: Sets environment variables.
- stdin_open and tty: Keep the container running and interactive.
Step 6: Publish your image to Docker Hub
Sharing your Docker image allows others to run your app without setting up the environment themselves.
Log in to Docker Hub:
docker login
Enter your Docker Hub username and password when prompted.
Tag your image:
docker tag my-react-app your-dockerhub-username/my-react-app
Replace your-dockerhub-username with your actual Docker Hub username.
Push the image:
docker push your-dockerhub-username/my-react-app
Your image is now available on Docker Hub for others to pull and run.
Pull and run the image:
docker pull your-dockerhub-username/my-react-app
docker run -p 80:80 your-dockerhub-username/my-react-app
Anyone can now run your app by pulling the image.
Handling environment variables securely
Managing environment variables securely is crucial to protect sensitive information like API keys and database credentials.
Using .env files
Create a .env file in your project root:
REACT_APP_API_URL=https://api.example.com
Update your compose.yml:
services:
web:
build: .
ports:
- "3000:3000"
volumes:
- .:/app
- ./node_modules:/app/node_modules
env_file:
- .env
stdin_open: true
tty: true
command: npm start
Security note: Ensure your .env file is added to .gitignore and .dockerignore to prevent it from being committed to version control or included in your Docker image.
To start all services defined in a compose.yml in detached mode, the command is:
docker compose up -d
Passing environment variables at runtime
Alternatively, you can pass variables when running the container:
docker run -p 3000:3000 -e REACT_APP_API_URL=https://api.example.com my-react-app-dev
Using Docker Secrets (advanced)
For sensitive data in a production environment, consider using Docker Secrets to manage confidential information securely.
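As a minimal sketch, Docker Compose can mount file-based secrets into a service at /run/secrets/<name>; the file path and secret name below are placeholders:

services:
  web:
    build: .
    secrets:
      - api_key
secrets:
  api_key:
    file: ./secrets/api_key.txt

Your application then reads the value from /run/secrets/api_key instead of an environment variable.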
Production Dockerfile with multi-stage builds
When preparing your React app for production, multi-stage builds keep things lean and focused. They let you separate the build process from the final runtime environment, so you only ship what you need to serve your app. This not only reduces image size but also helps prevent unnecessary packages or development dependencies from sneaking into production.
The following is an example that goes one step further: We’ll create a dedicated build stage, a development environment stage, and a production stage. This approach ensures you can develop comfortably while still ending up with a streamlined, production-ready image.
# Stage 1: Build the React app
FROM node:18-alpine AS build
WORKDIR /app
# Leverage caching by installing dependencies first
COPY package.json package-lock.json ./
RUN npm install --frozen-lockfile
# Copy the rest of the application code and build for production
COPY . ./
RUN npm run build
# Stage 2: Development environment
FROM node:18-alpine AS development
WORKDIR /app
# Install dependencies again for development
COPY package.json package-lock.json ./
RUN npm install --frozen-lockfile
# Copy the full source code
COPY . ./
# Expose port for the development server
EXPOSE 3000
CMD ["npm", "start"]
# Stage 3: Production environment
FROM nginx:alpine AS production
# Copy the production build artifacts from the build stage
COPY --from=build /app/build /usr/share/nginx/html
# Expose the default NGINX port
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
What’s happening here?
- Build stage: The first stage uses the official Node.js image to install dependencies, run the build, and produce an optimized, production-ready React build. By copying only your package.json and package-lock.json before running npm install, you leverage Docker’s layer caching, which speeds up rebuilds when your code changes but your dependencies don’t.
- Development stage: Need a local environment with hot-reloading for rapid iteration? This second stage sets up exactly that. It installs dependencies again (using the same caching trick) and starts the development server on port 3000, giving you the familiar npm start experience inside Docker.
- Production stage: Finally, the production stage uses a lightweight NGINX image to serve your static build artifacts. This stripped-down image doesn’t include Node.js or unnecessary development tools — just your optimized app and a robust web server. It keeps things clean, secure, and efficient.
This structured approach makes it a breeze to switch between development and production environments. You get fast feedback loops while coding, plus a slim, optimized final image ready for deployment. It’s a best-of-both-worlds solution that will streamline your React development workflow.
Troubleshooting common issues with Docker and React
Even with the best instructions, issues can arise. Here are common problems and how to fix them.
Issue: “Port 3000 is already in use”
Solution: Either stop the service using port 3000 or map your app to a different port when running the container.
docker run -p 4000:3000 my-react-app
Access your app at http://localhost:4000.
Issue: Changes aren’t reflected during development
Solution: Use Docker volumes to enable hot-reloading. In your compose.yml, ensure you have the following under volumes:

volumes:
  - .:/app
  - ./node_modules:/app/node_modules
This setup allows your local changes to be mirrored inside the container.
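If changes still aren’t detected (common with bind mounts on macOS and Windows), forcing polling-based file watching often helps; whether you need it depends on your setup, and it slightly increases CPU usage:

services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - ./node_modules:/app/node_modules
    environment:
      NODE_ENV: development
      CHOKIDAR_USEPOLLING: "true"   # force polling so the dev server notices changes in bind mounts
    command: npm start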
Issue: Slow build times
Solution: Optimize your Dockerfile to leverage caching. Copy only package.json and package-lock.json before running npm install. This way, Docker caches the layer unless these files change.

COPY package*.json ./
RUN npm install
COPY . .
Issue: Container exits immediately
Cause: The React development server may not keep the container running by default.
Solution: Ensure you’re running the container interactively:
docker run -it -p 3000:3000 my-react-app
Issue: File permission errors
Solution: Adjust file permissions or specify a user in the Dockerfile using the USER directive.

# Add before CMD
USER node
Issue: Performance problems on macOS and Windows
File-sharing mechanisms between the host system and Docker containers introduce significant overhead on macOS and Windows, especially when working with large repositories or projects containing many files. Traditional methods like osxfs and gRPC FUSE often struggle to scale efficiently in these environments.
Solutions:
Enable synchronized file shares (Docker Desktop 4.27+): Docker Desktop 4.27+ introduces synchronized file shares, which significantly enhance bind mount performance by creating a high-performance, bidirectional cache of host files within the Docker Desktop VM.
Key benefits:
- Optimized for large projects: Handles monorepos or repositories with thousands of files efficiently.
- Performance improvement: Resolves bottlenecks seen with older file-sharing mechanisms.
- Real-time synchronization: Automatically syncs filesystem changes between the host and container in near real-time.
- Reduced file ownership conflicts: Minimizes issues with file permissions between host and container.
How to enable:
- Open Docker Desktop and go to Settings > Resources > File Sharing.
- In the Synchronized File Shares section, select the folder to share and click Initialize File Share.
- Use bind mounts in your compose.yml or Docker CLI commands that point to the shared directory.

Optimize with .syncignore: Create a .syncignore file in the root of your shared directory to exclude unnecessary files (e.g., node_modules, .git/) for better performance.

Example .syncignore file:

node_modules
.git/
*.log

Example in compose.yml:

services:
  web:
    build: .
    volumes:
      - ./app:/app
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: development
Leverage WSL 2 on Windows: For Windows users, Docker’s WSL 2 backend offers near-native Linux performance by running the Docker engine in a lightweight Linux VM.
How to enable the WSL 2 backend (a command sketch follows these steps):
- Ensure Windows 10 version 2004 or higher is installed.
- Install the Windows Subsystem for Linux 2.
- In Docker Desktop, go to Settings > General and enable Use the WSL 2 based engine.
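A minimal command sketch for those steps, run from an elevated PowerShell prompt (assuming a recent Windows 10/11 build):

wsl --install                  # installs WSL 2 and a default Linux distribution
wsl --set-default-version 2    # make WSL 2 the default for new distributions
wsl -l -v                      # verify that your distribution reports VERSION 2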
Use updated caching options in volume mounts: Although legacy options like :cached and :delegated are deprecated, consistency modes still allow optimization:

- consistent: Strict consistency (default).
- cached: Allows the host to cache contents.
- delegated: Allows the container to cache contents.
Example volume configuration:
volumes:
  - type: bind
    source: ./app
    target: /app
    consistency: cached
Optimizing your React Docker setup
Let’s enhance our setup with some advanced techniques.
Reducing image size
Every megabyte counts, especially when deploying to cloud environments.
- Use smaller base images: Alpine-based images are significantly smaller.
- Clean up after installing dependencies:
RUN npm install && npm cache clean --force
- Avoid copying unnecessary files: Use .dockerignore effectively.
Leveraging Docker build cache
Ensure that you’re not invalidating the cache unnecessarily. Only copy files that are required for each build step.
Using Docker layers wisely
Each command in your Dockerfile creates a new layer. Combine commands where appropriate to reduce the number of layers.
RUN npm install && npm cache clean --force
Conclusion
Dockerizing your React app is a game-changer. It brings consistency, efficiency, and scalability to your development workflow. By containerizing your application, you eliminate environment discrepancies, streamline deployments, and make collaboration a breeze.
So, the next time you’re setting up a React project, give Docker a shot. It will make your life as a developer significantly easier. Welcome to the world of containerization!
Learn more
- Subscribe to the Docker Newsletter.
- Get the latest release of Docker Desktop.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
Beyond Containers: Unveiling the Full Potential of Docker for Cloud-Native Development
As organizations strive to stay competitive in an increasingly complex digital world, the pressure to innovate quickly and securely is at an all-time high. Development teams face challenges that range from complex workflows and growing security concerns to ensuring seamless collaboration across distributed environments. Addressing these challenges requires tools that optimize every stage of the CI/CD pipeline, from the developer’s inner loop to production.
This is where Docker comes in. Initially known for revolutionizing containerization, Docker has evolved far beyond its roots to become a powerful suite of products that supports cloud-native development workflows. It’s not just about containers anymore; it’s about empowering developers to build and ship high-quality applications faster and more efficiently. Docker is about automating repetitive tasks, securing applications throughout the entire development lifecycle, and enabling collaboration at scale. By providing the right tools for developers, DevOps teams, and enterprise decision-makers, Docker drives innovation, streamlines processes, and creates measurable value for businesses.

What does Docker do?
At its core, Docker provides a suite of software development tools that enhance productivity, improve security, and seamlessly integrate with your existing CI/CD pipeline. While still closely associated with containers, Docker has evolved into much more than just a containerization solution. Its products support the entire development lifecycle, empowering teams to automate key tasks, improve the consistency of their work, and ship applications faster and more securely.
Here’s how Docker’s suite of products benefits both individual developers and large-scale enterprises:
- Automation: Docker automates repetitive tasks within the development process, allowing developers to focus on what matters most: writing code. Whether they’re building images, managing dependencies, or testing applications, developers can use Docker to streamline their workflows and accelerate development cycles.
- Security: Security is built into Docker from the start. Docker provides features like proactive vulnerability monitoring with Docker Scout and robust access control mechanisms. These built-in security features help ensure your applications are secure, reducing risks from malicious actors, CVEs, or other vulnerabilities.
- CI/CD integration: Docker integrates seamlessly with existing CI/CD pipelines, helping teams move high-quality applications from local development through testing and into production.
- Multi-cloud compatibility: Docker supports flexible, multi-cloud development, allowing teams to build applications in one environment and migrate them to the cloud with minimized risk. This flexibility is key for businesses looking to scale, increase cloud adoption, and even upgrade from legacy apps.
The impact on team-based efficiency and enterprise value
Docker is designed not only to empower individual developers but also to elevate the entire team’s productivity while delivering tangible business value. By streamlining workflows, enhancing collaboration, and ensuring security, Docker makes it easier for teams to scale operations and deliver high-impact software with speed.
Streamlined development processes
One of Docker’s primary goals is to simplify development processes. Repetitive tasks such as environment setup, debugging, and dependency management have historically eaten up a lot of developers’ time. Docker removes these inefficiencies, allowing teams to focus on what really matters: building great software. Tools like Docker Desktop, Docker Hub, and Docker Build Cloud help accelerate build processes, while standardized environments ensure that developers spend less time dealing with system inconsistencies and more time coding.
Enterprise-level security and governance
For enterprise decision-makers, security and governance are top priorities. Docker addresses these concerns by providing comprehensive security features that span the entire development lifecycle. Docker Scout proactively monitors for vulnerabilities, ensuring that potential security threats are identified early, before they make their way into production. Additionally, Docker offers fine-grained control over who can access resources within the platform, with features like Image Access Management (IAM) and Resource Access Management (RAM) that ensure the security of developer environments without impairing productivity.
Measurable impact on business value
The value Docker delivers isn’t just in improved developer experience — it directly impacts the bottom line. By automating repetitive tasks in the developer’s inner loop and enhancing integration with the CI/CD pipeline, Docker reduces operational costs while accelerating the delivery of high-quality applications. Developers are able to move faster, iterate quickly, and deliver more reliable software, all of which contribute to lower operational expenses and higher developer satisfaction.
In fact, Docker’s ability to simplify workflows and secure applications means that developers can spend less time troubleshooting and more time building new features. For businesses, this translates to higher productivity and, ultimately, greater profitability.
Collaboration at scale: Empowering teams to work together more effectively
In modern development environments, teams are often distributed across different locations, sometimes even in different time zones. Docker enables effective collaboration at scale by providing standardized tools and environments that help teams work seamlessly together, regardless of where they are. Docker’s suite also helps ensure that teams are all on the same page when it comes to development, security, testing, and more.
Consistent environments for team workflows
One of Docker’s most powerful features is the ability to ensure consistency across different development environments. A Docker container encapsulates everything needed to run an application, including the code, libraries, and dependencies so that applications run the same way on every system. This means developers can work in a standardized environment, reducing the likelihood of errors caused by environment inconsistencies and making collaboration between team members smoother and more reliable.
Simplified CI/CD pipelines
Docker enhances the developer’s inner loop by automating workflows and providing consistent environments, creating efficiencies that ripple through the entire software delivery pipeline. This ripple effect of efficiency can be seen in features like advanced caching with Docker Build Cloud, on-demand and consistent test environments with Testcontainers Cloud, embedded security with Docker Scout, and more. These tools, combined with Docker’s standardized environments, allow developers to collaborate effectively to move from code to production faster and with fewer errors.
GenAI and innovative development
Docker equips developers to meet the demands of today while exploring future possibilities, including streamlining workflows for emerging AI/ML and GenAI applications. By simplifying the adoption of new tools for AI/ML development, Docker empowers organizations to meet present-day demands while also tapping into emerging technologies. These innovations help developers write better code faster while reducing the complexity of their workflows, allowing them to focus more on innovation.
A suite of tools for growth and innovation
Docker isn’t just a containerization tool — it’s a comprehensive suite of software development tools that empower cloud-native teams to streamline workflows, boost productivity, and deliver secure, scalable applications faster. Whether you’re an enterprise scaling workloads securely or a development team striving for speed and consistency, Docker’s integrated suite provides the tools to accelerate innovation while maintaining control.
Ready to unlock the full potential of Docker? Start by exploring our range of solutions and discover how Docker can transform your development processes today. If you’re looking for hands-on guidance, our experts are here to help — contact us to see how Docker can drive success for your team.
Take the next step toward building smarter, more efficient applications. Let’s scale, secure, and simplify your workflows together.
Learn more
- Find a Docker plan that’s right for you.
- Subscribe to the Docker Newsletter.
- Get the latest release of Docker Desktop.
- New to Docker? Get started.
Why Testcontainers Cloud Is a Game-Changer Compared to Docker-in-Docker for Testing Scenarios
Navigating the complex world of containerized testing environments can be challenging, especially when dealing with Docker-in-Docker (DinD). As a senior DevOps engineer and Docker Captain, I’ve seen firsthand the hurdles that teams face with DinD, and here I’ll share why Testcontainers Cloud is a transformative alternative that’s reshaping the way we handle container-based testing.

Understanding Docker-in-Docker
Docker-in-Docker allows you to run Docker within a Docker container. It’s like Inception for containers — a Docker daemon running inside a Docker container, capable of building and running other containers.
How Docker-in-Docker works
- Nested Docker daemons: In a typical Docker setup, the Docker daemon runs on the host machine, managing containers directly on the host’s operating system. With DinD, you start a Docker daemon inside a container. This inner Docker daemon operates independently, enabling the container to build and manage its own set of containers.
- Privileged mode and access to host resources: To run Docker inside a Docker container, the container needs elevated privileges. This is achieved by running the container in privileged mode using the --privileged flag:
docker run --privileged -d docker:dind
The --privileged flag grants the container almost all the capabilities of the host machine, including access to device files and the ability to perform system administration tasks. Although this setup enables the inner Docker daemon to function, it poses significant security risks, as it can potentially allow the container to affect the host system adversely.
- Filesystem considerations: The inner Docker daemon stores images and containers within the file system of the DinD container, typically under /var/lib/docker. Because Docker uses advanced file system features like copy-on-write layers, running an inner Docker daemon within a containerized file system (which may itself use such features) can lead to complex interactions and potential conflicts.
- Cgroups and namespace isolation: Docker relies on Linux kernel features like cgroups and namespaces for resource isolation and management. When running Docker inside a container, these features must be correctly configured to allow nesting. This process can introduce additional complexity in ensuring that resource limits and isolation behave as expected.
Why teams use Docker-in-Docker
- Isolated build environments: DinD allows each continuous integration (CI) job to run in a clean, isolated Docker environment, ensuring that builds and tests are not affected by residual state from previous jobs or other jobs running concurrently.
- Consistency across environments: By encapsulating the Docker daemon within a container, teams can replicate the same Docker environment across different stages of the development pipeline, from local development to CI/CD systems.
Challenges with DinD
Although DinD provides certain benefits, it also introduces significant challenges, such as:
- Security risks: Running containers in privileged mode can expose the host system to security vulnerabilities, as the container gains extensive access to host resources.
- Stability issues: Nested containers can lead to storage driver conflicts and other instability issues, causing unpredictable build failures.
- Complex debugging: Troubleshooting issues in a nested Docker environment can be complicated, as it involves multiple layers of abstraction and isolation.
Real-world challenges
Although Docker-in-Docker might sound appealing, it often introduces more problems than it solves. Before diving into those challenges, let’s briefly discuss Testcontainers and its role in modern testing practices.
What is Testcontainers?
Testcontainers is a popular open source library designed to support integration testing by providing lightweight, disposable instances of common databases, web browsers, or any service that can run in a Docker container. It allows developers to write tests that interact with real instances of external resources, rather than relying on mocks or stubs.
Key features of Testcontainers
- Realistic testing environment: By using actual services in containers, tests are more reliable and closer to real-world scenarios.
- Isolation: Each test session, or even each test, can run in a clean environment, reducing flakiness due to shared state.
- Easy cleanup: Containers are ephemeral and are automatically cleaned up after tests, preventing resource leaks.
Dependency on the Docker daemon
A core component of Testcontainers’ functionality lies in its interaction with the Docker daemon. Testcontainers orchestrates Docker resources by starting and stopping containers as needed for tests. This tight integration means that access to a Docker environment is essential wherever the tests are run.
The DinD challenge with Testcontainers in CI
When teams try to include Testcontainers-based integration testing in their CI/CD pipelines, they often face the challenge of providing Docker access within the CI environment. Because Testcontainers requires communication with the Docker daemon, many teams resort to using Docker-in-Docker to emulate a Docker environment inside the CI job.
However, this approach introduces significant challenges, especially when trying to scale Testcontainers usage across the organization.
Case study: The CI pipeline nightmare
We had a Jenkins CI pipeline that utilized Testcontainers for integration tests. To provide the necessary Docker environment, we implemented DinD. Initially, it seemed to work fine, but soon we encountered:
- Unstable builds: Random failures due to storage driver conflicts and issues with nested container layers. The nested Docker environment sometimes clashed with the host, causing unpredictable behavior.
- Security concerns: Running containers in privileged mode raised red flags during security audits. Because DinD requires privileged mode to function correctly, it posed significant security risks, potentially allowing containers to access the host system.
- Performance bottlenecks: Builds were slow, and resource consumption was high. The overhead of running Docker within Docker led to longer feedback loops, hindering developer productivity.
- Complex debugging: Troubleshooting nested containers became time-consuming. Logs and errors were difficult to trace through the multiple layers of containers, making issue resolution challenging.
We spent countless hours trying to patch these issues, but it felt like playing a game of whack-a-mole.
Why Testcontainers Cloud is a better choice
Testcontainers Cloud is a cloud-based service designed to simplify and enhance your container-based testing. By offloading container execution to the cloud, it provides a secure, scalable, and efficient environment for your integration tests.
How Testcontainers Cloud addresses DinD’s shortcomings
Enhanced security
- No more privileged mode: Eliminates the need for running containers in privileged mode, reducing the attack surface.
- Isolation: Tests run in isolated cloud environments, minimizing risks to the host system.
- Compliance-friendly: Easier to pass security audits without exposing the Docker socket or granting elevated permissions.
Improved performance
- Scalability: Leverage cloud resources to run tests faster and handle higher loads.
- Resource efficiency: Offloading execution frees up local and CI/CD resources.
Simplified configuration
- Plug-and-play integration: Minimal changes are required to switch from local Docker to Testcontainers Cloud.
- No nested complexity: Avoid the intricacies and pitfalls of nested Docker daemons.
Better observability and debugging
- Detailed logs: Access comprehensive logs through the Testcontainers Cloud dashboard.
- Real-time monitoring: Monitor containers and resources in real time with enhanced visibility.
Getting started with Testcontainers Cloud
Let’s dive into how you can get the most out of Testcontainers Cloud.
Switching to Testcontainers Cloud allows you to run tests without needing a local Docker daemon:
- No local Docker required: Testcontainers Cloud handles container execution in the cloud.
- Consistent environment: Ensures that your tests run in the same environment across different machines.
Additionally, you can easily integrate Testcontainers Cloud into your CI pipeline to run the same tests without scaling your CI infrastructure.
Using Testcontainers Cloud with GitHub Actions
Here’s how you can set up Testcontainers Cloud in your GitHub Actions workflow.
1. Create a new service account
- Log in to the Testcontainers Cloud dashboard.
- Navigate to Service Accounts and create a new service account dedicated to your CI environment.
- Generate an access token and copy it. Remember, you can only view it once, so store it securely.
2. Set the TC_CLOUD_TOKEN environment variable
In GitHub Actions:
- Go to your repository’s Settings > Secrets and variables > Actions.
- Add a new Repository Secret named TC_CLOUD_TOKEN and paste the access token.
3. Add Testcontainers Cloud to your workflow
Update your GitHub Actions workflow (.github/workflows/ci.yml) to include the Testcontainers Cloud setup.
Example workflow:
name: CI Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # ... other preparation steps (dependencies, compilation, etc.) ...

      - name: Set up Java
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'

      - name: Setup Testcontainers Cloud Client
        uses: atomicjar/testcontainers-cloud-setup-action@v1
        with:
          token: ${{ secrets.TC_CLOUD_TOKEN }}

      # ... steps to execute your tests ...

      - name: Run Tests
        run: ./mvnw test
Notes:
- The atomicjar/testcontainers-cloud-setup-action GitHub Action automates the installation and authentication of the Testcontainers Cloud Agent in your CI environment.
- Ensure that your TC_CLOUD_TOKEN is kept secure using GitHub’s encrypted secrets.
Clarifying the components: Testcontainers Cloud Agent and Testcontainers Cloud
To make everything clear:
- Testcontainers Cloud Agent (CLI in CI environments): In CI environments like GitHub Actions, you use the Testcontainers Cloud Agent (installed via the GitHub Action or command line) to connect your CI jobs to Testcontainers Cloud.
- Testcontainers Cloud: The cloud service that runs your containers, offloading execution from your CI environment.
In CI environments:
- Use the Testcontainers Cloud Agent (CLI) within your CI jobs.
- Authenticate using the TC_CLOUD_TOKEN.
- Tests executed in the CI environment will use Testcontainers Cloud.
Monitoring and debugging
Take advantage of the Testcontainers Cloud dashboard:
- Session logs: View logs for individual test sessions.
- Container details: Inspect container statuses and resource usage.
- Debugging: Access container logs and output for troubleshooting.
Why developers prefer Testcontainers Cloud over DinD
Real-world impact
After integrating Testcontainers Cloud, our team observed the following:
- Faster build times: Tests ran significantly faster due to optimized resource utilization.
- Reduced maintenance: Less time spent on debugging and fixing CI pipeline issues.
- Enhanced security: Eliminated the need for privileged mode, satisfying security audits.
- Better observability: Improved logging and monitoring capabilities.
Addressing common concerns
Security and compliance
- Data isolation: Each test runs in an isolated environment.
- Encrypted communication: Secure data transmission.
- Compliance: Meets industry-standard security practices.
Cost considerations
- Efficiency gains: Time saved on maintenance offsets the cost.
- Resource optimization: Reduces the need for expensive CI infrastructure.
Compatibility
- Multi-language support: Works with Java, Node.js, Python, Go, .NET, and more.
- Seamless integration: Minimal changes required to existing test code.
Conclusion
Switching to Testcontainers Cloud, with the help of the Testcontainers Cloud Agent, has been a game-changer for our team and many others in the industry. It addresses the key pain points associated with Docker-in-Docker and offers a secure, efficient, and developer-friendly alternative.
Key takeaways
- Security: Eliminates the need for privileged containers and Docker socket exposure.
- Performance: Accelerates test execution with scalable cloud resources.
- Simplicity: Simplifies configuration and reduces maintenance overhead.
- Observability: Enhances debugging with detailed logs and monitoring tools.
As someone who has navigated these challenges, I recommend trying Testcontainers Cloud. It’s time to move beyond the complexities of DinD and adopt a solution designed for modern development workflows.
Additional resources
- Testcontainers Cloud documentation
- Testcontainers library docs
- Best practices
- Community support
How to Install Docker on Windows Server
Containers have become a fundamental part of modern application development and deployment. Whether you’re looking to streamline development processes or run isolated applications efficiently, Docker containers offer the perfect solution. In this guide, we’ll explore how to run Docker containers on Windows Server 2022, an essential step for businesses and developers moving toward a […]
Model-Based Testing with Testcontainers and Jqwik
When testing complex systems, the more edge cases you can identify, the better your software performs in the real world. But how do you efficiently generate hundreds or thousands of meaningful tests that reveal hidden bugs? Enter model-based testing (MBT), a technique that automates test case generation by modeling your software’s expected behavior.
In this demo, we’ll explore the model-based testing technique to perform regression testing on a simple REST API.
We’ll use the jqwik test engine on JUnit 5 to run property and model-based tests. Additionally, we’ll use Testcontainers to spin up Docker containers with different versions of our application.

Model-based testing
Model-based testing is a method for testing stateful software by comparing the tested component with a model that represents the expected behavior of the system. Instead of manually writing test cases, we’ll use a testing tool that:
- Takes a list of possible actions supported by the application
- Automatically generates test sequences from these actions, targeting potential edge cases
- Executes these tests on the software and the model, comparing the results
In our case, the actions are simply the endpoints exposed by the application’s API. For the demo’s code examples, we’ll use a basic service with a CRUD REST API that allows us to:
- Find an employee by their unique employee number
- Update an employee’s name
- Get a list of all the employees from a department
- Register a new employee

Once everything is configured and we finally run the test, we can expect to see a rapid sequence of hundreds of requests being sent to the two stateful services:

Docker Compose
Let’s assume we need to switch the database from Postgres to MySQL and want to ensure the service’s behavior remains consistent. To test this, we can run both versions of the application, send identical requests to each, and compare the responses.
We can set up the environment using a Docker Compose file that will run two versions of the app:
- Model (mbt-demo:postgres): The current live version and our source of truth.
- Tested version (mbt-demo:mysql): The new feature branch under test.
services:
  ## MODEL
  app-model:
    image: mbt-demo:postgres
    # ...
    depends_on:
      - postgres
  postgres:
    image: postgres:16-alpine
    # ...

  ## TESTED
  app-tested:
    image: mbt-demo:mysql
    # ...
    depends_on:
      - mysql
  mysql:
    image: mysql:8.0
    # ...
Testcontainers
At this point, we could start the application and databases manually for testing, but this would be tedious. Instead, let’s use Testcontainers’ ComposeContainer to automate this with our Docker Compose file during the testing phase.
In this example, we’ll use jqwik as our JUnit 5 test runner. First, let’s add the jqwik, Testcontainers, and jqwik-testcontainers dependencies to our pom.xml:
<dependency>
    <groupId>net.jqwik</groupId>
    <artifactId>jqwik</artifactId>
    <version>1.9.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>net.jqwik</groupId>
    <artifactId>jqwik-testcontainers</artifactId>
    <version>0.5.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.20.1</version>
    <scope>test</scope>
</dependency>
As a result, we can now instantiate a ComposeContainer and pass our test docker-compose file as an argument:
@Testcontainers
class ModelBasedTest {

    @Container
    static ComposeContainer ENV = new ComposeContainer(new File("src/test/resources/docker-compose-test.yml"))
        .withExposedService("app-tested", 8080, Wait.forHttp("/api/employees").forStatusCode(200))
        .withExposedService("app-model", 8080, Wait.forHttp("/api/employees").forStatusCode(200));

    // tests
}
Test HTTP client
Now, let’s create a small test utility that will help us execute the HTTP requests against our services:
class TestHttpClient {

    ApiResponse<EmployeeDto> get(String employeeNo) { /* ... */ }

    ApiResponse<Void> put(String employeeNo, String newName) { /* ... */ }

    ApiResponse<List<EmployeeDto>> getByDepartment(String department) { /* ... */ }

    ApiResponse<EmployeeDto> post(String employeeNo, String name) { /* ... */ }

    record ApiResponse<T>(int statusCode, @Nullable T body) { }

    record EmployeeDto(String employeeNo, String name) { }
}
Additionally, in the test class, we can declare another method that helps us create TestHttpClients for the two services started by the ComposeContainer:
static TestHttpClient testClient(String service) {
    int port = ENV.getServicePort(service, 8080);
    String url = "http://localhost:%s/api/employees".formatted(port);
    return new TestHttpClient(service, url);
}
jqwik
Jqwik is a property-based testing framework for Java that integrates with JUnit 5, automatically generating test cases to validate properties of code across diverse inputs. By using generators to create varied and random test inputs, jqwik enhances test coverage and uncovers edge cases.
If you’re new to jqwik, you can explore their API in detail by reviewing the official user guide. While this tutorial won’t cover all the specifics of the API, it’s essential to know that jqwik allows us to define a set of actions we want to test.
To begin with, we’ll use jqwik’s @Property annotation — instead of the traditional @Test — to define a test:
@Property
void regressionTest() {
    TestHttpClient model = testClient("app-model");
    TestHttpClient tested = testClient("app-tested");
    // ...
}
Next, we’ll define the actions, which are the HTTP calls to our APIs and can also include assertions.
For instance, the GetOneEmployeeAction will try to fetch a specific employee from both services and compare the responses:
record ModelVsTested(TestHttpClient model, TestHttpClient tested) {}

record GetOneEmployeeAction(String empNo) implements Action<ModelVsTested> {

    @Override
    public ModelVsTested run(ModelVsTested apps) {
        ApiResponse<EmployeeDto> actual = apps.tested.get(empNo);
        ApiResponse<EmployeeDto> expected = apps.model.get(empNo);

        assertThat(actual)
            .satisfies(hasStatusCode(expected.statusCode()))
            .satisfies(hasBody(expected.body()));

        return apps;
    }
}
Additionally, we’ll need to wrap these actions within Arbitrary objects. We can think of Arbitraries as objects implementing the factory design pattern that can generate a wide variety of instances of a type, based on a set of configured rules.
For instance, the Arbitrary returned by employeeNos() can generate employee numbers by choosing a random department from the configured list and concatenating a number between 1 and 200:
static Arbitrary<String> employeeNos() {
    Arbitrary<String> departments = Arbitraries.of("Frontend", "Backend", "HR", "Creative", "DevOps");
    Arbitrary<Long> ids = Arbitraries.longs().between(1, 200);
    return Combinators.combine(departments, ids).as("%s-%s"::formatted);
}
Similarly, getOneEmployeeAction() returns an Arbitrary action based on a given Arbitrary employee number:
static Arbitrary<GetOneEmployeeAction> getOneEmployeeAction() {
    return employeeNos().map(GetOneEmployeeAction::new);
}
After declaring all the other Actions and Arbitraries, we’ll create an ActionSequence:
@Provide
Arbitrary<ActionSequence<ModelVsTested>> mbtJqwikActions() {
    return Arbitraries.sequences(
        Arbitraries.oneOf(
            MbtJqwikActions.getOneEmployeeAction(),
            MbtJqwikActions.getEmployeesByDepartmentAction(),
            MbtJqwikActions.createEmployeeAction(),
            MbtJqwikActions.updateEmployeeNameAction()
        ));
}

static Arbitrary<Action<ModelVsTested>> getOneEmployeeAction() { /* ... */ }
static Arbitrary<Action<ModelVsTested>> getEmployeesByDepartmentAction() { /* ... */ }
// same for the other actions
Now, we can write our test and leverage jqwik to use the provided actions to test various sequences. Let’s create the ModelVsTested tuple and use it to execute the sequence of actions against it:
@Property
void regressionTest(@ForAll("mbtJqwikActions") ActionSequence<ModelVsTested> actions) {
    ModelVsTested testVsModel = new ModelVsTested(
        testClient("app-model"),
        testClient("app-tested")
    );
    actions.run(testVsModel);
}
That’s it — we can finally run the test! The test will generate a sequence of thousands of requests trying to find inconsistencies between the model and the tested service:
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] PUT /api/employeesFrontend-129?name=v
INFO com.etr.demo.utils.TestHttpClient -- [app-model] PUT /api/employeesFrontend-129?name=v
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees/Frontend-129
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees/Frontend-129
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] POST /api/employees { name=sdxToS, empNo=Frontend-91 }
INFO com.etr.demo.utils.TestHttpClient -- [app-model] POST /api/employees { name=sdxToS, empNo=Frontend-91 }
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] PUT /api/employeesFrontend-4?name=PZbmodNLNwX
INFO com.etr.demo.utils.TestHttpClient -- [app-model] PUT /api/employeesFrontend-4?name=PZbmodNLNwX
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees/Frontend-4
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees/Frontend-4
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees?department=ٺ⯟桸
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees?department=ٺ⯟桸
...
Catching errors
If we run the test and check the logs, we’ll quickly spot a failure. It appears that when searching for employees by department with the argument ٺ⯟桸, the tested version produces an internal server error, while the model returns 200 OK:
Original Sample
---------------
actions:
ActionSequence[FAILED]: 8 actions run [
    UpdateEmployeeAction[empNo=Creative-13, newName=uRhplM],
    CreateEmployeeAction[empNo=Backend-184, name=aGAYQ],
    UpdateEmployeeAction[empNo=Backend-3, newName=aWCxzg],
    UpdateEmployeeAction[empNo=Frontend-93, newName=SrJTVwMvpy],
    UpdateEmployeeAction[empNo=Frontend-129, newName=v],
    CreateEmployeeAction[empNo=Frontend-91, name=sdxToS],
    UpdateEmployeeAction[empNo=Frontend-4, newName=PZbmodNLNwX],
    GetEmployeesByDepartmentAction[department=ٺ⯟桸]
]
final currentModel: ModelVsTested[model=com.etr.demo.utils.TestHttpClient@5dc0ff7d, tested=com.etr.demo.utils.TestHttpClient@64920dc2]
Multiple Failures (1 failure)
-- failure 1 --
expected: 200
but was: 500
Upon investigation, we find that the issue arises from a native SQL query using Postgres-specific syntax to retrieve data. While this was a simple issue in our small application, model-based testing can help uncover unexpected behavior that may only surface after a specific sequence of repetitive steps pushes the system into a particular state.
Wrap up
In this post, we provided hands-on examples of how model-based testing works in practice. From defining models to generating test cases, we’ve seen a powerful approach to improving test coverage and reducing manual effort. Now that you’ve seen the potential of model-based testing to enhance software quality, it’s time to dive deeper and tailor it to your own projects.
Clone the repository to experiment further, customize the models, and integrate this methodology into your testing strategy. Start building more resilient software today!
Thank you to Emanuel Trandafir for contributing this post.
Learn more
- Clone the model-based testing practice repo.
- Subscribe to the Docker Newsletter.
- Visit the Testcontainers website.
- Get started with Testcontainers Cloud by creating a free account.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
Testcontainers and Playwright

Leveraging Testcontainers for Complex Integration Testing in Mattermost Plugins
This post was contributed by Jesús Espino, Principal Engineer at Mattermost.
In the ever-evolving software development landscape, ensuring robust and reliable plugin integration is no small feat. For Mattermost, relying solely on mocks for plugin testing became a limitation, leading to brittle tests and overlooked integration issues. Enter Testcontainers, an open source tool that provides isolated Docker environments, making complex integration testing not only feasible but efficient.
In this blog post, we dive into how Mattermost has embraced Testcontainers to overhaul its testing strategy, achieving greater automation, improved accuracy, and seamless plugin integration with minimal overhead.

The previous approach
In the past, Mattermost relied heavily on mocks to test plugins. While this approach had its merits, it also had significant drawbacks. The tests were brittle, meaning they would often break when changes were made to the codebase. This made the tests challenging to develop and maintain, as developers had to constantly update the mocks to reflect the changes in the code.
Furthermore, the use of mocks meant that the integration aspect of testing was largely overlooked. The tests did not account for how the different components of the system interacted with each other, which could lead to unforeseen issues in the production environment.
The previous approach additionally did not allow for proper integration testing in an automated way. The lack of automation made the testing process time-consuming and prone to human error. These challenges necessitated a shift in Mattermost’s testing strategy, leading to the adoption of Testcontainers for complex integration testing.
Mattermost’s approach to integration testing
Testcontainers for Go
Mattermost uses Testcontainers for Go to create an isolated testing environment for our plugins. This testing environment includes the Mattermost server, the PostgreSQL server, and, in certain cases, an API mock server. The plugin is then installed on the Mattermost server, and through regular API calls or end-to-end testing frameworks like Playwright, we perform the required testing.
We have created a specialized Testcontainers module for the Mattermost server. This module uses PostgreSQL as a dependency, ensuring that the testing environment closely mirrors the production environment. Our module allows developers to easily install and configure any plugin they want in the Mattermost server.
To improve the system’s isolation, the Mattermost module includes a container for the server and a container for the PostgreSQL database, which are connected through an internal Docker network.
Additionally, the Mattermost module exposes utility functionality that allows direct access to the database, to the Mattermost API through the Go client, and some utility functions that enable admins to create users, channels, teams, and change the configuration, among other things. This functionality is invaluable for performing complex operations during testing, including API calls, users/teams/channel creation, configuration changes, or even SQL query execution.
This approach provides a powerful set of tools with which to set up our tests and prepare everything for verifying the behavior that we expect. Combined with the disposable nature of the test container instances, this makes the system easy to understand while remaining isolated.
This comprehensive approach to testing ensures that all aspects of the Mattermost server and its plugins are thoroughly tested, thereby increasing their reliability and functionality. But, let’s see a code example of the usage.
We can start setting up our Mattermost environment with a plugin like this:
pluginConfig := map[string]any{}
options := []mmcontainer.MattermostCustomizeRequestOption{
    mmcontainer.WithPlugin("sample.tar.gz", "sample", pluginConfig),
}
mattermost, err := mmcontainer.RunContainer(context.Background(), options...)
defer mattermost.Terminate(context.Background())
Once your Mattermost instance is initialized, you can create a test like this:
func TestSample(t *testing.T) {
    client, err := mattermost.GetClient()
    require.NoError(t, err)

    reqURL := client.URL + "/plugins/sample/sample-endpoint"
    resp, err := client.DoAPIRequest(context.Background(), http.MethodGet, reqURL, "", "")
    require.NoError(t, err, "cannot fetch url %s", reqURL)
    defer resp.Body.Close()

    bodyBytes, err := io.ReadAll(resp.Body)
    require.NoError(t, err)
    require.Equal(t, 200, resp.StatusCode)
    assert.Contains(t, string(bodyBytes), "sample-response")
}
Here, you can decide when to tear down your Mattermost instance and recreate it. Once per test? Once per set of tests? It is up to you and depends strictly on your needs and the nature of your tests.
Testcontainers for Node.js
In addition to using Testcontainers for Go, Mattermost leverages Testcontainers for Node.js to set up our testing environment. In case you’re unfamiliar, Testcontainers for Node.js is a Node.js library that provides similar functionality to Testcontainers for Go. Using Testcontainers for Node.js, we can set up our environment in the same way we did with Testcontainers for Go. This allows us to write Playwright tests using JavaScript and run them in the isolated Mattermost environment created by Testcontainers, enabling us to perform integration testing that interacts directly with the plugin user interface. The code is available on GitHub.
This approach provides the same advantages as Testcontainers for Go, and it allows us to use a more interface-based testing tool — like Playwright in this case. Let me show a bit of code with the Node.js and Playwright implementation:
We start and stop the containers for each test:
test.beforeAll(async () => {
    mattermost = await RunContainer()
})

test.afterAll(async () => {
    await mattermost.stop();
})
Then we can use our Mattermost instance like any other server running to run our Playwright tests:
test.describe('sample slash command', () => {
    test('try to run a sample slash command', async ({ page }) => {
        const url = mattermost.url()
        await login(page, url, "regularuser", "regularuser")
        await expect(page.getByLabel('town square public channel')).toBeVisible();
        await page.getByTestId('post_textbox').fill("/sample run")
        await page.getByTestId('SendMessageButton').click();
        await expect(page.getByText('Sample command result', { exact: true })).toBeVisible();
        await logout(page)
    });
});
With these two approaches, we can create integration tests covering the API and the interface without having to mock or use any other synthetic environment. Also, we can test things in absolute isolation because we consciously decide whether we want to reuse the Testcontainers instances. We can also reach a high degree of isolation and thereby avoid the flakiness induced by contaminated environments when doing end-to-end testing.
Examples of usage
Currently, we are using this approach for two plugins.
1. Mattermost AI Copilot
This integration helps users in their daily tasks using AI large language models (LLMs), providing things like thread and meeting summarization and context-based interrogation.
This plugin has a rich interface, so we used the Testcontainers for Node and Playwright approach to ensure we could properly test the system through the interface. Also, this plugin needs to call the AI LLM through an API. To avoid that resource-heavy task, we use an API mock, another container that simulates any API.
This approach gives us confidence not only in the server-side code but also in the interface, because we can ensure that we aren’t breaking anything during development.
2. Mattermost MS Teams plugin
This integration is designed to connect MS Teams and Mattermost in a seamless way, synchronizing messages between both platforms.
For this plugin, we mainly need to do API calls, so we used Testcontainers for Go and directly hit the API using a client written in Go. In this case, again, our plugin depends on a third-party service: the Microsoft Graph API from Microsoft. For that, we also use an API mock, enabling us to test the whole plugin without depending on the third-party service.
We still have some integration tests with the real Teams API using the same Testcontainers infrastructure to ensure that we are properly handling the Microsoft Graph calls.
Benefits of using Testcontainers libraries
Using Testcontainers for integration testing offers benefits, such as:
- Isolation: Each test runs in its own Docker container, which means that tests are completely isolated from each other. This approach prevents tests from interfering with one another and ensures that each test starts with a clean slate.
- Repeatability: Because the testing environment is set up automatically, the tests are highly repeatable. This means that developers can run the tests multiple times and get the same results, which increases the reliability of the tests.
- Ease of use: Testcontainers is easy to use, as it handles all the complexities of setting up and tearing down Docker containers. This allows developers to focus on writing tests rather than managing the testing environment.
Testing made easy with Testcontainers
Mattermost’s use of Testcontainers libraries for complex integration testing in their plugins is a testament to the power and versatility of Testcontainers.
By creating a well-isolated and repeatable testing environment, Mattermost ensures that its plugins are thoroughly tested and highly reliable.
Learn more
- Subscribe to the Docker Newsletter.
- Visit the Testcontainers website.
- Get started with Testcontainers Cloud by creating a free account.
- Vote on what’s next! Check out our public roadmap.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
Exploring Docker for DevOps: What It Is and How It Works
DevOps aims to dramatically improve the software development lifecycle by bringing together the formerly separated worlds of development and operations using principles that strive to make software creation more efficient. DevOps practices form a useful roadmap to help developers in every phase of the development lifecycle, from code planning to building, task automation, testing, monitoring, releasing, and deploying applications.
As DevOps use continues to expand, many developers and organizations find that the Docker containerization platform integrates well as a crucial component of DevOps practices. Using Docker, developers have the advantage of being able to collaborate in standardized environments using local containers and remote container tools where they can write their code, share their work, and collaborate.
In this blog post, we will explore the use of Docker within DevOps practices and explain how the combination can help developers create more efficient and powerful workflows.

What is DevOps?
DevOps practices are beneficial in the world of developers and code creation because they encourage smart planning, collaboration, and orderly processes and management throughout the software development pipeline. Without unified DevOps principles, code is typically created in individual silos that can hamper creativity, efficient management, speed, and quality.
Bringing software developers, operations teams, and processes together under DevOps principles can improve both developer and organizational efficiency through increased collaboration, agility, and innovation. DevOps brings these positive changes to organizations by constantly integrating user feedback regarding application features, shortcomings, and code glitches and — by making changes as needed on the fly — reducing operational and security risks in production code.
CI/CD
In addition to collaboration, DevOps principles are built around procedures for continuous integration (CI) and continuous delivery/deployment (CD) of code, shortening the cycle between development and production. This CI/CD approach lets teams more quickly adapt to feedback and thus build better applications from code conception all the way through to end-user experiences.
Using CI, developers can frequently and automatically integrate their changes into the source code as they create new code, while the CD side tests and delivers those vetted changes to the production environment. By integrating CI/CD practices, developers can create cleaner and safer code and resolve bugs ahead of production through automation, collaboration, and strong QA pipelines.
What is Docker?
The Docker containerization platform is a suite of tools, standards, and services that enable DevOps practices for application developers. Docker is used to develop, ship, and run applications within lightweight containers. This approach allows developers to separate their applications from their business infrastructure, giving them the power to deliver better code more quickly.
The Docker platform enables developers to package and run their application code in lightweight, local, standardized containers, which provide a loosely isolated environment that contains everything needed to run the application — including tools, packages, and libraries. By using Docker containers on a Docker client, developers can run an application without worrying about what is installed on the host, giving them huge flexibility, security, and collaborative advantages over virtual machines.
In this controlled environment, developers can use Docker to create, monitor, and push their applications into a test environment, run automated and manual tests as needed, correct bugs, and then validate the code before deploying it for use in production.
Docker also allows developers to run many containers simultaneously on a host, while allowing those same containers to be shared with others. Such a collaborative workspace can foster healthy and direct communications between developers, allowing development processes to become easier, more accurate, and more secure.
Containers vs. virtualization
Containers are an abstraction that packages application code and dependencies together. Instances of the container can then be created, started, stopped, moved, or deleted using the Docker API or command-line interface (CLI). Containers can be connected to one or more networks, be attached to storage, or create new images based on their current states.
Containers differ from virtual machines, which use a software abstraction layer on top of computer hardware, allowing the hardware to be shared more efficiently in multiple instances that will run individual applications. Docker containers require fewer physical hardware resources than virtual machines, and they also offer faster startup times and lower overhead. This makes Docker ideal for high-velocity environments, where rapid software development cycles and scalability are crucial.
Basic components of Docker
The basic components of Docker include:
- Docker images: Docker images are the blueprints for your containers. They are read-only templates that contain the instructions for creating a Docker container. You can think of a container image as a snapshot of a specific state of your application.
- Containers: Containers are the instances of Docker images. They are lightweight and portable, encapsulating your application along with its dependencies. Containers can be created, started, stopped, moved, and deleted using simple Docker commands.
- Dockerfiles: A Dockerfile is a text document containing a series of instructions on how to build a Docker image. It includes commands for specifying the base image, copying files, installing dependencies, and setting up the environment.
- Docker Engine: Docker Engine is the core component of Docker. It’s a client-server application that includes a server with a long-running daemon process, APIs for interacting with the daemon, and a CLI client.
- Docker Desktop: Docker Desktop is a commercial product sold and supported by Docker, Inc. It includes the Docker Engine and other open source components, proprietary components, and features like an intuitive GUI, synchronized file shares, access to cloud resources, debugging features, native host integration, governance, security features, and administrative settings management.
- Docker Hub: Docker Hub is a public registry where you can store and share Docker images. It serves as a central place to find official Docker images and user-contributed images. You can also use Docker Hub to automate your workflows by connecting it to your CI/CD pipelines.
Basic Docker commands
Docker commands are simple and intuitive. For example:
- docker run: Runs a Docker container from a specified image. For example, docker run hello-world will run a container from the “hello-world” image.
- docker build: Builds an image from a Dockerfile. For example, docker build -t my-app . will build an image named “my-app” from the Dockerfile in the current directory.
- docker pull: Pulls an image from Docker Hub. For example, docker pull nginx will download the latest NGINX image from Docker Hub.
- docker ps: Lists all running containers. For example, docker ps -a will list all containers, including stopped ones.
- docker stop: Stops a running Docker container. For example, docker stop <container_id> will stop the container with the specified ID.
- docker rm: Removes a stopped container. For example, docker rm <container_id> will remove the container with the specified ID.
How Docker is used in DevOps
One of Docker’s most important benefits for developers is its critical role in facilitating CI/CD in the application development process. This makes it easier and more seamless for developers to work together to create better code.
Docker is a build environment where developers can get predictable results building and testing their applications inside Docker containers and where it is easier to get consistent, reproducible results compared to other development environments. Developers can use Dockerfiles to define the exact requirements needed for their build environments, including programming runtimes, operating systems, binaries, and more.
Using Docker as a build environment also makes application maintenance easier. For example, you can update to a new version of a programming runtime by just changing a tag or digest in a Dockerfile. That is easier than the process required on a virtual machine to manually reinstall a newer version and update the related configuration files.
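As a hedged sketch of that idea (the runtime, tags, and commands below are illustrative assumptions, not taken from this article), a Dockerfile-defined build environment pins its runtime in a single FROM line:
# The build environment's runtime is pinned by tag (or digest).
# Upgrading later is a one-line change, e.g. node:18-alpine -> node:20-alpine,
# rather than reinstalling a runtime and updating configuration on a VM.
FROM node:18-alpine

WORKDIR /app

# Any OS packages or binaries the build needs
RUN apk add --no-cache git

# Project dependencies and build
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
Rebuilding the image with the new tag then gives every developer and CI job the same updated build environment.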
Automated testing is also easier using Docker Hub, which can automatically test changes to source code repositories using containers or push applications into a test environment and run automated and manual tests.
Docker can be integrated with DevOps tools including Jenkins, GitLab, Kubernetes, and others, simplifying DevOps processes by automating pipelines and scaling operations as needed.
Benefits of using Docker for DevOps
Because the Docker containers used for development are the same ones that are moved along for testing and production, the Docker platform provides consistency across environments and delivers big benefits to developer teams and operations managers. Each Docker container is isolated from others being run, eliminating conflicting dependencies. Developers are empowered to build, run, and test their code while collaborating with others and using all the resources available to them within the Docker platform environment.
Other benefits to developers include speed and agility, resource efficiency, error reduction, integrated version control, standardization, and the ability to write code once and run it on any system. Additionally, applications built on Docker can be delivered easily to customers in any computing environment, assuring a quick, easy, and consistent delivery and deployment process.
4 Common Docker challenges in DevOps
Implementing Docker in a DevOps environment can offer numerous benefits, but it also presents several challenges that teams must navigate:
1. Learning curve and skills gap
Docker introduces new concepts and technologies that require teams to acquire new skills. This can be a significant hurdle, especially if the team lacks experience with containerization. Docker’s robust documentation, guides, and international community can help new users ramp up quickly.
2. Security concerns
Ensuring the security of containerized applications involves addressing vulnerabilities in container images, managing secrets, and implementing network policies. Misconfigurations and running containers with root privileges can lead to security risks. Docker does, however, provide security guardrails for both administrators and developers.
The Docker Business subscription provides security and management at scale. For example, administrators can enforce sign-ins across Docker products for developers and efficiently manage, scale, and secure Docker Desktop instances using DevOps security controls like Enhanced Container Isolation and Registry Access Management.
Additionally, Docker offers security-focused tools, like Docker Scout, which helps administrators and developers secure the software supply chain by proactively monitoring image vulnerabilities and implementing remediation strategies. Introduced in 2024, Docker Scout health scores rate the security and compliance status of container images within Docker Hub, providing a single, quantifiable metric to represent the “health” of an image. This feature addresses one of the key friction points in developer-led software security — the lack of security expertise — and makes it easier for developers to turn critical insights from tools into actionable steps.
3. Microservice architectures
Containers and the ecosystem around them are specifically geared towards microservice architectures. You can run a monolith in a container, but you will not be able to leverage all of the benefits and paradigms of containers in that way. Instead, containers can be a useful gateway to microservices. Users can start pulling out individual pieces from a monolith into more containers over time.
4. Image management
Image management in Docker can also be a challenge for developers and teams as they search private registries and community repositories for images to use in building their applications. Docker Image Access Management can help with this challenge as it gives administrators control over which types of images — such as Docker Official Images, Docker Verified Publisher Images, or community images — their developers can pull for use from Docker Hub. Docker Hub tries to help by publishing only official images and verifying content from trusted partners.
Using Image Access Management controls helps prevent developers from accidentally using an untrusted, malicious community image as a component of their application. Note that Docker Image Access Management is available only to customers of the company’s top Docker Business services offering.
Another important tool here is Docker Scout. It is built to help organizations better protect their software supply chain security when using container images, which consist of layers and software packages that may be susceptible to security vulnerabilities. Docker Scout helps with this issue by proactively analyzing container images and compiling a Software Bill of Materials (SBOM), which is a detailed inventory of code included in an application or container. That SBOM is then matched against a continuously updated vulnerability database to pinpoint and correct security weaknesses to make the code more secure.
More information and help with using Docker can be found on the Docker Trainings page, which offers training webcasts and other resources to help developers and teams navigate their Docker environments and learn new skills to resolve technical questions.
Examples of DevOps using Docker
Improving DevOps workflows is a major goal for many enterprises as they struggle to improve operations and developer productivity and to produce cleaner, more secure, and better code.
The Warehouse Group
At The Warehouse Group, New Zealand’s largest retail store chain with some 300 stores, Docker was introduced in 2016 to revamp its systems and processes after previous VMware deployments resulted in long setup times, inconsistent environments, and slow deployment cycles.
“One of the key benefits we have seen from using Docker is that it enables a very flexible work environment,” said Matt Law, the chapter lead of DevOps for the company. “Developers can build and test applications locally on their own machines with consistency across environments, thanks to Docker’s containerization approach.”
Docker brought new autonomy to the company’s developers so they could test ideas and find new and better ways to solve bottlenecks, said Law. “That is a key philosophy that we have here — enabling developers to experiment with tooling to help them prove or disprove their philosophies or theories.”
Ataccama Corporation
Another Docker customer, Ataccama Corp., a Toronto-based data management software vendor, adopted Docker and DevOps practices when it moved to scale its business by moving from physical servers to cloud platforms like AWS and Azure to gain agility, scalability, and cost efficiencies using containerization.
For Ataccama, Docker delivered rapid deployment, simplified application management, and seamless portability between environments, which brought accelerated feature development, increased efficiency and performance, valuable microservices capabilities, and required security and high availability. To boost the value of Docker for its developers and IT managers, Ataccama provided container and DevOps skills training and promoted collaboration to make Docker an integral tool and platform for the company and its operations.
“What makes Docker a class apart is its support for open standards like Open Container Initiative (OCI) and its amazing flexibility,” said Vladimir Mikhalev, senior DevOps engineer at Ataccama. “It goes far beyond just running containers. With Docker, we can build, share, and manage containerized apps seamlessly across infrastructure in a way that most tools can’t match.”
The most impactful feature of Docker is its ability to bundle an app, configuration, and dependencies into a single standardized unit, said Mikhalev. “This level of encapsulation has been a game-changer for eliminating environment inconsistencies.”
Wrapping up
Docker provides a transformative impact for enterprises that have adopted DevOps practices. The Docker platform enables developers to create, collaborate, test, monitor, ship, and run applications within lightweight containers, giving them the power to deliver better code more quickly.
Docker simplifies and empowers development processes, enhancing productivity and improving the reliability of applications across different environments.
Find the right Docker subscription to bolster your DevOps workflow.
Learn more
- Read Docker for Web Developers: Getting Started with the Basics.
- Subscribe to the Docker Newsletter.
- Learn how your team can succeed with Docker.
- Find the perfect pricing for your team.
- Download the latest version of Docker Desktop.
- Visit Docker Resources to explore more materials.