Docker State of App Dev: AI

AI is changing software development — but not how you think

The hype is real, but so are the challenges. Here’s what developers, teams, and tech leaders need to know about AI’s uneven, evolving role in software.

Rumors of AI’s pervasiveness in software development have been greatly exaggerated. A look under the hood shows adoption is far from uniform. While some dev teams are embedding AI into daily workflows, others are still kicking the tires or sitting it out entirely. Real-world usage reveals a nuanced picture shaped by industry, role, and data readiness.


Here are six key insights into AI tools and development from Docker’s second annual State of Application Development Survey, based on responses from over 4,500 industry professionals.

1. How are people using AI?

Right off the bat, we saw a split between two classes of respondents: 

  • Those who use AI tools like ChatGPT and GitHub Copilot for everyday work-related tasks such as writing, documentation, and research 
  • Those who build applications with AI/ML functionality

2. IT leads the way in AI tool usage and app development 

Only about 1 in 5 respondents (22%) report using AI tools for work. But there’s a huge spread across industries — from 1% to 84%. Among the top AI users are IT/SaaS folks (76%). And because we surveyed more than three times as many professionals this year as last year, the snapshot covers a broader spectrum of industries beyond just those focused on IT.

Underscoring tech’s embrace of AI: 34% of IT/SaaS respondents say they develop AI/ML apps, compared to just 8% outside that bubble.

And strategy reflects this gulf. Only 16% of companies outside IT report having a real AI strategy. Within tech, the number soars to 73%. Translation: AI is gaining traction, but it’s concentrated in certain industries — at least for now.

3. AI tools are overhyped — and incredibly useful

Here’s the paradox: 64% of users say AI tools make work easier, yet almost as many (59%) think AI tools are overhyped. The hype may be loud, but utility is speaking louder, especially for those who’ve stuck with it. In fact, 65% of current users say they’re using AI more than they did a year ago, and that same percentage use it every day.

This tracks roughly with findings in our 2024 report, in which 61% of respondents agreed AI made their job easier, even as 45% reported feeling AI was overhyped. And 65% agreed that AI was, on balance, a positive development.

4. AI tool usage is up — and ChatGPT leads the pack

No surprises here. The most-used AI-powered tools are the same as in our 2024 survey — ChatGPT (especially among full-stack developers), GitHub Copilot, and Google Gemini. 

But usage this year far outstrips what users reported last year, with 80% selecting ChatGPT (versus 46% in our 2024 report), 53% Copilot (versus 30%), and 23% Gemini (versus 19%).

5. Developers don’t use AI the same way

The top overall use case is coding. Beyond that, it depends.

  • Seasoned devs turn to AI to write documentation and tests but use it sparingly. 
  • DevOps engineers use it for CLI help and writing docs.
  • Software devs tap AI to write tests and do research.

And not all devs lean on AI equally. Seasoned devs are the least reliant, most often rating themselves as not at all dependent (0/10), while DevOps engineers rate their dependence at 7/10. Software devs are somewhere in the middle, usually landing at a 5/10 on the dependence scale. For comparison, the overall average dependence on AI in our 2024 survey was about 4 out of 10 (all users).

Looking ahead, it will be interesting to see how dependence on AI shifts by role as the tools become more deeply integrated into workflows.

6. Data is the bottleneck no one talks about

The use of AI/ML in app development is a new and rapidly growing phenomenon that, not surprisingly, brings new pain points. For teams building AI/ML apps, one headache stands out: data prep. A full 24% of AI builders say they’re not confident in how to identify or prepare the right datasets.

Even with the right intent and tools, teams hit friction where it hurts productivity most — upfront.

Bottom line:
We’re in the early stages of the next tech revolution — complex, fast-evolving, and rife with challenges. Developers are meeting it head-on, quickly ramping up on new tools and architectures, and driving innovation at every layer of the stack. And Docker is right there with them, empowering innovation every step of the way.

Docker Multi-Stage Builds for Python Developers: A Complete Guide

As a Python developer, you’ve probably experienced the pain of slow Docker builds, bloated images filled with build tools, and the frustration of waiting 10+ minutes for a simple code change to rebuild. Docker multi-stage builds solve these problems elegantly, and they’re particularly powerful for Python applications. In this comprehensive guide, we’ll explore how to […]
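
As a preview of the pattern the guide covers, here’s a minimal sketch of a multi-stage Dockerfile for a pip-based Python app (the base image, file names, and entrypoint are illustrative):

# Build stage: install dependencies while build tools are available
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Runtime stage: copy only the installed packages; build tools stay behind
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
CMD ["python", "app.py"]

Only the final stage ships, so the image stays lean, and dependency installation is cached independently of code changes.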

Docker State of App Dev: Security

Security is a team sport: why everyone owns it now

Six security takeaways from Docker’s 2025 State of Application Development Report.

In the evolving world of software development, one thing is clear — security is no longer a siloed specialty. It’s a team sport, especially when vulnerabilities strike. That’s one of several key security findings in the 2025 Docker State of Application Development Survey.

Here’s what else we learned about security from our second annual report, which was based on an online survey of over 4,500 industry professionals.

1. Security isn’t someone else’s problem

Forget the myth that only “security people” handle security. Across orgs big and small, roles are blending. If you’re writing code, you’re in the security game. As one respondent put it, “We don’t have dedicated teams — we all do it.” According to the survey, just 1 in 5 organizations outsource security. And it’s top of mind at most others: only 1% of respondents say security is not a concern at their organization.

One exception to this trend: In larger organizations (50 or more employees), software security is more likely to be the exclusive domain of security engineers, with other types of engineers playing less of a role.

2. Everyone thinks they’re in charge of security

Team leads from multiple corners report that they’re the ones focused on security. Seasoned developers are as likely to zero in on it as are mid-career security engineers. And they’re both right. Security has become woven into every function — devs, leads, and ops alike.

3. When vulnerabilities hit, it’s all hands on deck

No turf wars here. When scan alerts go off, everyone pitches in — whether it’s security engineers helping experienced devs to decode scan results, engineering managers overseeing the incident, or DevOps engineers filling in where needed.

Fixing vulnerabilities is also a major time suck. Among security-related tasks that respondents routinely deal with, it was the most selected option across all roles. Worth noting: Last year’s State of Application Development Survey identified security/vulnerability remediation tools as a key area where better tools were needed in the development process.

4. Security isn’t the bottleneck — planning and execution are

Surprisingly, security doesn’t crack the top 10 issues holding teams back. Planning and execution-type activities are bigger sticking points. Translation? Security is better integrated into the workflow than many give it credit for. 

5. Shift-left is yesterday’s news

The once-pervasive mantra of “shift security left” is now only the 9th most important trend. Has the shift left already happened? Is AI and cloud complexity drowning it out? Or is this further evidence that security is, by necessity, shifting everywhere?

Again, perhaps security tools have gotten better, making it easier to shift left. (Our 2024 survey identified the shift-left approach as a possible source of frustration for developers and an area where more effective tools could make a difference.) Or perhaps there’s simply broader acceptance of the shift-left trend.

6. Shifting security left may not be the buzziest trend, but it’s still influential

The impact of shifting security left pales beside more dominant trends such as Generative AI and infrastructure as code. But it’s still a strong influence for developers in leadership roles. 

Bottom line: Security is no longer a roadblock; it’s a reflex. Teams aren’t asking, “Who owns security?” — they’re asking, “How can we all do it better?”

Dear APIs, Meet POSTMAN: Your New Best Friend

In today’s digital world, APIs (Application Programming Interfaces) are the backbone of modern web and mobile applications. They enable communication between different software systems, allowing data to be shared seamlessly. As APIs continue to grow in complexity and importance, so does the need for efficient tools to test, manage, and collaborate on API […]

Update on the Docker DX extension for VS Code

It’s now been a couple of weeks since we released the new Docker DX extension for Visual Studio Code. This launch reflects a deeper collaboration between Docker and Microsoft to better support developers building containerized applications.

Over the past few weeks, you may have noticed some changes to your Docker extension in VS Code. We want to take a moment to explain what’s happening—and where we’re headed next.

What’s Changing?

The original Docker extension in VS Code is being migrated to the new Container Tools extension, maintained by Microsoft. It’s designed to make it easier to build, manage, and deploy containers—streamlining the container development experience directly inside VS Code.

As part of this partnership, we decided to bundle the new Docker DX extension with the existing Docker extension so that it installs automatically, making the transition seamless.

While the automatic installation was intended to simplify the experience, we realize it may have caught some users off guard. To provide more clarity and choice, the next release will make the Docker DX extension an opt-in installation, giving you full control over when and how you want to use it.

What’s New from Docker?

Docker is introducing the new Docker DX extension, focused on delivering a best-in-class authoring experience for Dockerfiles, Compose files, and Bake files.

Key features include:

  • Dockerfile linting: Get build warnings and best-practice suggestions directly from BuildKit and Buildx—so you can catch issues early, right inside your editor.
  • Image vulnerability remediation (experimental): Automatically flag references to container images with known vulnerabilities, directly in your Dockerfiles.
  • Bake file support: Enjoy code completion, variable navigation, and inline suggestions when authoring Bake files—including the ability to generate targets based on your Dockerfile stages.
  • Compose file outline: Easily navigate and understand complex Compose files with a new outline view in the editor.
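
As a taste of the linting feature, here’s a hypothetical Dockerfile containing two issues that BuildKit’s published build checks are designed to flag:

# Mismatched 'as' casing triggers the FromAsCasing check
FROM alpine:3.19 as builder

# The deprecated MAINTAINER instruction triggers the MaintainerDeprecated check
MAINTAINER someone@example.com

With the extension installed, warnings like these appear inline as you type instead of surfacing later at build time.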

Better Together

These two extensions are designed to work side-by-side, giving you the best of both worlds:

  • Powerful tooling to build, manage, and deploy your containers
  • Smart, contextual authoring support for Dockerfiles, Compose files, and Bake files

And the best part? Both extensions are free and fully open source.

Thank You for Your Patience

We know changes like this can be disruptive. While our goal was to make the transition as seamless as possible, we recognize that the approach caused some confusion, and we sincerely apologize for the lack of early communication.

The teams at Docker and Microsoft are committed to delivering the best container development experience possible—and this is just the beginning.

Where Docker DX is Going Next

At Docker, we’re proud of the contributions we’ve made to the container ecosystem, including Dockerfiles, Compose, and Bake.

We’re committed to ensuring the best possible experience when editing these files in your IDE, with instant feedback while you work.

Here’s a glimpse of what’s coming:

  • Expanded Dockerfile checks: More best-practice validations, actionable tips, and guidance—surfaced right when you need them.
  • Stronger security insights: Deeper visibility into vulnerabilities across your Dockerfiles, Compose files, and Bake configurations.
  • Improved debugging and troubleshooting: Soon, you’ll be able to live debug Docker builds—step through your Dockerfile line-by-line, inspect the filesystem at each stage, see what’s cached, and troubleshoot issues faster.

We Want Your Feedback!

Your feedback is critical in helping us improve the Docker DX extension and your overall container development experience.

If you encounter any issues or have ideas for enhancements you’d like to see, please let us know.

We’re listening and excited to keep making things better for you! 

How to build and deliver an MCP server for production

In December of 2024, we published a blog post with Anthropic about their brand-new (at the time) spec for running tools with AI agents: the Model Context Protocol, or MCP. Since then, we’ve seen an explosion in developer appetite to build, share, and run their tools with Agentic AI – all using MCP. We’ve seen new MCP clients pop up, and big players like Google and OpenAI commit to this standard. Almost immediately, however, growing pains have created friction when it comes down to actually building and using MCP tools. At the moment, we’ve hit a major bump in the road.

MCP Pain Points

  • Runtime:
    • Getting up and running with MCP servers is a headache for devs. The standard runtimes for MCP servers rely on a specific version of Python or Node.js, and combining tools means managing those versions, on top of any extra dependencies an MCP server may require.
  • Security:
    • Giving an LLM direct access to run software on the host system is unacceptable to devs outside of hobbyist environments. In the event of hallucinations or incorrect output, significant damage could be done.
    • Users are asked to configure sensitive data in plaintext JSON files. An MCP config file contains all of the necessary data for your agent to act on your behalf, but it likewise centralizes everything a bad actor needs to exploit your accounts.
  • Discoverability:
    • The tools are out there, but there isn’t a single good place to find the best MCP servers. Marketplaces are beginning to crop up, but developers are still required to hunt down good sources of tools for themselves.
    • Later in the MCP user experience, it’s very easy to end up with enough servers and tools to overwhelm your LLM – leading to incorrect tools being used, and worse outcomes. When an LLM has the right tools for the job, it can execute more efficiently. When an LLM gets the wrong tools – or too many tools to choose from – hallucinations spike while evals plummet.
  • Trust:
    • When the tools are run by LLMs on behalf of the developer, it’s critical to trust the publisher of MCP servers. The current MCP publisher landscape looks like a gold rush, and is therefore vulnerable to supply-chain attacks from untrusted authors.

Docker as an MCP Runtime

Docker is a tried-and-true runtime that stabilizes the environment in which tools run. Instead of managing multiple Node or Python installations, anyone with the Docker Engine can run Dockerized MCP servers.

Docker provides sandboxed isolation for tools so that undesirable LLM behavior can’t damage the host configuration. The LLM has no access to the host filesystem, for example, unless a directory is explicitly bound into the MCP container.
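
As a rough sketch of what that looks like in practice (the image name here is hypothetical), the container sees only what you explicitly provide:

# Run a containerized MCP server over stdio; -i keeps stdin open for the protocol
docker run -i --rm mcp/example-server

# Opt in to a single project directory, read-only, if a tool genuinely needs files
docker run -i --rm -v "$PWD/project:/workspace:ro" mcp/example-server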

The MCP Gateway

For LLMs to work autonomously, they need to be able to discover and run tools for themselves. That’s nearly impossible with today’s sprawl of MCP servers: every time a new tool is added, the config file must be updated and the MCP client reloaded. The current workaround is to develop MCP servers that configure other MCP servers, but even this requires reloading. A much better approach is to simply use one MCP server: Docker. This MCP server acts as a gateway into a dynamic set of containerized tools.
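
In practice, the client configuration shrinks to a single entry. Here’s a sketch of what that could look like in a typical MCP client config file (the gateway image name is illustrative):

{
  "mcpServers": {
    "docker-gateway": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "docker/mcp-gateway"]
    }
  }
}

Tools added or removed behind the gateway become available without touching this file again. But how can tools be dynamic?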

The MCP Catalog 

A dynamic set of tools in one MCP server means that users can go somewhere to add or remove MCP tools without modifying any config. This is achieved through a simple UI in Docker Desktop that maintains the list of tools the MCP gateway can serve. Users can configure their MCP clients to use hundreds of Dockerized servers simply by “connecting” to the gateway MCP server.

Much like Docker Hub, the Docker MCP Catalog delivers a trusted, centralized hub for developers to discover tools. And for tool authors, that same hub becomes a critical distribution channel: a way to reach new users and ensure compatibility with platforms like Claude, Cursor, OpenAI, and VS Code.

Docker Secrets

Finally, to securely pass access tokens and other secrets into containers, we’ve developed a feature in Docker Desktop to manage secrets. When configured, secrets are exposed only to the MCP’s container process; the secret won’t appear even when inspecting the running container. Keeping secrets tightly scoped to the tools that need them means you no longer risk a breach from plaintext MCP config files left on disk.

Docker Desktop for Mac: QEMU Virtualization Option to be Deprecated in 90 Days

We are announcing the upcoming deprecation of QEMU as a virtualization option for Docker Desktop on Apple Silicon Macs. After serving as our legacy virtualization solution during the early transition to Apple Silicon, QEMU will be fully deprecated 90 days from today, on July 14, 2025. By moving to Apple Virtualization Framework or Docker VMM, you will ensure optimal performance.

Why We’re Making This Change

Our telemetry shows that a very small percentage of users are still using the QEMU option. We’ve maintained QEMU support for backward compatibility, but both Docker VMM and Apple Virtualization Framework now offer:

  • Significantly better performance
  • Improved stability
  • Enhanced compatibility with macOS updates
  • Better integration with Apple Silicon architecture

What This Means For You

If you’re currently using QEMU as your Virtual Machine Manager (VMM) on Docker Desktop for Mac:

  • Your current installation will continue to work normally during the 90-day transition period
  • After July 14, 2025, Docker Desktop releases will automatically migrate your environment to Apple Virtualization Framework
  • You’ll experience improved performance and stability with the newer virtualization options

Migration Plan

The migration process will be smooth and straightforward:

  1. Users on the latest Docker Desktop release will be automatically migrated to Apple Virtualization Framework after the 90-day period
  2. During the transition period, you can manually switch to either Docker VMM (our fastest option for Apple Silicon Macs) or Apple Virtualization Framework through Settings > General > Virtual Machine Options
  3. For 30 days after the deprecation date, the QEMU option will remain available in settings for users who encounter migration issues
  4. After this extended period, the QEMU option will be fully removed

Note: This deprecation does not affect QEMU’s role in emulating non-native architectures for multi-platform builds.

What You Should Do Now

We recommend proactively switching to one of our newer VMM options before the automatic migration:

  1. Update to the latest version of Docker Desktop for Mac
  2. Open Docker Desktop Settings > General
  3. Under “Choose Virtual Machine Manager (VMM)” select either:
    • Docker VMM (BETA) – Our fastest option for Apple Silicon Macs
    • Apple Virtualization Framework – A mature, high-performance alternative

Questions or Concerns?

If you have questions or encounter any issues during migration, please reach out.

We’re committed to making this transition as seamless as possible while delivering the best development experience on macOS.

New Docker Extension for Visual Studio Code

Today, we are excited to announce the release of a new, open-source Docker Language Server and Docker DX VS Code extension. In a joint collaboration between Docker and the Microsoft Container Tools team, this new integration enhances the existing Docker extension with improved Dockerfile linting, inline image vulnerability checks, Docker Bake file support, and outlines for Docker Compose files. By working directly with Microsoft, we’re ensuring a native, high-performance experience that complements the existing developer workflow. It’s the next evolution of Docker tooling in VS Code — built to help you move faster, catch issues earlier, and focus on what matters most: building great software.

What’s the Docker DX extension?

The Docker DX extension is focused on providing developers with faster feedback as they edit. Whether you’re authoring a complex Compose file or fine-tuning a Dockerfile, the extension surfaces relevant suggestions, validations, and warnings in real time. 

Key features include:

  • Dockerfile linting: Get build warnings and best-practice suggestions directly from BuildKit and Buildx.
  • Image vulnerability remediation (experimental): Flags references to container images with known vulnerabilities directly in Dockerfiles.
  • Bake file support: Includes code completion, variable navigation, and inline suggestions for generating targets based on your Dockerfile stages.
  • Compose file outline: Easily navigate complex Compose files with an outline view in the editor.

If you’re already using the Docker VS Code extension, the new features are included — just update the extension and start using them!

Dockerfile linting and vulnerability remediation

The inline Dockerfile linting provides warnings and best-practice guidance for writing Dockerfiles from the experts at Docker, powered by Build Checks. Potential vulnerabilities are highlighted directly in the editor with context about their severity and impact, powered by Docker Scout.

Figure 1: Providing actionable recommendations for fixing vulnerabilities and optimizing Dockerfiles

Early feedback directly in Dockerfiles keeps you focused and saves you and your team time debugging and remediating later.
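
The same feedback is available outside the editor, which is useful for CI. For example:

# Evaluate BuildKit’s build checks without running a full build
docker build --check .

# Ask Docker Scout about known CVEs in a base image
docker scout cves python:3.12-slim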

Docker Bake files

The Docker DX extension makes authoring and editing Docker Bake files quick and easy. It provides code completion, code navigation, and error reporting to make editing Bake files a breeze. The extension will also look at your Dockerfile and suggest Bake targets based on the build stages you have defined in your Dockerfile.

Figure 2: Editing Bake files is simple and intuitive with the rich language features that the Docker DX extension provides.

Figure 3: Creating new Bake files is straightforward as your Dockerfile’s build stages are analyzed and suggested as Bake targets.
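
If you haven’t written a Bake file before, a minimal hand-written docker-bake.hcl looks like this (the stage and tag names are illustrative):

# docker-bake.hcl
target "builder" {
  context    = "."
  dockerfile = "Dockerfile"
  target     = "builder"
}

target "app" {
  context    = "."
  dockerfile = "Dockerfile"
  target     = "runtime"
  tags       = ["myapp:dev"]
}

Running docker buildx bake app builds the runtime stage with the configured tags; the extension can suggest targets like these from your Dockerfile’s stages automatically.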

Compose outlines

Quickly navigate complex Compose files with the extension’s support for outlines available directly through VS Code’s command palette.

Figure 4: Navigate complex Compose files with the outline panel.

Don’t use VS Code? Try the Language Server!

The features offered by the Docker DX extension are powered by the brand-new Docker Language Server, built on the Language Server Protocol (LSP). This means the same smart editing experience — like real-time feedback, validation, and suggestions for Dockerfiles, Compose, and Bake files — is available in your favorite editor.

Wrapping up

Install the extension from Docker DX – Visual Studio Marketplace today! The functionality is also automatically installed with the existing Docker VS Code extension from Microsoft.

Share your feedback on how it’s working for you, and share what features you’d like to see next. If you’d like to learn more or contribute to the project, check out our GitHub repo.

Mastering MCP Debugging with CLI Tools and jq

As developers, we often rely on Model Context Protocol (MCP) to facilitate powerful AI-based workflows. Although MCP is primarily designed for AI assistants, being able to manually inspect and debug MCP servers from the command line is a lifesaver during development. This guide will walk you through setting up your environment, listing available tools, making […]

Revisiting Docker Hub Policies: Prioritizing Developer Experience

April 8, 2025: No changes to Docker Hub rate limits at this time

We did not enforce the Docker Hub rate limit changes previously scheduled for April 1, 2025. The current limits—100 pulls per 6 hours for unauthenticated users and 200 pulls per 6 hours for Docker Personal users—will remain in place as we continue to evaluate developer feedback and identify appropriate long-term limits.

We will announce any future enforcement at least 6 months in advance.

At Docker, we are committed to ensuring that Docker Hub remains the best place for developers, engineering teams, and operations teams to build, share, and collaborate. As part of this, we previously announced plans to introduce image pull consumption fees and storage-based billing. After further evaluating how developers use Docker Hub and what will best support the ecosystem, we have refined our approach—one that prioritizes developer experience and enables developers to scale with confidence while reinforcing Docker Hub as the foundation of the cloud-native ecosystem.

What’s Changing?

We’re making important updates to our previously announced pull limits and storage policies to ensure Docker Hub remains a valuable resource for developers:

  • No More Pull Count Limits or Consumption Charges – We’re cancelling pull consumption charges entirely. Our focus is on making Docker Hub the best place for developers to build, share, and collaborate—ensuring teams can scale with confidence.
  • Unlimited Pull Rates for Paid Users (As Announced Earlier) – Starting April 1, 2025, all paid Docker subscribers will have unlimited image pulls (with fair use limits) to ensure a seamless experience.
  • Storage Charges Delayed Indefinitely – Previously, we announced plans to introduce storage-based billing, but we have decided to indefinitely delay any storage charges. Instead, we are focusing on delivering new tools that will allow users to actively manage their storage usage. Once these tools are available, we will assess storage policies in the best interest of our users. If and when storage charges are introduced, we will provide a six-month notice, ensuring teams have ample time to adjust.

Why This Matters

  • The Best Place to Build and Share – Docker Hub remains the world’s leading container registry, trusted by over 20 million developers and organizations. We’re committed to keeping it the best place to distribute and consume software.
  • Growing the Ecosystem – We’re making these changes to support more developers, teams, and businesses as they scale, reinforcing Docker Hub as the foundation of the cloud-native world.
  • Investing in the Future – Our focus is on delivering more capabilities that help developers move faster, from better storage management to strengthening security to better protect the software supply chain.
  • Committed to Developers – Every decision we make is about strengthening the platform and enabling developers to build, share, and innovate without unnecessary barriers.

We appreciate your feedback, and we’re excited to keep evolving Docker Hub to meet the needs of developers and teams worldwide. Stay tuned for more updates, and as always—happy building! 

5 Revolutionary Cloud Tools Transforming Vocational Training

Traditional vocational training is evolving fast. Textbooks, rigid schedules, and costly equipment are giving way to dynamic, cloud-based platforms that bring hands-on learning to any device. From AI-driven welding simulators to cloud-powered automotive diagnostics, new tools are making skill development more accessible, cost-effective, and responsive to industry demands. These innovations aren’t just enhancing training – […]

Key Skills for Navigating the Future of Cloud Computing

Cloud computing continues to be a dominant force in the technological world. Many rely on it for purposes such as data storage, networking, and virtualization. As such, there is ongoing demand for professionals familiar with cloud computing technology. This guide will discuss the key skills that […]

Maximizing Docker Desktop: How Signing In Unlocks Advanced Features

Docker Desktop is more than just a local application for containerized development — it’s your gateway to an integrated suite of cloud-native tools that streamline the entire development workflow. While Docker Desktop can be used without signing in, doing so unlocks the full potential of Docker’s powerful, interconnected ecosystem. By signing in, you gain access to advanced features and services across Docker Hub, Build Cloud, Scout, and Testcontainers Cloud, enabling deeper collaboration, enhanced security insights, and scalable cloud resources. 

This blog post explores the full range of capabilities unlocked by signing in to Docker Desktop, connecting you to Docker’s integrated suite of cloud-native development tools. From enhanced security insights with Docker Scout to scalable build and testing resources through Docker Build Cloud and Testcontainers Cloud, signing in allows developers and administrators to fully leverage Docker’s unified platform.

Note that the following sections refer to specific Docker subscription plans. With Docker’s newly streamlined subscription plans — Docker Personal, Docker Pro, Docker Team, and Docker Business — developers and organizations can access a scalable suite of tools, from individual productivity boosters to enterprise-grade governance and security. Visit the Docker pricing page to learn more about how these plans support different team sizes and workflows. 

2400x1260 evergreen docker blog c

Benefits for developers when logged in

Docker Personal

  • Access to private repositories: Unlock secure collaboration through private repositories on Docker Hub, ensuring that your sensitive code and dependencies are managed securely across teams and projects.
  • Increased pull rate: Boost your productivity with an increased pull rate from Docker Hub (40 pulls/hour per user), ensuring smoother, uninterrupted development workflows without waiting on rate limits. The rate limit without authentication is 10 pulls/hour per IP.
  • Docker Scout CLI: Leverage Docker Scout to proactively secure your software supply chain with continuous security insights from code to production. By signing in, you gain access to powerful CLI commands that help prevent vulnerabilities before they reach production. 
  • Build Cloud and Testcontainers Cloud: Experience the full power of Docker Build Cloud and Testcontainers Cloud with free trials (7-day for Build Cloud, 30-day for Testcontainers Cloud). These trials give you access to scalable cloud infrastructure that speeds up image builds and enables more reliable integration testing.

Docker Pro/Team/Business 

For users with a paid Docker subscription, additional features are unlocked.

  • Unlimited pull rate: No Hub rate limit will be enforced for users with a paid subscription plan. 
  • Docker Scout base image recommendations: Docker Scout offers continuous recommendations for base image updates, empowering developers to secure their applications at the foundational level and fix vulnerabilities early in the development lifecycle.
Figure 1: Docker Scout showing recommendations.
  • Docker Debug: The docker debug CLI command helps you debug containers even when their images are stripped down to the minimum needed to run your application, with no shell or debugging tools inside (see the example after this list).
Figure 2: Docker debug CLI.

Docker Debug functionalities have also been integrated into the container view of the Docker Desktop UI.

Figure 3: Debug functionalities integrated into the container view of Docker Desktop.
  • Synchronized file shares: Host to Docker Desktop VM file sharing via bind mounts can be quite slow for large codebases. Speed up your development cycle with synchronized file shares, allowing you to sync large codebases into containers quickly and efficiently without performance bottlenecks—helping developers iterate faster on critical projects.
Figure 4: Synchronized file shares.
  • Additional free minutes for Docker Build Cloud: Docker Build Cloud helps developer teams speed up image builds by offloading the build process to the cloud. The following benefits are available depending on the subscription plan:
    • Docker Pro: 200 mins/month per org
    • Docker Team: 500 mins/month per org
    • Docker Business: 1,500 mins/month per org
  • Additional free minutes for Testcontainers Cloud: Testcontainers Cloud simplifies the process for developers to run reliable integration tests using real dependencies defined in code, whether on their laptops or within their team’s CI pipeline. Depending on the subscription plan, the following benefits are available for users:
    • Docker Pro: 100 mins/month per org
    • Docker Team: 500 mins/month per org
    • Docker Business: 1,500 mins/month per org
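
As a quick example of the Docker Debug workflow mentioned above (the container name is hypothetical):

# Attach a shell with common debugging tools to a running container,
# even if its image ships without a shell
docker debug my-running-container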

Benefits for administrators when your users are logged in

Docker Business

Security and governance

The Docker Business plan offers enterprise-grade security and governance controls, which are only applicable if users are signed in. As of Docker Desktop 4.35.0, these features include:

License management

Tracking usage for licensing purposes can be challenging for administrators due to Docker Desktop not requiring authentication by default. By ensuring all users are signed in, administrators can use Docker Hub’s organization members list to manage licenses effectively.

This can be coupled with Docker Business’s Single Sign-On and SCIM capabilities to ease this process further. 

Insights

Administrators and other stakeholders (such as engineering managers) need a comprehensive understanding of Docker Desktop usage within their organization. With developers signed into Docker Desktop, admins gain actionable insights into usage, from feature adoption to image usage trends and login activity, helping them optimize team performance and security. A dashboard offering these insights is now available to simplify monitoring. Contact your account rep to enable the dashboard.

Figure 5: Desktop Insights view when users log in to your organization.

Enforce sign-in for Docker Desktop

Docker Desktop includes a feature that allows administrators to require authentication at start-up. Admins can ensure that all developers sign in to access Docker Desktop, enabling full integration with Docker’s security and productivity features. Sign-in enforcement helps maintain continuous compliance with governance policies across the organization.

Figure 6: Prompting sign in.

Developers can then click on the sign-in button, which takes them through the authentication flow. 

More information on how to enforce sign-in can be found in the documentation.
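
As a sketch of how enforcement works: admins place a registry.json file on each developer machine listing the organizations whose members may use Docker Desktop (see the documentation for the exact per-OS file location):

{
  "allowedOrgs": ["your-org-name"]
}

On startup, Docker Desktop then requires users to sign in to one of the listed organizations before they can proceed.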

Unlock the full potential of Docker’s integrated suite

Signing into Docker Desktop unlocks significant benefits for both developers and administrators, enabling teams to fully leverage Docker’s integrated, cloud-native suite. Whether improving productivity, securing the software supply chain, or enforcing governance policies, signing in maximizes the value of Docker’s unified platform — especially for organizations using Docker’s paid subscription plans.

Note that new features are introduced with each new release, so keep an eye on our blog and subscribe to the Docker Newsletter for the latest product and feature updates.

Model-Based Testing with Testcontainers and Jqwik

When testing complex systems, the more edge cases you can identify, the better your software performs in the real world. But how do you efficiently generate hundreds or thousands of meaningful tests that reveal hidden bugs? Enter model-based testing (MBT), a technique that automates test case generation by modeling your software’s expected behavior.

In this demo, we’ll explore the model-based testing technique to perform regression testing on a simple REST API.

We’ll use the jqwik test engine on JUnit 5 to run property and model-based tests. Additionally, we’ll use Testcontainers to spin up Docker containers with different versions of our application.

Model-based testing

Model-based testing is a method for testing stateful software by comparing the tested component with a model that represents the expected behavior of the system. Instead of manually writing test cases, we’ll use a testing tool that:

  • Takes a list of possible actions supported by the application
  • Automatically generates test sequences from these actions, targeting potential edge cases
  • Executes these tests on the software and the model, comparing the results

In our case, the actions are simply the endpoints exposed by the application’s API. For the demo’s code examples, we’ll use a basic service with a CRUD REST API that allows us to:

  • Find an employee by their unique employee number
  • Update an employee’s name
  • Get a list of all the employees from a department
  • Register a new employee
Figure 1: Finding an employee, updating their name, finding their department, and registering a new employee.

Once everything is configured and we finally run the test, we can expect to see a rapid sequence of hundreds of requests being sent to the two stateful services:

Figure 2: New requests being sent to the two stateful services.

Docker Compose

Let’s assume we need to switch the database from Postgres to MySQL and want to ensure the service’s behavior remains consistent. To test this, we can run both versions of the application, send identical requests to each, and compare the responses.

We can set up the environment using a Docker Compose file that runs two versions of the app:

  • Model (mbt-demo:postgres): The current live version and our source of truth.
  • Tested version (mbt-demo:mysql): The new feature branch under test.
services:
  ## MODEL
  app-model:
      image: mbt-demo:postgres
      # ...
      depends_on:
          - postgres
  postgres:
      image: postgres:16-alpine
      # ...
      
  ## TESTED
  app-tested:
    image: mbt-demo:mysql
    # ...
    depends_on:
      - mysql
  mysql:
    image: mysql:8.0
    # ...

Testcontainers

At this point, we could start the application and databases manually for testing, but this would be tedious. Instead, let’s use Testcontainers’ ComposeContainer to automate this with our Docker Compose file during the testing phase.

In this example, we’ll use jqwik as our JUnit 5 test runner. First, let’s add the jqwik, jqwik-testcontainers, and testcontainers dependencies to our pom.xml:

<dependency>
    <groupId>net.jqwik</groupId>
    <artifactId>jqwik</artifactId>
    <version>1.9.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>net.jqwik</groupId>
    <artifactId>jqwik-testcontainers</artifactId>
    <version>0.5.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.testcontainers</groupId>
    <artifactId>testcontainers</artifactId>
    <version>1.20.1</version>
    <scope>test</scope>
</dependency>

As a result, we can now instantiate a ComposeContainer and pass our test docker-compose file as an argument:

@Testcontainers
class ModelBasedTest {

    @Container
    static ComposeContainer ENV = new ComposeContainer(new File("src/test/resources/docker-compose-test.yml"))
       .withExposedService("app-tested", 8080, Wait.forHttp("/api/employees").forStatusCode(200))
       .withExposedService("app-model", 8080, Wait.forHttp("/api/employees").forStatusCode(200));

    // tests
}

Test HTTP client

Now, let’s create a small test utility that will help us execute the HTTP requests against our services:

class TestHttpClient {
  ApiResponse<EmployeeDto> get(String employeeNo) { /* ... */ }
  
  ApiResponse<Void> put(String employeeNo, String newName) { /* ... */ }
  
  ApiResponse<List<EmployeeDto>> getByDepartment(String department) { /* ... */ }
  
  ApiResponse<EmployeeDto> post(String employeeNo, String name) { /* ... */ }

    
  record ApiResponse<T>(int statusCode, @Nullable T body) { }
    
  record EmployeeDto(String employeeNo, String name) { }
}

Additionally, in the test class, we can declare another method that helps us create TestHttpClients for the two services started by the ComposeContainer:

static TestHttpClient testClient(String service) {
  int port = ENV.getServicePort(service, 8080);
  String url = "http://localhost:%s/api/employees".formatted(port);
  return new TestHttpClient(service, url);
}

jqwik

Jqwik is a property-based testing framework for Java that integrates with JUnit 5, automatically generating test cases to validate properties of code across diverse inputs. By using generators to create varied and random test inputs, jqwik enhances test coverage and uncovers edge cases.

If you’re new to jqwik, you can explore their API in detail by reviewing the official user guide. While this tutorial won’t cover all the specifics of the API, it’s essential to know that jqwik allows us to define a set of actions we want to test.

To begin with, we’ll use jqwik’s @Property annotation — instead of the traditional @Test — to define a test:

@Property
void regressionTest() {
  TestHttpClient model = testClient("app-model");
  TestHttpClient tested = testClient("app-tested");
  // ...
}

Next, we’ll define the actions, which are the HTTP calls to our APIs and can also include assertions.

For instance, the GetOneEmployeeAction will try to fetch a specific employee from both services and compare the responses:

record ModelVsTested(TestHttpClient model, TestHttpClient tested) {}

record GetOneEmployeeAction(String empNo) implements Action<ModelVsTested> {
  @Override
  public ModelVsTested run(ModelVsTested apps) {
    ApiResponse<EmployeeDto> actual = apps.tested.get(empNo);
    ApiResponse<EmployeeDto> expected = apps.model.get(empNo);

    assertThat(actual)
      .satisfies(hasStatusCode(expected.statusCode()))
      .satisfies(hasBody(expected.body()));
    return apps;
  }
}

Additionally, we’ll need to wrap these actions within Arbitrary objects. We can think of Arbitraries as objects implementing the factory design pattern that can generate a wide variety of instances of a type, based on a set of configured rules.

For instance, the Arbitrary returned by employeeNos() can generate employee numbers by choosing a random department from the configured list and concatenating a number between 1 and 200:

static Arbitrary<String> employeeNos() {
  Arbitrary<String> departments = Arbitraries.of("Frontend", "Backend", "HR", "Creative", "DevOps");
  Arbitrary<Long> ids = Arbitraries.longs().between(1, 200);
  return Combinators.combine(departments, ids).as("%s-%s"::formatted);
}

Similarly, getOneEmployeeAction() returns an Arbitrary action based on a given Arbitrary employee number:

static Arbitrary<GetOneEmployeeAction> getOneEmployeeAction() {
  return employeeNos().map(GetOneEmployeeAction::new);
}

After declaring all the other Actions and Arbitraries, we’ll create an ActionSequence:

@Provide
Arbitrary<ActionSequence<ModelVsTested>> mbtJqwikActions() {
  return Arbitraries.sequences(
    Arbitraries.oneOf(
      MbtJqwikActions.getOneEmployeeAction(),
      MbtJqwikActions.getEmployeesByDepartmentAction(),
      MbtJqwikActions.createEmployeeAction(),
      MbtJqwikActions.updateEmployeeNameAction()
  ));
}


static Arbitrary<Action<ModelVsTested>> getOneEmployeeAction() { /* ... */ }
static Arbitrary<Action<ModelVsTested>> getEmployeesByDepartmentAction() { /* ... */ }
// same for the other actions

Now, we can write our test and leverage jqwik to use the provided actions to test various sequences. Let’s create the ModelVsTested tuple and use it to execute the sequence of actions against it:

@Property
void regressionTest(@ForAll("mbtJqwikActions") ActionSequence<ModelVsTested> actions) {
  ModelVsTested testVsModel = new ModelVsTested(
    testClient("app-model"),
    testClient("app-tested")
  );
  actions.run(testVsModel);
}

That’s it — we can finally run the test! The test will generate a sequence of thousands of requests trying to find inconsistencies between the model and the tested service:

INFO com.etr.demo.utils.TestHttpClient -- [app-tested] PUT /api/employees/Frontend-129?name=v
INFO com.etr.demo.utils.TestHttpClient -- [app-model] PUT /api/employees/Frontend-129?name=v
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees/Frontend-129
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees/Frontend-129
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] POST /api/employees { name=sdxToS, empNo=Frontend-91 }
INFO com.etr.demo.utils.TestHttpClient -- [app-model] POST /api/employees { name=sdxToS, empNo=Frontend-91 }
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] PUT /api/employees/Frontend-4?name=PZbmodNLNwX
INFO com.etr.demo.utils.TestHttpClient -- [app-model] PUT /api/employees/Frontend-4?name=PZbmodNLNwX
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees/Frontend-4
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees/Frontend-4
INFO com.etr.demo.utils.TestHttpClient -- [app-tested] GET /api/employees?department=ٺ⯟桸
INFO com.etr.demo.utils.TestHttpClient -- [app-model] GET /api/employees?department=ٺ⯟桸
        ...

Catching errors

If we run the test and check the logs, we’ll quickly spot a failure. It appears that when searching for employees by department with the argument ٺ⯟桸, the tested version produces an internal server error, while the model returns 200 OK:

Original Sample
---------------
actions:
ActionSequence[FAILED]: 8 actions run [
    UpdateEmployeeAction[empNo=Creative-13, newName=uRhplM],
    CreateEmployeeAction[empNo=Backend-184, name=aGAYQ],
    UpdateEmployeeAction[empNo=Backend-3, newName=aWCxzg],
    UpdateEmployeeAction[empNo=Frontend-93, newName=SrJTVwMvpy],
    UpdateEmployeeAction[empNo=Frontend-129, newName=v],
    CreateEmployeeAction[empNo=Frontend-91, name=sdxToS],
    UpdateEmployeeAction[empNo=Frontend-4, newName=PZbmodNLNwX],
    GetEmployeesByDepartmentAction[department=ٺ⯟桸]
]
    final currentModel: ModelVsTested[model=com.etr.demo.utils.TestHttpClient@5dc0ff7d, tested=com.etr.demo.utils.TestHttpClient@64920dc2]
Multiple Failures (1 failure)
    -- failure 1 --
    expected: 200
    but was: 500

Upon investigation, we find that the issue arises from a native SQL query using Postgres-specific syntax to retrieve data. While this was a simple issue in our small application, model-based testing can help uncover unexpected behavior that may only surface after a specific sequence of repetitive steps pushes the system into a particular state.
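
As a hypothetical illustration of this class of bug, a native query like the following runs on Postgres but fails on MySQL, which has no ILIKE operator:

-- Case-insensitive match: ILIKE is Postgres-specific syntax
SELECT * FROM employee WHERE department ILIKE :department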

Wrap up

In this post, we provided hands-on examples of how model-based testing works in practice. From defining models to generating test cases, we’ve seen a powerful approach to improving test coverage and reducing manual effort. Now that you’ve seen the potential of model-based testing to enhance software quality, it’s time to dive deeper and tailor it to your own projects.

Clone the repository to experiment further, customize the models, and integrate this methodology into your testing strategy. Start building more resilient software today!

Thank you to Emanuel Trandafir for contributing this post.

2024 Docker State of Application Development Survey: Share Your Thoughts on Development

Welcome to the third annual Docker State of Application Development survey!

Please help us better understand and serve the application development community with just 20-30 minutes of your time. We want to know where you’re focused, what you’re working on, and what is most important to you. Your thoughts and feedback will help us build the best products and experiences for you.

And, we don’t just keep this information for ourselves — we share it with you!¹ We hope you saw our recent report on the 2023 State of Application Development Survey. The engagement of our community allowed us to better understand where developers are facing challenges, what tools they like, and what they’re excited about. We’ve been using this information to give our community the tools and features they need.

Take the Docker State of Application Development survey now!

By participating in the survey, you can be entered into a raffle for a chance to win² one of the following prizes:

Additionally, the first 200 respondents to complete the survey will receive an exclusive pair of Docker socks!

The survey is open from September 23rd, 2024 (7AM PST) to November 20, 2024 (11:59PM PST).

We’ll choose the winners randomly in accordance with the promotion official rules. Winners will be notified via email by January 10, 2025.

The Docker State of Application Development Survey only takes about 20-30 minutes to complete. We appreciate every contribution and opinion. Your voice counts!


  1. Data will be reported publicly only in aggregate and without personally identifying information.
  2. Docker State of Application Development Promotion Official Rules.

Secure by Design for AI: Building Resilient Systems from the Ground Up

As artificial intelligence (AI) has erupted, Secure by Design for AI has emerged as a critical paradigm. AI is integrating into every aspect of our lives — from healthcare and finance to developer tools, autonomous vehicles, and smart cities — and its integration into critical infrastructure means we must move quickly to understand and combat threats.

Necessity of Secure by Design for AI

AI’s rapid integration into critical infrastructure has accelerated the need to understand and combat potential threats. Security measures must be embedded into AI products from the beginning and evolve as the model evolves. This proactive approach ensures that AI systems are resilient against emerging threats and can adapt to new challenges as they arise. In this article, we will explore two contrasting examples: the developer tooling industry and the healthcare industry.

Complexities of threat modeling in AI

AI brings forth new challenges and conundrums when working on an accurate threat model. Unlike traditional systems, where simple edit and validation checks can be programmed deterministically, AI validation checks need to learn with the system and focus on data manipulation, corruption, and extraction.

  • Data poisoning: Data poisoning is a significant risk in AI, where the integrity of the data used by the system can be compromised. This can happen intentionally or unintentionally and can lead to severe consequences. For example, bias and discrimination in AI systems have already led to issues, such as the wrongful arrest of a man in Detroit due to a false facial recognition match. Such incidents highlight the importance of unbiased models and diverse data sets. Testing for bias and involving a diverse workforce in the development process are critical steps in mitigating these risks.

In healthcare, for example, bias may be simpler to detect. You can examine data fields based on areas such as gender, race, etc. 

In development tools, bias is less clear-cut. Bias could result from the underrepresentation of certain development languages, such as Clojure. Bias may even result from code samples based on regional differences in coding preferences and teachings. In developer tools, you likely won’t have the information available to detect this bias. IP addresses may give you information about where a person is living currently, but not about where they grew up or learned to code. Therefore, detecting bias will be more difficult. 

  • Data manipulation: Attackers can manipulate data sets with malicious intent, altering how AI systems behave. 
  • Privacy violations: Without proper data controls, personal or sensitive information could unintentionally be introduced into the system, potentially leading to privacy violations. Establishing strong data management practices to prevent such scenarios is crucial.
  • Evasion and abuse: Malicious actors may attempt to alter inputs to manipulate how an AI system responds, thereby compromising its integrity. There’s also the potential for AI systems to be abused in ways developers did not anticipate. For example, AI-driven impersonation scams have led to significant financial losses, such as the case where an employee transferred $26 million to scammers impersonating the company’s CFO.

These examples underscore the need for controls at various points in the AI data lifecycle to identify and mitigate “bad data” and ensure the security and reliability of AI systems.

Key areas for implementing Secure by Design in AI

To effectively secure AI systems, implementing controls in three major areas is essential (Figure 1):

Figure 1: Key areas for implementing security controls. Data flows from users through data management, model tuning, and model maintenance.

1. Data management

The key to data management is to understand what data needs to be collected to train the model, to identify the sensitive data fields, and to prevent the collection of unnecessary data. It also involves ensuring you have the correct checks and balances in place to keep unneeded or bad data out of the system.

In healthcare, sensitive data fields are easy to identify. Doctors’ offices often collect national identifiers, such as driver’s licenses, passports, and Social Security numbers. They also collect date of birth, race, and many other sensitive data fields. If the tool is aimed at helping doctors identify potential conditions faster based on symptoms, you would need anonymized data but would still need to collect certain factors such as age and race. You would not need to collect national identifiers.

In developer tools, sensitive data may not be as clearly defined. For example, an environment variable may be used to pass secrets or confidential information, such as an image name, from the developer to the AI tool. There may be secrets in fields you would not suspect. Data management in this scenario involves blocking the collection of fields where sensitive data could exist and/or building mechanisms into the tool to scrub sensitive data so that it never reaches the model.

Data management should include the following:

  • Implementing checks for unexpected data: In healthcare, this process may involve “allow” lists for certain data fields to prevent collecting irrelevant or harmful information (see the sketch after this list). In developer tools, it’s about ensuring the model isn’t trained on malicious code, such as unsanitized inputs that could introduce vulnerabilities.
  • Evaluating the legitimacy of users and their activities: In healthcare tools, this step could mean verifying that users are licensed professionals, while in developer tools, it might involve detecting and mitigating the impact of bot accounts or spam users.
  • Continuous data auditing: This process ensures that unexpected data is not collected and that the data checks are updated as needed. 
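
A minimal Java sketch of the “allow” list idea, assuming collected fields arrive as a flat map (all names here are hypothetical):

import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class FieldAllowList {
    // Only the fields the model actually needs; everything else is dropped
    private static final Set<String> ALLOWED = Set.of("age", "race", "symptoms");

    public static Map<String, String> filter(Map<String, String> collected) {
        return collected.entrySet().stream()
                .filter(e -> ALLOWED.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}

Running a filter like this at collection time means unexpected or sensitive fields never reach the training data.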

2. Alerting and monitoring 

With AI, alerting and monitoring are imperative to ensuring the health of the data model. Controls must be both adaptive and configurable to detect anomalous and malicious activities. As AI systems grow and adapt, so too must the controls. Establish thresholds for data, automate adjustments where possible, and conduct manual reviews where necessary.

In a healthcare AI tool, you might set a threshold before new data is surfaced to ensure its accuracy. For example, if patients begin reporting a new symptom that is believed to be associated with diabetes, you may not report this to doctors until it is reported by a certain percentage (say, 15%) of patients.

In a developer tool, this might involve determining when new code should be incorporated into the model as a prompt for other users. The model would need to be able to log and analyze user queries and feedback, track unhandled or poorly handled requests, and detect new patterns in usage. Data should be analyzed for high frequencies of unhandled prompts, and alerts should be generated to ensure that additional data sets are reviewed and added to the model.

3. Model tuning and maintenance

Producers of AI tools should regularly review and adjust AI models to ensure they remain secure. This includes monitoring for unexpected data, adjusting algorithms as needed, and ensuring that sensitive data is scrubbed or redacted appropriately.

For healthcare, model tuning may be more intensive. Results may be compared to published medical studies to confirm that patient conditions are in line with baselines established elsewhere in the world. Audits should also be conducted so that doctors with reported malpractice claims, or doctors whose medical licenses have been revoked, are scrubbed from the system and potentially compromised data sets do not influence the model.

In a developer tool, model tuning will look very different. You may pursue hyperparameter optimization using techniques such as grid search, random search, and Bayesian optimization. You may also study subsets of data; for example, regularly reviewing the most recent data for new programming languages, frameworks, or coding practices.
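For illustration, a random-search tuning loop along those lines can be written with scikit-learn; grid search (GridSearchCV) and Bayesian optimization (via libraries such as Optuna) slot into the same pattern. The estimator, data, and parameter ranges below are placeholders, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)  # stand-in data

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [None, 5, 10],
        "min_samples_leaf": [1, 2, 4],
    },
    n_iter=10,   # sample 10 of the 27 possible combinations
    cv=5,        # 5-fold cross-validation per candidate
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```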

Model tuning and maintenance should include the following:

  • Perform data audits to verify data integrity and confirm that unnecessary data is not inadvertently being collected. 
  • Review whether “allow” lists and “deny” lists need to be updated.
  • Regularly audit algorithm alerts to determine whether adjustments need to be made; consider the population of your user base and how the model is being trained when adjusting these parameters.
  • Ensure you have controls in place to isolate data sets for removal if a source becomes compromised; consider unique identifiers that let you identify a source without exposing unnecessary sensitive data (see the sketch after this list).
  • Regularly back up data models so you can return to a previous version without heavy loss of data if the source becomes compromised.
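As a sketch of the isolation point above, each training record can carry an opaque source identifier so a compromised source can be excised wholesale. The salted-hash scheme here is an illustrative assumption, not a prescription.

```python
import hashlib

def source_id(source_name: str, salt: bytes = b"rotate-me") -> str:
    """One-way identifier: lets us group and delete records by source
    without storing the sensitive source details alongside the data."""
    return hashlib.sha256(salt + source_name.encode()).hexdigest()[:16]

def drop_compromised(records: list[dict], compromised: set[str]) -> list[dict]:
    """Remove every record whose source has been flagged as compromised."""
    return [r for r in records if r["source_id"] not in compromised]

records = [
    {"source_id": source_id("clinic-a"), "payload": "..."},
    {"source_id": source_id("clinic-b"), "payload": "..."},
]
clean = drop_compromised(records, {source_id("clinic-b")})
print(len(clean))  # 1: clinic-b's records are isolated and removed
```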

AI security begins with design

Security must be a foundational aspect of AI development, not an afterthought. By identifying data fields upfront, conducting thorough AI threat modeling, implementing robust data management controls, and continuously tuning and maintaining models, organizations can build AI systems that are secure by design. 

This approach protects against potential threats and ensures that AI systems remain reliable, trustworthy, and compliant with regulatory requirements as they evolve alongside their user base.


Announcing Upgraded Docker Plans: Simpler, More Value, Better Development and Productivity 

At Docker, our mission is to empower development teams by providing the tools they need to ship secure, high-quality apps — FAST. Over the past few years, we’ve continually added value for our customers, responding to the evolving needs of individual developers and organizations alike. Today, we’re excited to announce significant updates to our Docker subscription plans that will deliver even more value, flexibility, and power to your development workflows.


Docker accelerating the inner loop

We’ve listened closely to our community, and the message is clear: Developers want tools that meet their current needs and evolve with new capabilities to meet their future needs. 

That’s why we’ve revamped our plans to include access to ALL the tools our most successful customers are leveraging — Docker Desktop, Docker Hub, Docker Build Cloud, Docker Scout, and Testcontainers Cloud. Our new unified suite makes it easier for development teams to access everything they need under one subscription, with included consumption for each new product and the ability to add more as needed. Every paid user gets full access, with consumption-based options that let developers scale resources as their needs evolve. Whether customers are individual developers, members of small teams, or part of large enterprises, the refreshed Docker Personal, Docker Pro, Docker Team, and Docker Business plans ensure developers have the right tools at their fingertips.

These changes increase access to Docker Hub across the board, bring more value into Docker Desktop, and grant access to the additional value and new capabilities we’ve delivered to development teams over the past few years. From Docker Scout’s advanced security and software supply chain insights to Docker Build Cloud’s productivity-generating cloud build capabilities, Docker provides developers with the tools to build, deploy, and verify applications faster and more efficiently.

Areas we’ve invested in during the past year include:

  • The world’s largest container registry. To date, Docker has invested more than $100 million in Docker Hub, which currently stores over 60 petabytes of data and handles billions of pulls each month. We have improved content discoverability, added in-depth image analysis and image lifecycle management, and brought an even broader range of verified, high-assurance content to Docker Hub. 
  • Improved insights. From the Builds view and GitHub Actions build inspection to Build Checks and Scout health scores, we’re giving teams more visibility into their usage and insights to improve their development outcomes. We have additional Docker Desktop insights coming later this year.
  • Securing the software supply chain. In October 2023, we launched Docker Scout, allowing developers to continuously address security issues before they hit production through policy evaluation and recommended remediations, and track the SBOM of their software. We later introduced new ways for developers to quickly assess image health and accelerate application security improvements across the software supply chain.
  • Container-based testing automation. In December 2023, we acquired AtomicJar, makers of Testcontainers, adding container-based testing automation to our portfolio. Testcontainers Cloud offers enterprise features and a scalable, cloud-based infrastructure that provides a consistent Testcontainers experience across the org and centralizes monitoring.
  • Powerful cloud-based builders. In January 2024, we launched Docker Build Cloud, combining powerful, native ARM & AMD cloud builders with shared cache that accelerates build times by up to 39x.
  • Security, control, and compliance for businesses. For our Docker Business subscribers, we’ve enhanced security and compliance features, ensuring that large teams can work securely and efficiently. Role-based access control (RBAC), SOC 2 Type 2 compliance, centralized management, and compliance reporting tools are just a few of the features that make Docker Business the best choice for enterprise-grade development environments. And soon, we are rolling out organizational access tokens to make developer access easier at the organizational level, enhancing security and efficiency.
  • Empowering developers to build AI applications. From introducing a new GenAI Stack to our extension for GitHub Copilot and our partnership with NVIDIA to our series of AI tips content, Docker is simplifying AI application development for our community. 

As we introduce new features and continue to provide — and improve on — the world’s largest container registry, the resources to do so also grow. With the rollout of our unified suites, we’re also updating our pricing to reflect the additional value. Here’s what’s changing at a high level: 

  • Docker Business pricing stays the same but gains the additional value and features announced today.
  • Docker Personal remains — and will always remain — free. We will continue to improve this plan as we work to bring a container-first approach to software development to all developers. 
  • Docker Pro will increase from $5/month to $9/month, and Docker Team will increase from $9/user/month to $15/user/month (with annual discounts).
  • We’re introducing image pull and storage limits for Docker Hub. This will impact fewer than 3% of accounts, those with the highest commercial consumption. For many of our Docker Team and Docker Business customers with Service Accounts, the new higher image pull limits will eliminate previously incurred fees.
  • Docker Build Cloud minutes and Docker Scout analyzed repos are now included, providing enough minutes and repos to enhance the productivity of a development team throughout the day.  
  • We’re implementing consumption-based pricing for all integrated products, including Docker Hub, to provide flexibility and scalability beyond the plans.

More value at every level

Our updated plans are packed with more features, higher usage limits, and simplified pricing, offering greater value at every tier. They include: 

  • Docker Desktop: We’re expanding on Docker Desktop as the industry-leading container-first development solution with advanced security features, seamless cloud-native compatibility, and tools that accelerate development while supporting enterprise-grade administration.
  • Docker Hub: Docker subscriptions cover Hub essentials, such as private and public repo usage. To ensure that Docker Hub remains sustainable and continues to grow as the world’s largest container registry, we’re introducing consumption-based pricing for image pulls and storage. This update also includes enhanced usage monitoring tools, making it easier for customers to understand and manage usage.
The Pulls Usage dashboard is now live on Docker Hub, allowing customers to see an organization’s Hub pull data.
  • Docker Build Cloud: We’ve removed the per-seat licenses for Build Cloud and increased the included build minutes for Pro, Team, and Business plans — enabling faster, more efficient builds across projects. Customers will have the option to add build minutes as their needs grow, though we expect many will be surprised at how much time they save with our speedy builders. For customers using CI tools, Build Cloud’s speed can even help save on CI bills. 
  • Docker Scout: Docker Team and Docker Business plans will offer continuous vulnerability analysis for an unlimited number of Scout-enabled repositories. The integration of Docker Scout’s health scores into Docker Pro, Team, and Business plans helps customers maintain security and compliance with ease.
  • Testcontainers Cloud: Testcontainers Cloud helps customers streamline testing workflows, saving time and resources. We’ve removed the per-seat licenses for Testcontainers Cloud under the new plans and included cloud runtime minutes for Docker Pro, Docker Team, and Docker Business, available to use for Docker Desktop or in CI workflows. Customers will have the option to add runtime minutes as their needs grow.

Looking ahead

Docker continues to innovate and invest in our products, and we were most recently recognized as developers’ most-used, most-desired, and most-admired developer tool in the 2024 Stack Overflow Developer Survey.  

These updates are just the beginning of our ongoing commitment to providing developers with the best tools in the industry. As we continue to invest in our tools and technologies, development teams can expect even more enhancements that will empower them to achieve their development goals. 

New plans take effect starting November 15, 2024. The Docker Hub plan limits will take effect on February 1, 2025. No charges for Docker Hub image pulls or storage will be incurred between November 15, 2024, and January 31, 2025. For existing annual and month-to-month customers, the new plan entitlements will take effect on their next renewal date on or after November 15, 2024, giving them ample time to review and understand the new offerings. Learn more about the new Docker subscriptions and see a detailed breakdown of features in each plan. We’re committed to ensuring a smooth transition and are here to support customers every step of the way. 

Stay tuned for more updates or reach out to learn more. And as always, thank you for being a part of the Docker community. 


FAQ  

  1. I’m a Docker Business customer. What is new in my plan? 

Docker Business list pricing remains the same, but you will now have access to more of Docker’s products:  

  • Instead of paying an additional per-seat fee, Docker Build Cloud is now available to all users in your Docker plan. Learn how to use Build Cloud.
  • Docker Build Cloud included minutes are increasing from 800/mo to 1500/mo. 
  • Docker Scout now includes unlimited repos with continuous vulnerability analysis, an increase from 3. Get started with the Docker Scout quickstart.
  • 1500 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.
  • Docker Hub image pull rate limits have been removed.
  • 1M Docker Hub pulls per month are included. 

If you require additional Build Cloud minutes, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing. See the pricing page for more details. 

  2. I’m a Docker Team customer. What is new in my plan? 

Docker Team will now include the following benefits:  

  • Instead of paying an additional per-seat fee, Docker Build Cloud is now available to all users in your Docker plan. Learn how to use Build Cloud.
  • Docker Build Cloud included minutes are increasing from 400/mo to 500/mo.
  • Docker Scout now includes unlimited repos with continuous vulnerability analysis, an increase from 3. Get started with the Docker Scout quickstart.
  • 500 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.  
  • Docker Hub image pull rate limits will be removed.
  • 100K Docker Hub pulls per month are included.
  • The minimum number of users is 1 (lowered from 5).

Docker Team pricing will increase from $9/user/month (annual) to $15/user/month (annual) and from $11/user/month (monthly) to $16/user/month (monthly). If you require additional Build Cloud minutes, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing, or reach out to sales for invoice pricing. See the pricing page for more details. 

  3. I’m a Docker Pro customer. What is new in my plan? 

Docker Pro will now include: 

  • Docker Build Cloud included minutes are increasing from 100/month to 200/month, with no additional monthly fee. Learn how to use Build Cloud.
  • 2 included repos with continuous vulnerability analysis in Docker Scout. Get started with Docker Scout quickstart.  
  • 100 Testcontainers Cloud runtime minutes are now included for use either in Docker Desktop or for CI.
  • Docker Hub image pull rate limits will be removed. 
  • 25K Docker Hub pulls per month are included.

Docker Pro plans will increase from $5/month (annual) to $9/month (annual) and from $7/month (monthly) to $11/month (monthly). If you require additional Build Cloud minutes, Docker Scout repos, Testcontainers Cloud runtime minutes, or Hub pulls or storage, you can add these to your plan with consumption-based pricing. See the pricing page for more details. 

  4. I’m a Docker Personal user. What is included in my plan? 

Docker Personal plans remain free.

When you are logged into your account, you will see additional features and entitlements: 

  • 1 included repo with continuous vulnerability analysis in Docker Scout. Get started with Docker Scout quickstart.
  • Unlimited public Docker Hub repos. 
  • 1 private Docker Hub repo with 2GB storage. 
  • Updated Docker Hub image pull rate limit of 40 pulls/hr/user.

Unauthenticated users will be limited to 10 Docker Hub pulls/hr/IP address.  

Docker Personal users who want to start or continue using Docker Build Cloud may trial the service for seven days, or upgrade to a Docker Pro plan. Docker Personal users may trial Testcontainers Cloud for 30 days. 

  5. Where do I learn more about Docker Hub rate limits and storage changes? 

Check your plan’s details on the new plans overview page. For now, see the new Docker Hub Pulls Usage dashboard to understand your current usage.  

  6. When will new pricing go into effect? 

New pricing will go into effect on November 15, 2024, for all new customers. 

For all existing customers, new pricing will take effect on your next renewal date after November 15, 2024. When you renew, you will receive the benefits and entitlements of the new plans. Between now and your renewal date, your existing plan details will apply. 

  7. Can I keep my existing plan? 

If you are on an annual contract, you will keep your current plan and pricing until your next renewal date that falls after November 15, 2024. 

If you are a month-to-month customer, you may convert to an annual contract before November 14 and choose between keeping your existing plan entitlements or moving to the new comprehensive plans. After November 15, all month-to-month renewals will be on the new plans. 

  8. I have a regulatory constraint. Is it possible to disable individual services? 

While most organizations will see reduced build times and improved supply chain security, some organizations may have constraints that prevent them from using all of Docker’s services. 

After November 15, Docker Desktop, Docker Hub, Docker Build Cloud, and Docker Scout will be enabled by default for all users, while Testcontainers Cloud will be disabled by default. To change your organization’s configuration, the org owner or one of your org admins can disable Docker Scout or Build Cloud in the admin console. 

  9. Can I get a refund on individual products I pay for today (Build Cloud, Scout repos, Testcontainers Cloud)? 

Your current plan will remain in effect until your first renewal date on or after November 15, 2024, for annual customers. At that time, your plan will automatically reflect your new entitlements for Docker Build Cloud and Docker Scout. If you are a current Testcontainers Cloud customer in addition to being a Docker Pro, Docker Team, or Docker Business customer, let your account manager know your org ID so that your included minutes can be applied starting November 15.  

  10. How do I get more help? 

If you have additional questions not addressed in this FAQ, contact your Docker Account Executive or Customer Success Manager (CSM).  

If you need help identifying those contacts or need technical assistance, contact support.
