Optimizing AI Application Development with Docker Desktop and NVIDIA AI Workbench

August 26, 2024 at 16:00

Are you looking to streamline how to incorporate LLMs into your applications? Would you prefer to do this using the products and services you’re already familiar with? This is where Docker Desktop, especially when paired with the advanced capabilities offered by Docker’s Business subscription tier, comes into play — particularly when combined with NVIDIA’s cutting-edge technology.

Imagine a development environment where setting up and managing AI workloads is as intuitive as the everyday tools you’re already using. With our deepening partnership with NVIDIA, we are committed to making this a reality. This collaboration not only enhances your ability to leverage Docker containers but also significantly improves your overall experience of building and developing AI applications.

What’s more, this partnership is designed to support your long-term growth and innovation goals. Docker Desktop with Docker Business, combined with NVIDIA software, provides the perfect launchpad for developers who want to accelerate their AI development journey — whether it’s building prototypes or deploying enterprise-grade AI applications. This isn’t just about providing tools; it’s about investing in your abilities, your career, and the innovation capabilities of your organization.

With Docker Business, you gain access to advanced capabilities that enhance security, streamline management, and offer unparalleled support. Meanwhile, NVIDIA AI Workbench provides a robust, containerized environment tailored for AI and machine learning projects. Together, these solutions empower you to push the boundaries of what’s possible, bringing AI into your applications more effortlessly and effectively.

What is NVIDIA AI Workbench?

NVIDIA AI Workbench is a free developer toolkit powered by containers that enables data scientists and developers to create, collaborate on, and migrate AI workloads and development environments across GPU systems. It targets scenarios like model fine-tuning, data science workflows, retrieval-augmented generation, and more. Users can install it on multiple systems but drive everything from a client application that runs locally on Windows, Ubuntu, and macOS. NVIDIA AI Workbench enables collaboration and distribution through Git-based platforms, like GitHub and GitLab.

How does Docker Desktop relate to NVIDIA AI Workbench?

NVIDIA AI Workbench requires a container runtime. Docker’s container runtime (Docker Engine), delivered through Docker Desktop, is the recommended AI Workbench runtime for developers using AI Workbench on Windows and macOS. Previously, AI Workbench users had to install Docker Desktop manually. With this newest release of AI Workbench, developers who select Docker as their container runtime will have Docker Desktop installed on their machine automatically, with no manual steps required.

You can learn about this integration in NVIDIA’s technical blog.

Moving beyond the AI application prototype

Docker Desktop is more than just a tool for application development; it’s a launchpad that provides an integrated, easy-to-use environment for developing a wide range of applications, including AI. What makes Docker Desktop particularly powerful is its ability to seamlessly create and manage containerized environments, ensuring that developers can focus on innovation without worrying about the underlying infrastructure.

For developers who have already invested in Docker, this means that the skills, automation, infrastructure, and tooling they’ve built up over the years for other workloads are directly applicable to AI workloads as well. This cross-compatibility offers a huge return on investment, as it allows teams to extend their existing Docker-based workflows to include AI applications and services without needing to overhaul their processes or learn new tools.

Docker Desktop’s compatibility with Windows, macOS, and Linux makes it an ideal choice for diverse development teams. Its robust features support a wide range of development workflows, from initial prototyping to large-scale deployment, ensuring that as AI applications move from concept to production, developers can leverage their existing Docker infrastructure and expertise to accelerate and scale their work.

For those looking to create high-quality, enterprise-grade AI applications, Docker Desktop with Docker Business offers advanced capabilities. These include enhanced security, management, and support features that are crucial for enterprise and advanced development environments. With Docker Business, development teams can build securely, collaborate efficiently, and maintain compliance, all while continuing to utilize their existing Docker ecosystem. By leveraging Docker Business, developers can confidently accelerate their workflows and deliver innovative AI solutions with the same reliability and efficiency they’ve come to expect from Docker.

Accelerating developer innovation with NVIDIA GPUs

In the rapidly evolving landscape of AI development, the ability to leverage GPU capabilities is crucial for handling the intensive computations required for tasks like model training and inference. Docker is working to offer flexible solutions to cater to different developers, whether you have your own GPUs or need to leverage cloud-based compute. 

Running containers with NVIDIA GPUs through Docker Desktop 

GPUs are at the heart of AI development, and Docker Desktop is optimized to leverage NVIDIA GPUs effectively. With Docker Desktop 4.29 or later, developers can configure CDI support in the daemon and easily make all NVIDIA GPUs available in a running container by using the --device option via support for CDI devices.
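
To use CDI devices, you first turn the feature on in the Docker daemon configuration (in Docker Desktop, under Settings > Docker Engine). A minimal sketch of the relevant daemon.json fragment; treat the exact flag placement as an assumption to verify against the current GPU documentation:

{
  "features": {
    "cdi": true
  }
}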

For instance, the following command can be used to make all NVIDIA GPUs available in a container:

docker run --device nvidia.com/gpu=all <image> <command>
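
As a quick smoke test that the GPUs are actually visible inside a container (assuming NVIDIA drivers are installed on the host; the CDI device injection mounts nvidia-smi into the container):

# List the GPUs visible inside the container
docker run --rm --device nvidia.com/gpu=all ubuntu nvidia-smi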

For more information on how Docker Desktop supports NVIDIA GPUs, refer to our GPU documentation.

No GPUs? No problem with Testcontainers Cloud

Not all developers have local access to powerful GPU hardware. To bridge this gap, we’re exploring GPU support in Testcontainers Cloud, which would allow developers to access GPU resources in a cloud environment and run tests to validate AI models without needing physical GPUs. With Testcontainers Cloud, you’ll be able to harness the power of GPUs from anywhere, democratizing high-performance AI development.

Trusted AI/ML content on Docker Hub

Docker Desktop provides a reliable and efficient platform for developers to discover and experiment with new ideas and approaches in AI development. Through its trusted content program, Docker works with open source and commercial communities to select and curate high-quality images, distributing them on Docker Hub under Docker Official Images, Docker-Sponsored Open Source, and Docker Verified Publishers. With a wealth of AI/ML content, Docker makes it easy for users to discover and pull images for quick experimentation. This includes various images, such as NVIDIA software offerings and many more, allowing developers to get started quickly and efficiently.

Accelerated builds with Docker Build Cloud

Docker Build Cloud is a fully managed service designed to streamline and accelerate the building, testing, and deployment of any application. By leveraging Docker Build Cloud, AI application developers can shift builds from local machines to remote BuildKit instances — resulting in up to 39x faster builds. By offloading the complex build process to Docker Build Cloud, AI development teams can focus on refining their models and algorithms while Docker handles the rest.
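
As a sketch of the workflow (the organization and builder names below are placeholders), you create a cloud builder once and then point builds at it:

# One-time setup: register a cloud builder for your organization
docker buildx create --driver cloud myorg/mybuilder

# Build against the remote BuildKit instance and its shared cache
docker build --builder cloud-myorg-mybuilder -t myorg/my-ai-app .

Subsequent builds reuse the shared remote cache, which is where much of the speedup comes from.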

Docker Business users can experience faster, more efficient builds and reproducible AI deployments with Docker Build Cloud minutes included as part of their subscription.

Ensuring quality with Testcontainers

As AI applications evolve from prototypes to production-ready solutions, ensuring their reliability and performance becomes critical. This is where testing frameworks like Testcontainers come into play. Testcontainers allows developers to test their applications using real containerized dependencies, making it easier to validate application logic that utilizes AI models in self-contained, idempotent, reproducible ways.

For instance, developers working with LLMs can create Testcontainers-based tests that exercise their application using any model available on Hugging Face, via the recently released Ollama container.
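
Testcontainers itself is a library (available for Java, Go, and other languages), so such tests live in your test suite. As a shell sketch of what a test automates under the hood (the model name is only an example), it starts the Ollama container, pulls a model, and exercises it through Ollama’s REST API:

# Start the Ollama container and expose its API
docker run -d --name ollama -p 11434:11434 ollama/ollama

# Pull a small model, then ask it a question
curl http://localhost:11434/api/pull -d '{"name": "tinyllama"}'
curl http://localhost:11434/api/generate -d '{"model": "tinyllama", "prompt": "Say hello", "stream": false}'

A Testcontainers-based test performs this same lifecycle programmatically and tears everything down when the test finishes.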

Wrap up

The collaboration between Docker and NVIDIA marks a significant step forward in the AI development landscape. By integrating Docker Desktop into NVIDIA AI Workbench, we are making it easier than ever for developers to build, ship, and run AI applications. Docker Desktop provides a robust, streamlined environment that supports a wide range of development workflows, from initial prototyping to large-scale deployment. 

With advanced capabilities from Docker Business, AI developers can focus on innovation and efficiency. As we deepen our partnership with NVIDIA, we look forward to bringing even more enhancements to the AI development community, empowering developers to push the boundaries of what’s possible in AI and machine learning. 

Stay tuned for more exciting updates as we work to revolutionize AI application development.

Learn more

“@docker can you help me…”: An Early Look at the Docker Extension for GitHub Copilot

May 21, 2024 at 15:35

At this point, every developer has probably heard about GitHub Copilot. Copilot has quickly become an indispensable tool for many developers, helping novice to seasoned developers become more productive by improving overall efficiency and expediting learning. 

Today, we are thrilled to announce that we are joining GitHub’s Partner Program and have shipped an experience as part of their limited public beta.

At Docker, we want to make it easy for anyone to reap the benefits of containers without all the overhead of getting started. We aim to meet developers wherever they are, whether in their favorite editor, their terminal, Docker Desktop, or now, even on GitHub.

What is the Docker Copilot extension?

In short, the Docker extension for GitHub Copilot (@docker) is an integration that extends GitHub Copilot’s technology to assist developers in working with Docker. 

What can I use @docker for? 

This initial scope for the Docker extension aims to take any developer end-to-end, from learning about containerization to validating and using generated Docker assets for inner loop workflows (Figure 1). Here’s a quick overview of what’s possible today:

  • Initiate a conversation with the Docker extension: In GitHub Copilot Chat, enter the extension’s context by using “@docker” at the beginning of your session.
  • Learn about containerization: Ask the Docker extension for GitHub Copilot to give you an overview of containerization with a question like,  “@docker, What does containerizing an application mean?”
  • Generate the correct Docker assets for your project: Get help containerizing your application and watch it generate the Dockerfiles, docker-compose.yml, and .dockerignore files tailored to your project’s languages and file structure: “@docker How would I use Docker to containerize this project?” 
  • Open a pull request with the assets to save you time: With your consent, the Docker extension can even ask if you want to open a PR with these generated Docker assets on GitHub, allowing you to review and merge them at your convenience.
  • Find project vulnerabilities with Docker Scout: The Docker extension also integrates with Docker Scout to surface a high-level summary of detected vulnerabilities and provide the next steps to continue using Scout in your terminal via CLI: “@docker can you help me find vulnerabilities in my project?”

From there, you can quickly jump into an editor, like Codespaces, VS Code, or JetBrains IDEs, and start building your app using containers. The Docker Copilot extension currently supports Node, Python, and Java-based projects (single-language or multi-root/multi-language projects).

Figure 1: Docker extension for GitHub Copilot in action.

How do I get access to @docker?

The Docker extension for GitHub Copilot is currently in a limited public beta and is accessible by invitation only. The Docker extension was developed through the GitHub Copilot Partner Program, which invites industry leaders to integrate their tools and services into GitHub Copilot to enrich the ecosystem and provide developers with even more powerful, context-aware tools to accelerate their projects. 

Developers invited to the limited public beta can install the Docker extension on the GitHub Marketplace as an application in their organization and invoke @docker from any context where GitHub Copilot is available (for example, on GitHub or in your favorite editor).

What’s coming to @docker?

During the limited public beta, we’ll be working on adding capabilities to help you get the most out of your Docker subscription. Look for deeper integrations that help you debug your running containers with Docker Debug, fix detected CVEs with Docker Scout, speed up your build with Docker Build Cloud, learn about Docker through our documentation, and more coming soon!

Help shape the future of @docker

We’re excited to continue expanding on @docker during the limited public beta. We would love to hear if you’re using the Docker extension in your organization or are interested in using it once it becomes publicly available. 

If you have a feature request or any issues, we invite you to file an issue on the Docker extension for GitHub Copilot tracker. Your feedback will help us shape the future of Docker tooling.

Thank you for your interest and support. We’re excited to see what you build with GitHub and @docker!

Learn more

Wasm vs. Docker: Performant, Secure, and Versatile Containers

May 9, 2024 at 18:39

Docker and WebAssembly (Wasm) represent two pivotal technologies that have reshaped the software development landscape. You’ve probably started to hear more about Wasm in the past few years as it has gained in popularity, and perhaps you’ve also heard about the benefits of using it in your application stack. This may have led you to think about the differences between Wasm and Docker, especially because the technologies work together so closely.

In this article, we’ll explore how these two technologies can work together to enable you to deliver consistent, efficient, and secure environments for deploying applications. By marrying these two tools, developers can easily reap the performance benefits of WebAssembly with containerized software development.

What’s Wasm?

Wasm is a compact binary instruction format governed by the World Wide Web Consortium (W3C). It’s a portable compilation target for more than 40 programming languages, like C/C++, C#, JavaScript, Go, and Rust. In other words, Wasm is a bytecode format encoded to run on a stack-based virtual machine.

Similar to the way Java can be compiled to Java bytecode and executed on the Java Virtual Machine (JVM), which can then be compiled to run on various architectures, a program can be compiled to Wasm bytecode and then executed by a Wasm runtime, which can be packaged to run on different architectures, such as Arm and x86.
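
As a concrete sketch of that flow using Rust and the Wasmtime runtime (assuming the Rust toolchain and Wasmtime are installed):

# Add the WASI compilation target and build a sample project to Wasm
rustup target add wasm32-wasi
cargo new hello && cd hello
cargo build --target wasm32-wasi --release

# Execute the resulting bytecode with a Wasm runtime
wasmtime run target/wasm32-wasi/release/hello.wasm

The same hello.wasm file runs unmodified on any machine with a Wasm runtime, regardless of CPU architecture.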

What’s a Wasm runtime?

Wasm runtimes bridge the gap between portable bytecode and the underlying hardware architecture. They also provide APIs to communicate with the host environment and provide interoperability between other languages, such as JavaScript.

At a high level, a Wasm runtime runs your bytecode in three semantic phases:

  1. Decoding: Processing the module to convert it to an internal representation
  2. Validation: Checking to see that the decoded module is valid
  3. Execution: Installing and invoking a valid module

Wasm runtime examples include Spin, Wasmtime, WasmEdge, and Wasmer. Major browsers also ship Wasm runtimes: Firefox uses SpiderMonkey, and Chrome uses V8.

Why use Wasm?

To understand why you might want to use WebAssembly in your application stack, let’s examine its main benefits — notably, security without sacrificing performance and versatility.

Security without sacrificing performance

Wasm enables code to run at near-native speed within a secure, sandboxed environment, protecting systems from malicious software. This performance is achieved through just-in-time (JIT) compilation of WebAssembly bytecode directly into machine code, bypassing the need for transpiling into an intermediate format. 

Wasm also uses shared linear memory — a contiguous block of memory that simplifies data exchange between modules or between WebAssembly and JavaScript. This design allows for efficient communication and enables developers to blend the flexibility of JavaScript with the robust performance of WebAssembly in a single application.

The security of this system is further enhanced by the design of the host runtime environment, which acts as a sandbox. It restricts the Wasm module from accessing anything outside of the designated memory space and from performing potentially dangerous operations like file system access, network requests, and system calls. WebAssembly’s requirement for explicit imports and exports to access host functionality adds another layer of control, ensuring a secure execution environment.

Use case versatility

Finally, WebAssembly is relevant for more than traditional web platforms (contrary to its name). It’s also an excellent tool for server-side applications, edge computing, game development, and cloud/serverless computing. If performance, security, or target device resources are a concern, consider using this compact binary format.

During the past few years, WebAssembly has become more prevalent on the server side because of the WebAssembly System Interface (or WASI). WASI is a modular API for Wasm that provides access to operating system features like files, filesystems, and clocks. 

Docker vs. Wasm: How are they related?

After reading about WebAssembly code, you might be wondering how Docker is relevant. Doesn’t WebAssembly handle sandboxing and portability? How does Docker fit in the picture? Let’s discuss further.

Docker helps developers build, run, and share applications — including those that use Wasm. This is especially true because Wasm is a complementary technology to Linux containers. However, handling these containers without solid developer experience can quickly become a roadblock to application development.

That’s where Docker comes in with a smooth developer experience for building with Wasm and/or Linux containers.

Benefits of using Docker and Wasm together

Using Docker and Wasm together affords great developer experience benefits as well, including:

  • Consistent development environments: Developers can use Docker to containerize their Wasm runtime environments. This approach allows for a consistent Wasm development and execution environment that works the same way across any machine, from local development to production.
  • Efficient deployment: By packaging Wasm applications within Docker, developers can leverage efficient image management and distribution capabilities. This makes deploying and scaling these types of applications easier across various environments.
  • Security and isolation: Although Docker isolates applications at the operating system level, Wasm provides a sandboxed execution environment. When used together, the technologies offer a robust layered security model against many common vulnerabilities.
  • Enhanced performance: Developers can use Docker containers to deploy Wasm applications in serverless architectures or as microservices. This lets you take advantage of Wasm’s performance benefits in a scalable and manageable way.

How to enable Wasm on Docker Desktop

If you’re interested in running WebAssembly containers, you’re in luck! Support for Wasm workloads is now in beta, and you can enable it on Docker Desktop by checking Enable Wasm on the Features in development tab under Settings (Figure 2).

Note: Make sure you have containerd image store support enabled first.

Figure 2: Enable Wasm in Docker Desktop.

After enabling Wasm in Docker Desktop, you’re ready to go. Docker currently supports many Wasm runtimes, including Spin, WasmEdge, and Wasmtime. You can also find detailed documentation that explains how to run these applications.
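
As a minimal sketch (assuming the WasmEdge shim and a published sample image), running a Wasm container looks like running any other container, plus a runtime and platform flag:

# --runtime selects the Wasm shim; --platform pulls the Wasm variant of the image
docker run -dp 8080:8080 --name=wasm-example --runtime=io.containerd.wasmedge.v1 --platform=wasi/wasm secondstate/rust-example-hello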

How Docker supports WebAssembly

To explain how Docker supports WebAssembly, we’ll need to quickly review how the Docker Engine works.

The Docker Engine builds on a higher-level container runtime called containerd. This runtime provides fundamental functionality to control the container lifecycle. Using a shim process, containerd can leverage runc (a low-level runtime) under the hood. Then, runc can interact directly with the operating system to manage various aspects of containers.

What’s neat about this design is that anyone can write a shim to integrate other runtimes with containerd, including WebAssembly runtimes. As a result, you can plug-and-play with various Wasm runtimes in Docker, like WasmEdge, Spin, and Wasmtime.

The future of WebAssembly and Docker

WebAssembly is continuously evolving, so you’ll need to keep a close pulse on ecosystem developments. One recent advancement relates to how the new WebAssembly component model will impact shims for the various container runtimes. At Docker, we’re working to make it simple for developers to create Wasm containers and enhance the developer experience.

In a famous 2019 tweet thread, Docker founder Solomon Hykes described the future of cloud computing: a world where Docker runs Windows, Linux, and WebAssembly containers side by side. Given all the recent developments in the ecosystem, that future is well and truly here.

Recent advancements include:

  • The launch of WASI Preview 2: This release fully rebased WASI on the component model’s type system and semantics, making it modular, fully virtualizable, and accessible to various source languages.
  • The release of the SpinKube open source project by Fermyon, Microsoft, SUSE, Liquid Reply, and others: SpinKube provides a straightforward path for deploying Wasm-based serverless functions into Kubernetes clusters. Developers can use SpinKube with Docker via k3s (a lightweight wrapper to run Rancher Labs’ minimal Kubernetes distribution). Docker Desktop also includes the shim, which enables you to run Kubernetes containers on your local machine.

In 2024, we expect the combination of Wasm and containers to be highly regarded for its efficiency, scalability, and cost-effectiveness.

Wrapping things up

In this article, we explained how Docker and Wasm work together and how to use Docker for Wasm workloads. We’re excited to see Wasm’s adoption grow in the coming years and will continue to enhance our support to meet developers both where they’re at and where they’re headed. 

Check out the following related materials for details on Wasm and how it works with Docker:

Learn more

Thanks to Sohan Maheshwar, Developer Advocate Lead at Fermyon, for collaborating on this post.

containerd vs. Docker: Understanding Their Relationship and How They Work Together

March 27, 2024 at 13:41

During the past decade, containers have revolutionized software development by introducing higher levels of consistency and scalability. Now, developers can work without the challenges of managing dependencies, keeping environments consistent, and coordinating collaborative workflows.

When developers explore containerization, they might learn about container internals, architecture, and how everything fits together. And, eventually, they may find themselves wondering about the differences between containerd and Docker and how they relate to one another.

In this blog post, we’ll explain what containerd is, how Docker and containerd work together, and how their combined strengths can improve developer experience.

containerd 2400x1260 1

What’s a container?

Before diving into what containerd is, I should briefly review what containers are. Simply put, containers are processes with added isolation and resource management. Containers get their own virtualized view of the operating system while sharing the host system’s kernel and accessing its resources.

Containers also use operating system kernel features. They use namespaces to provide isolation and cgroups to limit and monitor resources like CPU, memory, and network bandwidth. As you can imagine, container internals are complex, and not everyone has the time or energy to become an expert in the low-level bits. This is where container runtimes, like containerd, can help.
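
You can see this resource management at the Docker CLI; in the sketch below, the flags are translated into cgroup limits on the container’s process, while namespaces give it an isolated view of the system:

# --memory and --cpus become cgroup memory and CPU limits
docker run --rm --memory=256m --cpus=0.5 alpine echo "constrained hello"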

What’s containerd?

In short, containerd is a runtime built to run containers. This open source tool builds on top of operating system kernel features and improves container management with an abstraction layer, which manages namespaces, cgroups, union file systems, networking capabilities, and more. This way, developers don’t have to handle the complexities directly. 

In March 2017, Docker pulled its core container runtime into a standalone project called containerd and donated it to the Cloud Native Computing Foundation (CNCF).  By February 2019, containerd had reached the Graduated maturity level within the CNCF, representing its significant development, adoption, and community support. Today, developers recognize containerd as an industry-standard container runtime known for its scalability, performance, and stability.

Containerd is a high-level container runtime with many use cases. It’s perfect for handling container workloads across small-scale deployments, but it’s also well-suited for large, enterprise-level environments (including Kubernetes). 

A key component of containerd’s robustness is its default use of Open Container Initiative (OCI)-compliant runtimes. By using runtimes such as runc (a lower-level container runtime), containerd ensures standardization and interoperability in containerized environments. It also efficiently deals with core operations in the container life cycle, including creating, starting, and stopping containers.
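
You can drive these lifecycle operations directly with containerd’s bundled ctr client, with no Docker involved; a quick sketch (image references must be fully qualified):

# Pull an image and run a container straight through containerd
ctr images pull docker.io/library/alpine:latest
ctr run --rm docker.io/library/alpine:latest demo echo "hello from containerd"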

How is containerd related to Docker?

But how is containerd related to Docker? To answer this, let’s take a high-level look at Docker’s architecture (Figure 1). 

Containerd facilitates operations on containers by directly interfacing with your operating system. The Docker Engine sits on top of containerd and provides additional functionality and developer experience enhancements.

Figure 1: The Docker Engine builds on top of containerd, which interfaces directly with the operating system.

How Docker interacts with containerd

To better understand this interaction, let’s talk about what happens when you run the docker run command:

  • After you press Enter, the Docker CLI will send the run command and any command-line arguments to the Docker daemon (dockerd) via a REST API call.
  • dockerd will parse and validate the request, and then it will check that things like container images are available locally. If they’re not, it will pull the image from the specified registry.
  • Once the image is ready, dockerd will shift control to containerd to create the container from the image.
  • Next, containerd will set up the container environment. This process includes tasks such as setting up the container file system, networking interfaces, and other isolation features.
  • containerd will then delegate running the container to runc using a shim process. This will create and start the container.
  • Finally, once the container is running, containerd will monitor the container status and manage the lifecycle accordingly.
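
To make those steps concrete, here is an annotated sketch of a typical invocation (the image and names are arbitrary examples):

# CLI -> dockerd (REST API); dockerd pulls nginx:alpine if it is missing locally,
# then hands off to containerd, which delegates to runc through a shim
docker run -d --name web -p 8080:80 nginx:alpine

# containerd now monitors the container; check its status through dockerd
docker inspect --format '{{.State.Status}}' web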

Docker and containerd: Better together 

Docker has played a key role in the creation and adoption of containerd, from its inception to its donation to the CNCF and beyond. This involvement helped standardize container runtimes and bolster the open source community’s involvement in containerd’s development. Docker continues to support the evolution of the open source container ecosystem by continuously maintaining and evolving containerd.

Containerd specializes in the core functionality of running containers. It’s a great choice for developers needing access to lower-level container internals and other advanced features. Docker builds on containerd to create a cohesive developer experience and comprehensive toolchain for building, running, testing, verifying, and sharing containers.

Build + Run

In development environments, tools like Docker Desktop, the Docker CLI, and Docker Compose allow developers to easily define, build, and run single or multi-container environments and to integrate seamlessly with their favorite editors or IDEs, or even their CI/CD pipelines.
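
For instance, a minimal inner-loop session might look like the following sketch, assuming a project that docker init knows how to scaffold:

# Scaffold a Dockerfile, compose.yaml, and .dockerignore for the project
docker init

# Build and run the resulting multi-container environment
docker compose up --build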

Test

One of the largest developer experience pain points involves testing and environment consistency. With Testcontainers, developers don’t have to worry about reproducibility across environments (for example, dev, staging, testing, and production). Testcontainers also allows developers to use containers for isolated dependency management, parallel testing, and simplified CI/CD integration.

Verify

By analyzing your container images and creating a software bill of materials (SBOM), Docker Scout works with Docker Desktop, Docker Hub, or Docker CLI to help organizations shift left. It also empowers developers to find and fix software vulnerabilities in container images, ensuring a secure software supply chain.
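
From the CLI, a typical flow looks like this sketch (the image name is a placeholder):

# High-level summary of detected vulnerabilities
docker scout quickview myorg/my-app:latest

# Detailed CVE listing for the same image
docker scout cves myorg/my-app:latest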

Share

Docker Registry serves as a store for developers to push container images to a shared repository securely. This functionality streamlines image sharing, making it easier to maintain consistency and efficiency in development and deployment workflows.
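
The sharing workflow itself boils down to two commands; a sketch with a placeholder registry and repository:

# Tag the local image for the shared repository, then push it
docker tag my-app:1.0.0 registry.example.com/team/my-app:1.0.0
docker push registry.example.com/team/my-app:1.0.0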

With Docker building on top of containerd, the software development lifecycle benefits at every stage, from the inner loop and testing to secure deployment in production.

Wrapping up

In this article, we discussed the relationship between Docker and containerd. We showed how containers, as isolated processes, leverage operating system features to provide efficient and scalable development and deployment solutions. We also described what containerd is and explained how Docker leverages containerd in its stack. 

Docker builds upon containerd to enhance the developer experience, offering a comprehensive suite of tools for the entire development lifecycle across building, running, verifying, sharing, and testing containers. 

Start your next projects with containerd and other container components by checking out Docker’s open source projects and most popular open source tools.

Learn more
