A Beginner’s Guide to Building Outdoor Light Shows Synchronized to Music with Open Source Tools

By Mike Coleman
December 2, 2024 at 14:37

Outdoor light displays are a fun holiday tradition — from simple light strings hung from the eaves to elaborate scenes that bring out your competitive spirit. If using open source tools, thousands of feet of electrical cables, custom controllers, and your favorite music to build complex projects appeals to you, then the holiday season offers the perfect opportunity to indulge your creative passion. 

I personally run home light shows at Halloween and Christmas that feature up to 30,000 individually addressable LED lights synchronized with dozens of different songs. It’s been an interesting learning journey over the past five years, but it is also one that almost anyone can pursue, regardless of technical ability. Read on for tips on how to make a display that’s the highlight of your neighborhood. 

Getting started with outdoor light shows

As you might expect, light shows are built using a combination of hardware and software. The hardware includes the lights, props, controllers, and cabling. On the software side, there are different tools for the programming, also called sequencing, of the lights as well as the playback of the show. 

coleman holiday lights f1
Figure 1: Light show hardware includes the lights, props, controllers, and cabling.

Hardware requirements

Lights

Let’s look more closely at the hardware behind the scenes starting with the lights. Multiple types of lights can be used in displays, but I’ll keep it simple and focus on the most popular choice. Most shows are built around 12mm RGB LED lights that support the WS2811 protocol, often referred to as pixels or nodes. Generally, these are not available at retail stores. That means you’ll need to order them online, and I recommend choosing a vendor that specializes in light displays. I have purchased lights from a few different vendors, but recently I’ve been using Wally’s Lights, Visionary Light Shows, and Your Pixel Store.  

Props

The lights are mounted into different props — such as a spider for Halloween or a snowflake for the winter holidays. You can either purchase these props, which are usually made out of the same corrugated plastic material used in yard signs, or you can make them yourself. Very few vendors sell pre-built props, so be ready to push the pixels in by hand — yes, either I or someone in my family pushed each of the 30,000 lights in my display into place when we initially built the props. I get most of my props from EFL Designs, Gilbert Engineering, or Boscoyo Studio.

coleman holiday lights f2
Figure 2: The lights are mounted into different props, which you can purchase or make yourself.

Controllers

Once your props are ready to go, you’ll need something to drive them. This is where controllers come in (Figure 3). Like the props and lights, you can get your controllers from various specialized vendors and, to a large extent, you can mix and match different brands in the same show because they all speak the same protocols to control the pixels (usually E1.31 or DDP). 

You can purchase controllers that are ready to run, or you can buy the individual components and build your own boxes — I grew up building PCs, so I love this degree of flexibility. However, I do tend to buy pre-configured controllers, because I like having a warranty from the manufacturer. My controllers all come from HolidayCoro, but Falcon controllers are also popular.

coleman holiday lights f3
Figure 3: Once your props are ready to go, you’ll need a controller.

The number of controllers you need depends on the number of lights in your show. Most controllers have multiple outputs, and each output can drive a certain number of lights. I typically plan for about 400 lights per output; at that rate, a 30,000-light display needs roughly 75 outputs. My show uses three main controllers and four receiver boxes. Note that long-range receivers are a way of extending the distance you can place lights from the main controller, but that is a more advanced topic I won't cover in this introductory article.

Cables

Although controllers are powered by standard household outlets, the connection from the controllers to the lights happens over specialized cabling. These extension cables contain three wires. Two are used to send power to the lights (either 5V or 12V), and a third is used to send data. Basically, this third wire sends instructions like “light 1,232 turn green for 0.5 seconds then fade to off over 0.25 seconds.” You can get these extension cables from any vendor that sells pixels.

Additionally, all of the controllers need to be on the same Ethernet network. Many folks run their shows on wireless networks, but I prefer a wired setup for increased performance and reliability. 
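
If you're curious what those controller protocols look like on the wire, here is a minimal Go sketch that pushes one frame of pixel data to a controller over DDP, one of the protocols mentioned earlier. The controller address, output ID, and pixel count are illustrative assumptions — in a real show, xLights or FPP handles this for you:

package main

import (
	"encoding/binary"
	"log"
	"net"
)

func main() {
	// Assumed controller address; DDP controllers listen on UDP port 4048.
	conn, err := net.Dial("udp", "192.168.1.50:4048")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// One frame: turn the first 10 pixels green (3 bytes per pixel: R, G, B).
	pixels := make([]byte, 30)
	for i := 0; i < len(pixels); i += 3 {
		pixels[i+1] = 0xFF // green channel
	}

	// 10-byte DDP header: flags (version 1 + push), sequence, data type,
	// destination ID, 32-bit data offset, 16-bit data length (big-endian).
	header := make([]byte, 10)
	header[0] = 0x41 // version 1, push this frame immediately
	header[3] = 1    // default output device
	binary.BigEndian.PutUint32(header[4:8], 0)
	binary.BigEndian.PutUint16(header[8:10], uint16(len(pixels)))

	if _, err := conn.Write(append(header, pixels...)); err != nil {
		log.Fatal(err)
	}
}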

Software and music

At this point, you have a bunch of props with lights connected to networked controllers via specialized cabling. But, how do you make them dance? That’s where the software comes in.

xLights

Many hobbyists use xLights to program their lights. This software is open source and available for Mac, Windows, and Linux, and it works with three basic primitives: props, effects, and time. You can choose what effect you want to apply to a given prop at a given time (Figure 4). The timing of the effect is almost always aligned with the song you’ve chosen. For example, you might flash snowflakes off and on in synchronization with the drum beat of a song. 

Screenshot of light sequencing software, showing an array of options for turning specific lights on and off.
Figure 4: Programming lights.

Music

If this step sounds overwhelming to you, you’re not alone. In fact, I don’t sequence my own songs. I purchase them from different vendors, who create sequences for generic setups with a wide variety of props. I then import them and map them to the different elements that I actually use in my show. In terms of time, professionals can spend many hours animating a single minute of a song. I generally spend about two hours mapping an existing sequence to my show’s layout. My favorite sequence vendors include BF Light Shows, xTreme Sequences, and Magical Light Shows.

Falcon Player

Once you have a sequence built, you use another piece of software to send that sequence to your show controllers. Some controllers have this software built in, but most people I know use another open source application, Falcon Player (FPP), to perform this task. Not only can FPP run on a Raspberry Pi, but it is also shipped as a Docker image. FPP includes the ability to play back your sequence as well as to build playlists and set up a show schedule for automated playback.
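
As a sketch of the container route (the image name and tag below are assumptions — check the FPP project documentation for the published image and its recommended ports and volumes):

docker run -d --name fpp -p 80:80 ghcr.io/falconchristmas/fpp:latest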

Put it all together and flip the switch

When everything is put together, you should have a system similar to Figure 5:

Illustration of system setup, showing connection of elements, from xLights to FPP to Controllers to Lights.
Figure 5: System overview.

This example shows a light display in action. 

xLights community support

Although building your own light show may seem like a daunting task, fear not; you are not alone. I have yet to mention the most important part of this whole process: the community. The xLights community is one of the most helpful I’ve ever been part of. You can get questions answered via the official Facebook group as well as through other groups dedicated to specific sequence and controller vendors. Additionally, a Zoom support meeting runs 24×7 and is staffed by hobbyists from across the globe. So, what are you waiting for? Go ahead and start planning your first holiday light show!

Learn more

John Legend's "What Christmas Means to Me" synchronized to over 32,000 lights. Original Sequence by @xTremeSequences

Why Testcontainers Cloud Is a Game-Changer Compared to Docker-in-Docker for Testing Scenarios

November 14, 2024 at 15:39

Navigating the complex world of containerized testing environments can be challenging, especially when dealing with Docker-in-Docker (DinD). As a senior DevOps engineer and Docker Captain, I’ve seen firsthand the hurdles that teams face with DinD, and here I’ll share why Testcontainers Cloud is a transformative alternative that’s reshaping the way we handle container-based testing.

Understanding Docker-in-Docker

Docker-in-Docker allows you to run Docker within a Docker container. It’s like Inception for containers — a Docker daemon running inside a Docker container, capable of building and running other containers.

How Docker-in-Docker works

  • Nested Docker daemons: In a typical Docker setup, the Docker daemon runs on the host machine, managing containers directly on the host’s operating system. With DinD, you start a Docker daemon inside a container. This inner Docker daemon operates independently, enabling the container to build and manage its own set of containers.
  • Privileged mode and access to host resources: To run Docker inside a Docker container, the container needs elevated privileges. This is achieved by running the container in privileged mode using the --privileged flag:
docker run --privileged -d docker:dind
  • The --privileged flag grants the container almost all the capabilities of the host machine, including access to device files and the ability to perform system administration tasks. Although this setup enables the inner Docker daemon to function, it poses significant security risks, as it can potentially allow the container to affect the host system adversely.
  • Filesystem considerations: The inner Docker daemon stores images and containers within the file system of the DinD container, typically under /var/lib/docker. Because Docker uses advanced file system features like copy-on-write layers, running an inner Docker daemon within a containerized file system (which may itself use such features) can lead to complex interactions and potential conflicts.
  • Cgroups and namespace isolation: Docker relies on Linux kernel features like cgroups and namespaces for resource isolation and management. When running Docker inside a container, these features must be correctly configured to allow nesting. This process can introduce additional complexity in ensuring that resource limits and isolation behave as expected.
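
One widely used mitigation for the file-system considerations above is to give the inner daemon a dedicated volume for /var/lib/docker, so its copy-on-write layers live on a plain volume rather than inside the outer container’s file system. A minimal sketch:

docker run --privileged -d \
  -v dind-storage:/var/lib/docker \
  docker:dind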

Why teams use Docker-in-Docker

  • Isolated build environments: DinD allows each continuous integration (CI) job to run in a clean, isolated Docker environment, ensuring that builds and tests are not affected by residual state from previous jobs or other jobs running concurrently.
  • Consistency across environments: By encapsulating the Docker daemon within a container, teams can replicate the same Docker environment across different stages of the development pipeline, from local development to CI/CD systems.

Challenges with DinD

Although DinD provides certain benefits, it also introduces significant challenges, such as:

  • Security risks: Running containers in privileged mode can expose the host system to security vulnerabilities, as the container gains extensive access to host resources.
  • Stability issues: Nested containers can lead to storage driver conflicts and other instability issues, causing unpredictable build failures.
  • Complex debugging: Troubleshooting issues in a nested Docker environment can be complicated, as it involves multiple layers of abstraction and isolation.

Real-world challenges

Although Docker-in-Docker might sound appealing, it often introduces more problems than it solves. Before diving into those challenges, let’s briefly discuss Testcontainers and its role in modern testing practices.

What is Testcontainers?

Testcontainers is a popular open source library designed to support integration testing by providing lightweight, disposable instances of common databases, web browsers, or any service that can run in a Docker container. It allows developers to write tests that interact with real instances of external resources, rather than relying on mocks or stubs.
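
As a minimal sketch of the developer experience (shown here with Testcontainers for Go, one of its language libraries; the Redis image, port, and names are illustrative):

package example

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestWithRedis(t *testing.T) {
	ctx := context.Background()

	// Start a disposable Redis container and wait until it accepts connections.
	redisC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "redis:7",
			ExposedPorts: []string{"6379/tcp"},
			WaitingFor:   wait.ForListeningPort("6379/tcp"),
		},
		Started: true,
	})
	if err != nil {
		t.Fatal(err)
	}
	// Tear the container down when the test finishes.
	defer redisC.Terminate(ctx)

	endpoint, err := redisC.Endpoint(ctx, "")
	if err != nil {
		t.Fatal(err)
	}
	t.Logf("redis available at %s", endpoint) // point your client here
}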

Key features of Testcontainers

  • Realistic testing environment: By using actual services in containers, tests are more reliable and closer to real-world scenarios.
  • Isolation: Each test session, or even each individual test, can run in a clean environment, reducing flakiness due to shared state.
  • Easy cleanup: Containers are ephemeral and are automatically cleaned up after tests, preventing resource leaks.

Dependency on the Docker daemon

A core component of Testcontainers’ functionality lies in its interaction with the Docker daemon. Testcontainers orchestrates Docker resources by starting and stopping containers as needed for tests. This tight integration means that access to a Docker environment is essential wherever the tests are run.

The DinD challenge with Testcontainers in CI

When teams try to include Testcontainers-based integration testing in their CI/CD pipelines, they often face the challenge of providing Docker access within the CI environment. Because Testcontainers requires communication with the Docker daemon, many teams resort to using Docker-in-Docker to emulate a Docker environment inside the CI job.
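
As a sketch of what that workaround typically looks like (GitLab CI syntax here; the image tags and variables are illustrative, and the runner must allow privileged containers):

integration-tests:
  image: docker:24
  services:
    - docker:24-dind            # nested Docker daemon, runs privileged
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""      # disable TLS for the inner daemon (simpler, less secure)
  script:
    - ./mvnw test               # Testcontainers talks to the DinD daemon via DOCKER_HOST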

However, this approach introduces significant challenges, especially when trying to scale Testcontainers usage across the organization.

Case study: The CI pipeline nightmare

We had a Jenkins CI pipeline that utilized Testcontainers for integration tests. To provide the necessary Docker environment, we implemented DinD. Initially, it seemed to work fine, but soon we encountered:

  • Unstable builds: Random failures due to storage driver conflicts and issues with nested container layers. The nested Docker environment sometimes clashed with the host, causing unpredictable behavior.
  • Security concerns: Running containers in privileged mode raised red flags during security audits. Because DinD requires privileged mode to function correctly, it posed significant security risks, potentially allowing containers to access the host system.
  • Performance bottlenecks: Builds were slow, and resource consumption was high. The overhead of running Docker within Docker led to longer feedback loops, hindering developer productivity.
  • Complex debugging: Troubleshooting nested containers became time-consuming. Logs and errors were difficult to trace through the multiple layers of containers, making issue resolution challenging.

We spent countless hours trying to patch these issues, but it felt like playing a game of whack-a-mole.

Why Testcontainers Cloud is a better choice

Testcontainers Cloud is a cloud-based service designed to simplify and enhance your container-based testing. By offloading container execution to the cloud, it provides a secure, scalable, and efficient environment for your integration tests.

How Testcontainers Cloud addresses DinD’s shortcomings

Enhanced security

  • No more privileged mode: Eliminates the need for running containers in privileged mode, reducing the attack surface.
  • Isolation: Tests run in isolated cloud environments, minimizing risks to the host system.
  • Compliance-friendly: Easier to pass security audits without exposing the Docker socket or granting elevated permissions.

Improved performance

  • Scalability: Leverage cloud resources to run tests faster and handle higher loads.
  • Resource efficiency: Offloading execution frees up local and CI/CD resources.

Simplified configuration

  • Plug-and-play integration: Minimal changes are required to switch from local Docker to Testcontainers Cloud.
  • No nested complexity: Avoid the intricacies and pitfalls of nested Docker daemons.

Better observability and debugging

  • Detailed logs: Access comprehensive logs through the Testcontainers Cloud dashboard.
  • Real-time monitoring: Monitor containers and resources in real time with enhanced visibility.

Getting started with Testcontainers Cloud

Let’s dive into how you can get the most out of Testcontainers Cloud.

Switching to Testcontainers Cloud allows you to run tests without needing a local Docker daemon:

  • No local Docker required: Testcontainers Cloud handles container execution in the cloud.
  • Consistent environment: Ensures that your tests run in the same environment across different machines.

Additionally, you can easily integrate Testcontainers Cloud into your CI pipeline to run the same tests without scaling your CI infrastructure.

Using Testcontainers Cloud with GitHub Actions

Here’s how you can set up Testcontainers Cloud in your GitHub Actions workflow.

1. Create a new service account

  • Log in to the Testcontainers Cloud dashboard.
  • Navigate to Service Accounts:
    • Create a new service account dedicated to your CI environment.
  • Generate an access token:
    • Copy the access token. Remember, you can only view it once, so store it securely.

2. Set the TC_CLOUD_TOKEN environment variable

  • In GitHub Actions:
    • Go to your repository’s Settings > Secrets and variables > Actions.
    • Add a new Repository Secret named TC_CLOUD_TOKEN and paste the access token.

3. Add Testcontainers Cloud to your workflow

Update your GitHub Actions workflow (.github/workflows/ci.yml) to include the Testcontainers Cloud setup.

Example workflow:

name: CI Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # ... other preparation steps (dependencies, compilation, etc.) ...

      - name: Set up Java
        uses: actions/setup-java@v3
        with:
          distribution: 'temurin'
          java-version: '17'

      - name: Setup Testcontainers Cloud Client
        uses: atomicjar/testcontainers-cloud-setup-action@v1
        with:
          token: ${{ secrets.TC_CLOUD_TOKEN }}

      # ... steps to execute your tests ...
      - name: Run Tests
        run: ./mvnw test

Notes:

  • The atomicjar/testcontainers-cloud-setup-action GitHub Action automates the installation and authentication of the Testcontainers Cloud Agent in your CI environment.
  • Ensure that your TC_CLOUD_TOKEN is kept secure using GitHub’s encrypted secrets.

Clarifying the components: Testcontainers Cloud Agent and Testcontainers Cloud

To make everything clear:

  • Testcontainers Cloud Agent (CLI in CI environments): In CI environments like GitHub Actions, you use the Testcontainers Cloud Agent (installed via the GitHub Action or command line) to connect your CI jobs to Testcontainers Cloud.
  • Testcontainers Cloud: The cloud service that runs your containers, offloading execution from your CI environment.

In CI environments:

  • Use the Testcontainers Cloud Agent (CLI) within your CI jobs.
  • Authenticate using the TC_CLOUD_TOKEN.
  • Tests executed in the CI environment will use Testcontainers Cloud.

Monitoring and debugging

Take advantage of the Testcontainers Cloud dashboard:

  • Session logs: View logs for individual test sessions.
  • Container details: Inspect container statuses and resource usage.
  • Debugging: Access container logs and output for troubleshooting.

Why developers prefer Testcontainers Cloud over DinD

Real-world impact

After integrating Testcontainers Cloud, our team observed the following:

  • Faster build times: Tests ran significantly faster due to optimized resource utilization.
  • Reduced maintenance: Less time spent on debugging and fixing CI pipeline issues.
  • Enhanced security: Eliminated the need for privileged mode, satisfying security audits.
  • Better observability: Improved logging and monitoring capabilities.

Addressing common concerns

Security and compliance

  • Data isolation: Each test runs in an isolated environment.
  • Encrypted communication: Secure data transmission.
  • Compliance: Meets industry-standard security practices.

Cost considerations

  • Efficiency gains: Time saved on maintenance offsets the cost.
  • Resource optimization: Reduces the need for expensive CI infrastructure.

Compatibility

  • Multi-language support: Works with Java, Node.js, Python, Go, .NET, and more.
  • Seamless integration: Minimal changes required to existing test code.

Conclusion

Switching to Testcontainers Cloud, with the help of the Testcontainers Cloud Agent, has been a game-changer for our team and many others in the industry. It addresses the key pain points associated with Docker-in-Docker and offers a secure, efficient, and developer-friendly alternative.

Key takeaways

  • Security: Eliminates the need for privileged containers and Docker socket exposure.
  • Performance: Accelerates test execution with scalable cloud resources.
  • Simplicity: Simplifies configuration and reduces maintenance overhead.
  • Observability: Enhances debugging with detailed logs and monitoring tools.

As someone who has navigated these challenges, I recommend trying Testcontainers Cloud. It’s time to move beyond the complexities of DinD and adopt a solution designed for modern development workflows.

Docker at Cloud Expo Asia: GenAI, Security, and New Innovations

By Yiwen Xu
October 22, 2024 at 15:23

Cloud Expo Asia 2024 in Singapore drew thousands of cloud professionals and tech business leaders to explore and exchange the latest in cloud computing, security, GenAI, sustainability, DevOps, and more. At our Cloud Expo Asia booth, Docker showcased our latest innovations in AI integration, containerization, security best practices, and updated product offerings. Here are a few highlights from our experience at the event.

AI/ML and GenAI everywhere

AI/ML and GenAI were hot topics at Cloud Expo Asia. Docker CPO Giri Sreenivas’s talk on Transforming App Development: Docker’s Advanced Containerization and AI Integration highlighted that GenAI impacts software in two big ways — it accelerates product development and creates new types of products and experiences. He discussed how containers are an ideal tool for GenAI workflows, ensuring consistency across CI/CD pipelines in development and reproducibility across diverse platforms in production.

cloud expo asia 2024 f1
Docker Chief Product Officer Giri Sreenivas’s talk drew an overflow crowd.

Sreenivas highlighted the Docker extension for GitHub Copilot as an example of how Docker helps empower development teams to focus on innovation — closing the gap from the first line of code to production. Sreenivas also gave a sneak peek into upcoming products designed to streamline GenAI development to illustrate Docker’s commitment to evolving solutions to meet emerging needs. 

Adopting security best practices and shifting left

Developer efficiency and security were also popular themes at the event. When Sreenivas mentioned in his talk that security vulnerabilities that cost dollars to fix early in development would cost hundreds of dollars later in production, members of the audience nodded in agreement.

Docker CTO Justin Cormack gave a keynote address titled “The Docker Effect: Driving Developer Efficiency and Innovation in a Hybrid World.” He discussed how implementing best practices and investing in the inner loop are crucial for today’s development teams. 

One best practice, for example, is shifting left and identifying problems as quickly as possible in the software development lifecycle. This approach improves efficiency and reduces costs by detecting and addressing software issues earlier before they become expensive problems.

cloud expo asia 2024 f2
At Docker CTO Justin Cormack’s talk, attendees were eager to snap pictures of every slide.

Cormack also provided a few tips for meeting the security and control needs of modern enterprises with a layered approach. Start with key building blocks, he explained, such as trusted content, which provides dev teams with a good foundation to build securely from the start. 

A pyramid titled “Modern Enterprises Need a Layered Approach to Security and Control.” From the base up: Start with a secure foundation, Build on a secure platform, and Deliver a secure end product.
Docker CTO Justin Cormack’s recommendations on meeting the security and control needs of modern enterprises.

At the Docker event booth, we demonstrated Docker Scout, which helps development teams identify, analyze, and remediate security vulnerabilities early in the dev process. Docker Business customers can take advantage of enterprise controls, letting admins, IT teams, and security teams continuously monitor and manage risk and compliance with confidence. 

cloud expo asia 2024 f4
After four hours of demos at the Docker booth, senior software engineer Chase Frankenfeld was still enthusiastically discussing Docker products, while our CEO Scott Johnston listened attentively to an attendee’s questions.

New Docker innovations and updated plan

From students to C-level executives who visited our booth, everyone was eager to learn more about containers and Docker. People lined up to see an end-to-end demo of how the suite of Docker products, such as Docker Desktop, Docker Hub, Docker Build Cloud, and Docker Scout, work together seamlessly to enable development teams to work more efficiently. 

Attendees also had the opportunity to learn more about Docker’s updated plans, which make accessing the full suite of Docker products and solutions easy, with options for individual developers, small teams, and large enterprises.

cloud expo asia 2024 f5
Senior software engineer Maxime Clement explains Docker’s updated plans and demos Docker products to booth visitors.

Thanks, Cloud Expo Asia!

We enjoyed our conversations with event attendees and appreciate everyone who helped make this such a successful event. Thank you to the organizers, speakers, sponsors, and the community for a productive, information-packed experience.

cloud expo asia 2024 f6
What’s better than Docker swag? Docker swag in a claw machine.

From accelerating app development and supporting shift-left best practices to meeting the security and control needs of modern enterprises and innovating with GenAI, Docker wants to be your trusted partner in navigating the challenges of modern app development.

Explore our Docker updated plans to learn how Docker can empower your teams, or contact our sales team to discover how we can help you innovate with confidence.

Leveraging Testcontainers for Complex Integration Testing in Mattermost Plugins

October 8, 2024 at 13:22

This post was contributed by Jesús Espino, Principal Engineer at Mattermost.

In the ever-evolving software development landscape, ensuring robust and reliable plugin integration is no small feat. For Mattermost, relying solely on mocks for plugin testing became a limitation, leading to brittle tests and overlooked integration issues. Enter Testcontainers, an open source tool that provides isolated Docker environments, making complex integration testing not only feasible but efficient. 

In this blog post, we dive into how Mattermost has embraced Testcontainers to overhaul its testing strategy, achieving greater automation, improved accuracy, and seamless plugin integration with minimal overhead.

The previous approach

In the past, Mattermost relied heavily on mocks to test plugins. While this approach had its merits, it also had significant drawbacks. The tests were brittle, meaning they would often break when changes were made to the codebase. This made the tests challenging to develop and maintain, as developers had to constantly update the mocks to reflect the changes in the code.

Furthermore, the use of mocks meant that the integration aspect of testing was largely overlooked. The tests did not account for how the different components of the system interacted with each other, which could lead to unforeseen issues in the production environment. 

The previous approach additionally did not allow for proper integration testing in an automated way. The lack of automation made the testing process time-consuming and prone to human error. These challenges necessitated a shift in Mattermost’s testing strategy, leading to the adoption of Testcontainers for complex integration testing.

Mattermost’s approach to integration testing

Testcontainers for Go

Mattermost uses Testcontainers for Go to create an isolated testing environment for our plugins. This testing environment includes the Mattermost server, the PostgreSQL server, and, in certain cases, an API mock server. The plugin is then installed on the Mattermost server, and through regular API calls or end-to-end testing frameworks like Playwright, we perform the required testing.

We have created a specialized Testcontainers module for the Mattermost server. This module uses PostgreSQL as a dependency, ensuring that the testing environment closely mirrors the production environment. Our module lets developers easily install and configure any plugin they want in the Mattermost server.

To improve the system’s isolation, the Mattermost module includes a container for the server and a container for the PostgreSQL database, which are connected through an internal Docker network.

Additionally, the Mattermost module exposes utility functionality that allows direct access to the database, to the Mattermost API through the Go client, and some utility functions that enable admins to create users, channels, teams, and change the configuration, among other things. This functionality is invaluable for performing complex operations during testing, including API calls, users/teams/channel creation, configuration changes, or even SQL query execution. 

This approach provides a powerful set of tools with which to set up our tests and prepare everything for verifying the behavior that we expect. Combined with the disposable nature of the test container instances, this makes the system easy to understand while remaining isolated.

This comprehensive approach to testing ensures that all aspects of the Mattermost server and its plugins are thoroughly tested, thereby increasing their reliability and functionality. Now let’s look at a code example of the usage.

We can start setting up our Mattermost environment with a plugin like this:

pluginConfig := map[string]any{}
options := []mmcontainer.MattermostCustomizeRequestOption{
  mmcontainer.WithPlugin("sample.tar.gz", "sample", pluginConfig),
}
mattermost, err := mmcontainer.RunContainer(context.Background(), options...)
if err != nil {
  // handle the startup error
}
defer mattermost.Terminate(context.Background())

Once your Mattermost instance is initialized, you can create a test like this:

func TestSample(t *testing.T) {
    client, err := mattermost.GetClient()
    require.NoError(t, err)
    reqURL := client.URL + "/plugins/sample/sample-endpoint"
    resp, err := client.DoAPIRequest(context.Background(), http.MethodGet, reqURL, "", "")
    require.NoError(t, err, "cannot fetch url %s", reqURL)
    defer resp.Body.Close()
    bodyBytes, err := io.ReadAll(resp.Body)
    require.NoError(t, err)
    require.Equal(t, 200, resp.StatusCode)
    assert.Contains(t, string(bodyBytes), "sample-response")
}

Here, you decide when to tear down your Mattermost instance and recreate it. Once per test? Once per set of tests? It is up to you and depends strictly on your needs and the nature of your tests.
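
For example, to share a single instance across an entire Go test suite, the setup can live in TestMain. A sketch, assuming the module’s container type is named MattermostContainer and lives at the import path shown (both are assumptions — check the module’s documentation):

package plugin_test

import (
	"context"
	"log"
	"os"
	"testing"

	// Import path is an assumption; use the actual module path.
	mmcontainer "github.com/mattermost/testcontainers-mattermost-go"
)

// Shared across all tests in the package; the type name is an assumption.
var mattermost *mmcontainer.MattermostContainer

func TestMain(m *testing.M) {
	ctx := context.Background()

	// Start a single Mattermost instance for the whole suite.
	var err error
	mattermost, err = mmcontainer.RunContainer(ctx)
	if err != nil {
		log.Fatal(err)
	}

	code := m.Run()

	// Tear down once every test has finished.
	if err := mattermost.Terminate(ctx); err != nil {
		log.Println(err)
	}
	os.Exit(code)
}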

Testcontainers for Node.js

In addition to using Testcontainers for Go, Mattermost leverages Testcontainers for Node.js to set up our testing environment. In case you’re unfamiliar, Testcontainers for Node.js is a Node.js library that provides similar functionality to Testcontainers for Go. Using Testcontainers for Node.js, we can set up our environment in the same way we did with Testcontainers for Go. This allows us to write Playwright tests using JavaScript and run them in the isolated Mattermost environment created by Testcontainers, enabling us to perform integration testing that interacts directly with the plugin user interface. The code is available on GitHub.  

This approach provides the same advantages as Testcontainers for Go, and it allows us to use a more interface-based testing tool — like Playwright in this case. Let me show a bit of code with the Node.js and Playwright implementation:

We start and stop the containers for each test:

test.beforeAll(async () => { mattermost = await RunContainer(); });
test.afterAll(async () => { await mattermost.stop(); });

Then we can use our Mattermost instance like any other server running to run our Playwright tests:

test.describe('sample slash command', () => {
  test('try to run a sample slash command', async ({ page }) => {
    const url = mattermost.url()
    await login(page, url, "regularuser", "regularuser")
    await expect(page.getByLabel('town square public channel')).toBeVisible();
    await page.getByTestId('post_textbox').fill("/sample run")
    await page.getByTestId('SendMessageButton').click();
    await expect(page.getByText('Sample command result', { exact: true })).toBeVisible();
    await logout(page)
  });  
});

With these two approaches, we can create integration tests covering both the API and the interface without mocks or any other synthetic environment. Because we consciously decide whether to reuse Testcontainers instances, we can reach a high degree of isolation and thereby avoid the flakiness induced by contaminated environments when doing end-to-end testing.

Examples of usage

Currently, we are using this approach for two plugins.

1. Mattermost AI Copilot

This integration helps users in their daily tasks using AI large language models (LLMs), providing things like thread and meeting summarization and context-based interrogation.

This plugin has a rich interface, so we used the Testcontainers for Node.js and Playwright approach to ensure we could properly test the system through the interface. This plugin also needs to call the AI LLM through an API. To avoid that resource-heavy dependency, we use an API mock — another container that simulates any API.

This approach gives us confidence not only in the server-side code but in the interface as well, because we can ensure that we aren’t breaking anything during development.

2. Mattermost MS Teams plugin

This integration is designed to connect MS Teams and Mattermost in a seamless way, synchronizing messages between both platforms.

For this plugin, we mainly need to make API calls, so we used Testcontainers for Go and hit the API directly using a client written in Go. In this case, again, our plugin depends on a third-party service: the Microsoft Graph API. For that, we also use an API mock, enabling us to test the whole plugin without depending on the third-party service.

We still have some integration tests with the real Teams API using the same Testcontainers infrastructure to ensure that we are properly handling the Microsoft Graph calls.

Benefits of using Testcontainers libraries

Using Testcontainers for integration testing offers benefits, such as:

  • Isolation: Each test runs in its own Docker container, which means that tests are completely isolated from each other. This approach prevents tests from interfering with one another and ensures that each test starts with a clean slate.
  • Repeatability: Because the testing environment is set up automatically, the tests are highly repeatable. This means that developers can run the tests multiple times and get the same results, which increases the reliability of the tests.
  • Ease of use: Testcontainers is easy to use, as it handles all the complexities of setting up and tearing down Docker containers. This allows developers to focus on writing tests rather than managing the testing environment.

Testing made easy with Testcontainers

Mattermost’s use of Testcontainers libraries for complex integration testing in their plugins is a testament to the power and versatility of Testcontainers.

By creating a well-isolated and repeatable testing environment, we ensure that our plugins are thoroughly tested and highly reliable.

Why We Need More Gender Diversity in the Cybersecurity Space

September 6, 2024 at 14:21

What does it mean to be diverse? At the root of diversity is the ability to bring people together with different perspectives, experiences, and ideas. It’s about enriching the work environment to lead to more innovative solutions, better decision-making, and a more inclusive environment.

For me, it’s about ensuring that my daughter one day knows that it really is okay for her to be whatever she wants to be in life. That she isn’t bound by a gender stereotype or what is deemed appropriate based on her sex.  

This is why building a more diverse workforce in technology is so critical. I want the children of the world, my children, to be able to see themselves in the people they admire, in the fields they are interested in, and to know that the world is accepting of the path that they choose.

Monday, August 26th, was Women’s Equality Day, and while I recognize that women have come a long way, there is still work to be done. Diversity is not just a buzzword — it’s a necessity. When diverse perspectives converge, they create a rich ground for innovation. 

Women in cybersecurity

Despite progress in many areas, women are still underrepresented in cybersecurity. Let’s look at key statistics. According to data published in the ISC2 Cybersecurity Workforce Study published in 2023:

  • Women make up 26% of the cybersecurity workforce globally. 
  • The average global salary of women who participated in the ISC2 survey was US$109,609 compared to $115,003 for men. For US women, the average salary was $141,066 compared to $148,035 for men. 

Making progress

We should recognize where we have had wins in cybersecurity diversity, too.

The 2024 Cybersecurity Skills Gap global research report highlights significant progress in improving diversity within the cybersecurity industry. According to the report, 83% of companies have set diversity hiring goals for the next few years, with a particular focus on increasing the representation of women and minority groups. Additionally, structured programs targeting women have remained a priority, with 73% of IT decision-makers implementing initiatives specifically aimed at recruiting more women into cybersecurity roles. These efforts suggest a growing commitment to enhancing diversity and inclusion within the field, which is essential for addressing the global cybersecurity skills shortage.

Women hold approximately 25% of the cybersecurity jobs globally, and that number is growing. This representation has seen a steady increase from about 10% in 2013 to 20% in 2019, and it’s projected to reach 30% by 2025, reflecting ongoing efforts to enhance gender diversity in this field. 

Big tech companies are playing a pivotal role in increasing the number of women in cybersecurity by launching large-scale initiatives aimed at closing the gender gap. Microsoft, for instance, has committed to placing 250,000 people into cybersecurity roles by 2025, with a specific focus on underrepresented groups, including women. Similarly, Google and IBM are investing billions into cybersecurity training programs that target women and other underrepresented groups, aiming to equip them with the necessary skills to succeed in the industry.

This progress is crucial as diverse teams are often better equipped to tackle complex cybersecurity challenges, bringing a broader range of perspectives and innovative solutions to the table. As organizations continue to emphasize diversity in hiring, the cybersecurity industry is likely to see improvements not only in workforce composition but also in the overall effectiveness of cybersecurity strategies.

Good for business

This imbalance is not just a social issue — it’s a business one. There are not enough cybersecurity professionals entering the workforce, resulting in a shortage. As of ISC2’s 2022 report, there is a worldwide gap of 3.4 million cybersecurity professionals. In fact, most organizations feel at risk because they do not have enough cybersecurity staff.

Cybersecurity roles are also among the fastest growing roles in the United States. The Cybersecurity and Infrastructure Security Agency (CISA) introduced the Diverse Cybersecurity Workforce Act of 2024 to promote the cybersecurity field to underrepresented and disadvantaged communities. 

Here are a few ideas for how we can help accelerate gender diversity in cybersecurity:

  1. Mentorship and sponsorship: Experienced professionals should actively mentor and sponsor women in these fields, helping them navigate the challenges and seize opportunities.

    Unfortunately, this year the cybersecurity industry has seen major losses in organizations that support women. Women Who Code (WWC) and Girls in Tech shut their doors due to funding shortfalls, but other organizations remain active.

Companies may also consider internal mentorship programs or working with partners to allow cross-company mentorship opportunities.

Women within the cybersecurity field should also consider guest lecture positions or even teaching. Young girls who do not get to see women in the field are statistically less likely to choose that as a profession.

  2. Inclusive work environments: Companies must create cultures where diversity is celebrated, not just tolerated or a means to an end. This means fostering environments where women feel empowered to share their ideas and take risks. This could include:
  • Provide training to employees at all levels. At Docker, every employee receives an annual training budget. Additionally, our Employee Resource Groups (ERGs) are provided with budgets to facilitate educational initiatives that support under-represented groups. Teams can also add further training as part of the annual budgeting process.
  • Ensure there is an established career ladder for cybersecurity roles within the organization. Work with team members to understand their wishes for career advancement and create internal development plans to support those achievements. Make sure results are measurable. 
  • Provide transparency around promotions and pay, reducing the gender gaps in these areas. 
  • Ensure recruiters and managers are trained on diversity and identifying diverse candidate pools. At Docker, we invest in sourcing diverse candidates and ensuring our interview panels have a diverse team so candidates can learn about different perspectives regarding life at Docker.
  • Ensure diverse recruitment panels. This is important for recruiting new diverse talent and allows people to understand the culture from multiple perspectives.
  3. Policy changes: Companies should implement policies that support work-life balance, such as flexible working hours and parental leave, making it easier for women to thrive in these demanding fields. Companies could consider the following programs:
  • Generous paid parental leave.
  • Ramp-back programs for parents returning from parental leave.
  • Flexible working hours, remote working options, condensed workdays, etc. 
  • Manager training to ensure managers are being inclusive and can navigate diverse direct report needs.
  4. Employee Resource Groups (ERGs): Establishing allyship groups and/or employee resource groups (ERGs) helps ensure that employees feel supported and have a mechanism to report needs to the organization. For example, a Caregivers ERG can help advocate for women who need flexibility in their schedules to allow for caregiving responsibilities.

Better together

As we reflect on the progress made in gender diversity, especially in the cybersecurity industry, it’s clear that while we’ve come a long way, there is still much more to achieve. The underrepresentation of women in cybersecurity is not just a diversity issue — it’s a business imperative. Diverse teams bring unique perspectives that drive innovation, foster creativity, and enhance problem-solving capabilities. The ongoing efforts by companies, coupled with supportive policies and inclusive cultures, are critical steps toward closing the gender gap.

The cybersecurity landscape is evolving, and so must our approach to diversity. It’s encouraging to see big tech companies and organizations making strides in this direction, but the journey is far from over. As we commemorate Women’s Equality Day, let’s commit to not just acknowledging the need for diversity but actively working toward it. The future of cybersecurity — and the future of technology — depends on our ability to embrace and empower diverse voices.

Let’s make this a reality, not just for the sake of our daughters but for our entire industry.

Join Docker CEO Scott Johnston at SwampUP 2024 in Austin

By Jason Dunne
September 6, 2024 at 13:00

We are excited to announce Docker’s participation in JFrog’s flagship event, SwampUP 2024, which will take place September 9 – 11, in Austin, Texas. In his SwampUP keynote talk, Docker CEO Scott Johnston will discuss how the Docker and JFrog collaboration boosts secure software and AI application development.

Keynote highlights

Johnston will discuss Docker’s approach to managing secure software supply chains by providing developer teams with trusted content and limiting exposure to malicious content in the early development stages. He will explore how Docker Desktop, Docker Hub, and Docker Scout play critical roles in ensuring that the building blocks developers rely on are deployed securely. By bringing security to the root of the software development lifecycle, highlighting vulnerabilities, and bringing trusted container images to the inner loop, Docker empowers development teams to safeguard their process and deliver higher-quality, more secure applications, faster.

Attendees will get insights into how Docker innovations, including Docker Business capabilities and Docker Hub benefits, are transforming software development. Johnston will walk through the practical benefits of integrating Docker’s products within JFrog’s ecosystem, showcasing real-world examples of how companies use these combined tools to streamline their development pipelines and accelerate delivering applications, many of which are powered by ML and AI. This combination enables a more comprehensive approach to managing software supply chains, ensuring that security is embedded throughout the development lifecycle.

Better together 

Docker and JFrog’s partnership is more than just a collaboration: It’s a commitment to providing developers with the tools and resources they need to build secure, efficient, and scalable applications. This connection between Docker’s expertise in container-first software development and JFrog’s comprehensive DevOps platform empowers development teams to manage their software supply chains with precision. By bringing together Docker’s trusted content and JFrog’s robust artifact management, developers can ensure their applications are built on a foundation of security and reliability.

Our mutual customers with Docker Business subscriptions can leverage features like Registry Access Management and Image Access Management to ensure developers only access verified registries and image repositories, such as specific instances of JFrog Artifactory or JFrog Container Registry.

Looking ahead, Docker and JFrog are committed to continuing their joint efforts in advancing secure software supply chain practices. Upcoming initiatives include expanding the availability of trusted content, enabling deeper integrations between Docker Scout and JFrog’s products, and introducing new features that will further enhance developer productivity and security. These developments will help organizations navigate the complexities of modern software development with greater confidence and control.

See you in Austin

As we prepare for SwampUP, we invite you to explore the integrations between Docker and JFrog that are already transforming development workflows. Whether you’re looking to manage your on-premise images with JFrog Artifactory or leverage Docker’s advanced security analytics and automated image management capabilities, this partnership offers resources to help developers successfully deploy cloud-native and hybrid applications with containerization best practices at their core.

Catch Scott Johnston’s keynote at SwampUP and learn more about how our partnership with JFrog can elevate your development processes. We’re excited to work together to build a more secure, efficient, and innovative software development ecosystem. See you in Austin!

Thank You to the Stack Overflow Community for Ranking Docker the Most Used, Desired, and Admired Developer Tool 

August 7, 2024 at 13:30

As you might have seen, Stack Overflow recently unveiled the 2024 Developer Survey results. This always serves as a time for me to reflect on what Docker has accomplished in the year between surveys. Since our inclusion in the survey five years ago, the Stack Overflow community has consistently ranked Docker highly. We were humbled to see Docker recognized as the most-used and most-desired developer tool for the second consecutive year. In addition, this year the community elevated Docker to most-admired (78%). Moreover, Docker is the most-used tool (in the “other tools” category) by professional developers, with 59% using it in their work. This is the direct result of the value developers get by using Docker: a great developer experience, a step-function improvement in productivity, the industry’s largest repository of trusted content, and a community to support getting things done.

Your votes and support mean the world to us, and we couldn’t have achieved this without the Docker and Stack Overflow developer communities! Your feedback and enthusiasm drive us to keep improving and innovating.

When Stack Overflow released the results of last year’s 2023 Developer Survey and we learned that Stack Overflow’s community ranked Docker as the #1 most-desired and #1 most-used developer tool, I said that it means we can’t slow down and need to go even faster in our effort to serve developers. Since the 2023 survey, we have continued listening to your needs and have delivered many improvements in speed, security, collaboration, content, and functionality.  

The 2024 survey results highlight a few key themes that resonate deeply with Docker’s mission and feedback we’re hearing directly from our community: Developers want tools that enhance productivity, simplify workflows, and help them with the latest technological advancements — and yes, that includes AI. 

Let’s look at a few key innovations and updates from the past year that reflect how we’re addressing your feedback and the evolving landscape. 

What’s new

We released Docker Scout for actionable insights in the software supply chain, helping developers address security and policy issues at the time of writing code rather than wait for CI results or, much worse, discover issues when an app is in production. We also provide a free Docker Scout Team subscription to all Docker-Sponsored Open Source (DSOS) participants to help ensure more of the content on Docker Hub is secure from the start. Then, we added Docker Scout Health Scores for security grading containers in your Docker Hub repos. We announced Docker Build Cloud to speed up build times. We also welcomed AtomicJar, maker of Testcontainers, to the Docker family.

Docker continues to innovate in bringing the power of the cloud to local development. Specifically, through Docker Desktop developers easily benefit from Docker’s cloud services in their inner loops, including Docker Build Cloud, Docker Scout, Testcontainers Cloud, and Docker Hub. The result? More frequent releases of higher quality, more secure applications. 

Speaking of Docker Desktop, we’ve delivered more than a dozen Docker Desktop releases in the past year, each one providing more capabilities to boost developer productivity, including Docker Debug, Docker Build checks, Docker Init, Builds view, private marketplace for Docker Extensions, Compose Watch, Resource Saver mode, and much more.

And we’ve also shipped Betas of many new capabilities, including GitHub Actions builds, Compose File Viewer, a new terminal feature in Docker Desktop, enterprise-grade Volume Backup to cloud providers, Docker Desktop for Windows on Arm, Docker Desktop support for Red Hat Enterprise Linux, and others.

While rapidly rolling out new features and products, we remain focused on security. In addition to unveiling Docker Scout, our tool designed to enhance the security of the software supply chain, we were happy to announce that we have received our SOC 2 Type 2 attestation and ISO 27001 certification with no exceptions or major non-conformities. 

This past year has been a busy one for open source, and Docker remains committed to actively maintaining projects that are core to the container ecosystem, including Compose, BuildKit, runc, containerd, Moby (Docker Engine), Distribution, and more. As but one example, BuildKit now includes experimental support for Windows containers, expanding its versatility and reach. By investing in these open source projects, Docker and our community together ensure the container ecosystem continues to evolve to better serve developers.

AI/ML advancements 

We know from our community and customers that Docker is already a pivotal part of the AI/ML development ecosystem, and its use in AI/ML is only growing. For example, a year ago there were more than 100 million pulls of AI/ML images in Docker Hub. Since then that number has grown to more than 500 million!

In the past year we’ve also leaned into leveraging AI to help developers innovate faster and smarter. For example, our integration with tools like GitHub Copilot supports rapid onboarding and continuous learning for developers. Additionally, we’ve added an AI-powered assistant to Docker documentation. By leveraging AI-driven assistance, developers can enhance their coding skills, stay updated with the latest trends, and contribute more effectively to their organizations. 

Looking further out, we see AI/ML fundamentally changing how developers work and how applications are built. To explore these quickly evolving spaces together with our community, we are experimenting in public with new techniques and tools in our Docker Labs GenAI series. For example, a recent post explores how to create Dockerfiles with GenAI.

Stay tuned — we have even more AI ideas percolating!

Guides and manuals

Speaking of documentation, our Docs and DevRel teams, with help from our Docker Captains, have been up-leveling our guides and manuals. Whether you’re brand new to the Docker community or have been with us from the beginning, you’ll find guides that can take you from starting with Docker foundational concepts to language-specific, use-case, and deep-dive tutorials. Do you have ideas to contribute? We want to hear from you!  

Thank you, and stay in touch

Stack Overflow’s 2024 Developer Survey highlights the critical role Docker plays in the developer ecosystem. By continually innovating and addressing the needs of our community and customers, we help developers and businesses achieve their goals. As we look to the future, Docker remains dedicated to empowering every developer and team with the best solutions to navigate and thrive in the ever-evolving software development landscape.

On behalf of everyone here at Team Docker: Thank you for your ongoing support and trust in Docker!

3 Ways CARIAD Configures Docker Business for Security and Compliance

By Briana Swift
July 25, 2024 at 13:46

CARIAD, an automotive software and technology company, unites more than 6,000 global experts and aligns major brands in the Volkswagen Group under one software strategy. Founded in 2020, CARIAD provides solutions to securely and compliantly update vehicle fleets, evolving them from mere transport into fully integrated digital experiences. CARIAD’s use of Docker provides a framework for embedding advanced software into existing systems.

As a subsidiary of Volkswagen Group, CARIAD has expertise in complex identity access requirements, including integrating Docker with multiple Active Directory instances. Security and compliance requirements are critical, with added layers of complexity due to environment requirements introduced when developing embedded systems.

Docker Business is a specialized containerization platform for large enterprises, providing features that enhance security, compliance, and scalability. CARIAD leverages Docker Business to integrate Single Sign-On (SSO) and Image Access Management (IAM), which are crucial for meeting their stringent security requirements. These features allow CARIAD to control access to Docker resources effectively, supporting their security and compliance requirements.

Docker and CARIAD logos on wavy blue and green background

Integration with WSL 2 

Docker Desktop makes it simple for CARIAD developers to run Linux containers natively on their Windows machines without the need for a dual-boot setup or a dedicated Linux machine.

Windows Subsystem for Linux 2 (WSL 2) provides a hybrid development environment, with a Linux kernel running in a lightweight virtual machine, fully managed by Windows, yet offering near-native performance. 

Before WSL 2, the original WSL used a translation layer between Windows and the Linux file system, which introduced potential performance bottlenecks, especially for running build scripts or version control operations. WSL 2 introduces a full Linux kernel with a real Linux file system, stored in a virtual disk image. This greatly improves file IO performance and supports a broader range of tools and applications with better Linux system call support.

WSL 2 also improves resource management by dynamically managing memory and CPU resources allocated to the Linux subsystem. This functionality is crucial for CARIAD because it allows efficient scaling of resources based on workload demands, which is particularly important when developing and testing resource-intensive applications.

Docker Desktop integrates well with WSL 2 and provides the capability to execute Docker commands with any Linux distribution installed within WSL 2. This approach enables CARIAD to execute Docker commands within a custom WSL distribution that adheres to their organizational policy requirements.
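As a quick illustration of what this looks like on a developer machine (the distro name below is a placeholder for whatever custom distribution your organization uses):

# List installed distros and their WSL version (the VERSION column should read 2)
wsl -l -v

# Convert a distro to WSL 2 if needed
wsl --set-version Ubuntu-22.04 2

# With Docker Desktop's WSL integration enabled for that distro,
# docker commands inside it talk to the same engine as on the Windows side
wsl -d Ubuntu-22.04 docker version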

Single Sign-On and User Access Management

CARIAD integrates Docker SSO, available in Docker Business, with its existing Azure Active Directory instances to ensure that only authenticated and authorized users access Docker resources, in line with required policies. Beyond the benefits of enterprise SSO itself, enforced sign-in is a prerequisite for properly configuring and enforcing other security measures, like Image Access Management (IAM).

Image Access Management 

CARIAD ensures it uses only authorized images from Docker Hub, enforced through tailored administrative configurations with IAM. This approach manages access levels by group and is a key component in enforcing security protocols, particularly in safeguarding container environments. Properly configured IAM, which can only be enforced once user sign-in is enforced, reduces the risk associated with unauthorized or unsecured images.

This process involves activating IAM, setting permissions that align with user roles and project requirements, and testing to ensure the permissions are working as intended (Figure 1).
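Enforcing sign-in itself is configured with a registry.json file deployed to each machine, for example at C:\ProgramData\DockerDesktop\registry.json on Windows. A minimal sketch, with a placeholder organization name:

{
  "allowedOrgs": ["my-docker-org"]
}

With this file in place, Docker Desktop only lets members of the listed organization sign in and use the application, which in turn allows IAM policies to be enforced.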

The CARIAD team explains the importance of Registry Access Management (RAM) and IAM when using WSL 2 this way: “While WSL 2 seamlessly grants elevated root capabilities within its environment, it is fortunate that these permissions do not extend to SYSTEM rights on the Windows host. However, if both registry and image access management are absent from the Docker Desktop setup, the lack of firewall and anti-malware protection could introduce the potential for a malicious container attack and local privilege escalation.” 

Illustration of process by which a malicious container could be exploited without Image Access Management.
Figure 1: Potential introduction of a malicious container.

Conclusion

CARIAD’s strategies for deploying Docker Business into a secure enterprise environment represent strong choices for any organization managing similar security, compliance, or identity access management requirements. For organizations looking to enhance their development operations, CARIAD’s model offers a blueprint for deploying Docker Desktop to large enterprises.

Using Docker Business features and WSL 2, CARIAD ensures compliance and supports a developer-friendly workflow. Within the stringent requirements necessary for automotive systems, developers at Volkswagen Group work with best-in-class tools and processes to build securely and quickly. CARIAD’s approach provides valuable lessons for enterprises looking to improve their development operations with Docker.

Read more from CARIAD in their case study — Building a Secure and Compliant Framework with Docker at CARIAD — and white paper — Using Docker Desktop in Large-Scale Enterprises — and get inspiration for secure, compliant Docker implementations in the automotive industry.

Learn more

10 Years Since Kubernetes Launched at DockerCon

10 juin 2024 à 17:07

It is not often you can look back and pinpoint a moment when an entire industry changed, and it is rarer still to know you were there to see it firsthand.

On June 10, 2014, on day 2 of the first-ever DockerCon, at the 16:04 mark of his keynote, Google VP of Infrastructure Eric Brewer announced that Google was releasing the open source solution it had built for orchestrating containers: Kubernetes. This was one of those moments. The announcement of Kubernetes began a tectonic shift in how the internet runs at scale; many of the most important applications in the world today would not be possible without Docker and Kubernetes.

2400x1260 kubernetes 10th anniversary

You can watch the announcement on YouTube.

We didn’t know at the time how much Kubernetes would change things. In fact, within those same two days, Apache Mesos, Red Hat’s GearD, Docker Libswarm, and Facebook’s Tupperware all launched as well, triggering what some later called “the Container Orchestration War.” Fast-forward three years, and the community had consolidated on Kubernetes for the orchestration layer and Docker (powered by containerd) for the container format, distribution protocol, and runtime. In 2017, Docker integrated Kubernetes into its desktop and server products, which helped cement Kubernetes’ leadership.

Why was it so impactful? Kubernetes landed at just the right time and solved just the right problems. The number of containers and server nodes in production was increasing exponentially, and the rise of DevOps placed a heavy burden on engineers, who needed solutions that could help manage applications at unprecedented scale. Containers and their orchestration engines were, and continue to be, the lifeblood of modern application deployments because they are the only practical way to meet this need.

We, the Docker team and community, consider ourselves incredibly fortunate to have played a role in this history. To look back and say we had a part in what has been built from that one moment is humbling.

… and the potential of what is yet to come is beyond exciting! Especially knowing that our impact continues today as a keystone to modern application development. Docker enables app development teams to rapidly deliver applications, secure their software supply chains, and do so without compromising the visibility and controls required by the business.

Happy 10th birthday Kubernetes! Congratulations to all who were and continue to be involved in creating this tremendous gift to the software industry.

Learn more

Empowering Developers at Microsoft Build: Docker Unveils Integrations and Sessions

15 mai 2024 à 18:25

We are thrilled to announce Docker’s participation at Microsoft Build, which will be held May 21-23 in Seattle, Washington, and online. We’ll showcase how our deep collaboration with Microsoft is revolutionizing the developer experience. Join us to discover the newest and upcoming solutions that enhance productivity, secure applications, and accelerate the development of AI-driven applications.

Our presence at Microsoft Build is more than just a showcase — it’s a portal to the future of application development. Visit our booth to interact with Docker experts, experience live demos, and explore the powerful capabilities of Docker Desktop and other Docker products. Whether you’re new to Docker or looking to deepen your expertise, our team is ready to help you unlock new opportunities in your development projects.

2400x1260 ms build 2024

Sessions featuring Docker

  • Optimizing the Microsoft Developer Experience with Docker: Dive into our partnership with Microsoft and learn how to leverage Docker in Azure, Windows, and Dev Box environments to streamline your development processes. This session is your key to mastering the inner loop of development with efficiency and innovation.
  • Shifting Test Left with Docker and Microsoft: Learn how to address app quality challenges before the continuous integration stage using Testcontainers Cloud and Docker Debug. Discover how these tools aid in rapid and effective debugging, enabling you to streamline the debugging process for both active and halted containers and create testing efficiencies at scale.
  • Securing Dockerized Apps in the Microsoft Ecosystem: Learn about Docker’s integrated tools for securing your software supply chain in Microsoft environments. This session is essential for developers aiming to enhance security and compliance while maintaining agility and innovation.
  • Innovating the SDLC with Insights from Docker CTO Justin Cormack: In this interview, Docker’s CTO will share insights on advancing the SDLC through Docker’s innovative toolsets and partnerships. Watch on Thursday at 1:45 pm PT from the Microsoft Build stage or on our Featured Partner page.
  • Introducing the Next Generation of Windows on ARM: Experience a special session featuring Docker CTO Justin Cormack as he discusses Docker’s role in expanding the Windows on ARM64 ecosystem, alongside a Microsoft executive.

Where to find us

You can also visit us at Docker booth #FP29 to get hands-on experience and view demos of some of our newest solutions.

If you cannot attend in person, the Microsoft Build online experience is free. Explore our Microsoft Featured Partner page.

We hope you’ll be able to join us at Microsoft Build — in person or online — to explore how Docker and Microsoft are revolutionizing application development with innovative, secure, and AI-enhanced solutions. Whether you attend in person or watch the sessions on-demand, you’ll gain essential insights and skills to enhance your projects. Don’t miss this chance to be at the forefront of technology. We are eager to help you navigate the exciting future of AI-driven applications and look forward to exploring new horizons of technology together.

Learn more

Empower Your Development: Dive into Docker’s Comprehensive Learning Ecosystem

2 avril 2024 à 13:26

Continuous learning is a necessity for developers in today’s fast-paced development landscape. Docker recognizes the importance of keeping developers at the forefront of innovation, and to do so, we aim to empower the developer community with comprehensive learning resources.

Docker has taken a multifaceted approach to developer education by forging partnerships with renowned platforms like Udemy and LinkedIn Learning, investing in our own documentation and guides, and highlighting the incredible learning content created by the developer community, including Docker Captains.

2400x1260 docker developer learning across platforms

Commitment to developer learning

At Docker, our goal is to simplify the lives of developers, which begins with helping developers understand how to maximize the power of Docker tools throughout their projects. We also recognize that developers have different learning styles, so we are taking a diversified approach and delivering this material across an array of platforms and formats, letting developers learn in whichever format best suits them. 

Strategic partnerships for developer learning

Recognizing the diverse learning needs of developers, Docker has partnered with leading online learning platforms — Udemy and LinkedIn Learning. These partnerships offer developers access to a wide range of courses tailored to different expertise levels, from beginners looking to get started with Docker to advanced users aiming to deepen their knowledge. 

For teams already utilizing these platforms for other learning needs, this collaboration places Docker learning in a familiar platform next to other coursework.

  • Udemy: Docker’s collaboration with Udemy highlights an array of Endorsed Docker courses, designed by industry experts. Whether getting a handle on containerization or mastering Docker with Kubernetes, Udemy’s platform offers the flexibility and depth developers need to upskill at their own pace. Today, demand remains high for Docker content across the Udemy platform, with more than 350 courses offered and nearly three million enrollments to date.
  • LinkedIn Learning: Through LinkedIn Learning, developers can dive into curated Docker courses to earn a Docker Foundations Professional Certificate once they complete the program. These resources are not just about technical skills; they also cover best practices and practical applications, ensuring learners are job-ready.

Leveraging Docker’s documentation and guides

Although third-party platforms provide comprehensive learning paths, Docker’s own documentation and guides are indispensable tools for developers. Our documentation is continuously updated to serve as both a learning resource and a reference. From installation and configuration to advanced container orchestration and networking, Docker’s guides are designed to help you find your solution with step-by-step walk-throughs.

If it’s been a while since you’ve checked out Docker Docs, visit docs.docker.com to find manuals, a getting-started guide, and many new use-case guides covering advanced applications, including generative AI and security.  

Learners interested in live sessions can register for upcoming live webinars and training on the Docker Training site. There, you will find sessions where you can interact with the Docker support team and discuss best practices for using Docker Scout and Docker Admin.

The role of community in learning

Docker’s community is a vibrant ecosystem of learners, contributors, and innovators. We are thrilled to see the community creating content, hosting workshops, providing mentorship, and enriching the vast array of Docker learning resources. In particular, Docker Captains stand out for their expertise and dedication to sharing knowledge. From James Spurin’s Dive Into Docker course, to Nana Janashia’s Docker Crash Course, to Vladimir Mikhalev’s blog with guided IT solutions using Docker (just to name a few), it’s clear there’s much to learn from within the community.

We encourage developers to join the community and participate in conversations to seek advice, share knowledge, and collaborate on projects. You can also check out the Docker Community forums and join the Slack community to connect with other members of the community.

Conclusion

Docker’s holistic approach to developer learning underscores our commitment to empowering developers with knowledge and skills. By combining our comprehensive documentation and guides with top learning platform partnerships and an active community, we offer developers a robust framework for learning and growth. We encourage you to use all of these resources together to build a solid foundation of knowledge that is enhanced with new perspectives and additional insights as new learning offerings continue to be added.

Whether you’re a novice eager to explore the world of containers or a seasoned pro looking to refine your expertise, Docker’s learning ecosystem is designed to support your journey every step of the way.

Join us in this continuous learning journey, and come learn with Docker.

Learn more

Building a Video Analysis and Transcription Chatbot with the GenAI Stack

28 mars 2024 à 14:32

Videos are full of valuable information, but tools are often needed to help find it. From educational institutions seeking to analyze lectures and tutorials to businesses aiming to understand customer sentiment in video reviews, transcribing and understanding video content is crucial for informed decision-making and innovation. Recently, advancements in AI/ML technologies have made this task more accessible than ever. 

Developing GenAI technologies with Docker opens up endless possibilities for unlocking insights from video content. By leveraging transcription, embeddings, and large language models (LLMs), organizations can gain deeper understanding and make informed decisions using diverse and raw data such as videos. 

In this article, we’ll dive into a video transcription and chat project that leverages the GenAI Stack, along with seamless integration provided by Docker, to streamline video content processing and understanding. 

2400x1260 building next gen video analysis transcription chatbot with genai stack

High-level architecture 

The application’s architecture is designed to facilitate efficient processing and analysis of video content, leveraging cutting-edge AI technologies and containerization for scalability and flexibility. Figure 1 shows an overview of the architecture, which uses Pinecone to store and retrieve the embeddings of video transcriptions. 

Two-part illustration showing “yt-whisper” process on the left, which involves downloading audio, transcribing it using Whisper (an audio transcription system), computing embeddings (mathematical representations of the audio features), and saving those embeddings into Pinecone. On the right side (labeled "dockerbot"), the process includes computing a question embedding, completing a chat with the question combined with provided transcriptions and knowledge, and retrieving relevant transcriptions.
Figure 1: Schematic diagram outlining a two-component system for processing and interacting with video data.

The application’s high-level service architecture includes the following:

  • yt-whisper: A local service, run by Docker Compose, that interacts with the remote OpenAI and Pinecone services. Whisper is an automatic speech recognition (ASR) system developed by OpenAI, representing a significant milestone in AI-driven speech processing. Trained on an extensive dataset of 680,000 hours of multilingual and multitask supervised data sourced from the web, Whisper demonstrates remarkable robustness and accuracy in English speech recognition. 
  • Dockerbot: A local service, run by Docker Compose, that interacts with the remote OpenAI and Pinecone services. The service takes the question of a user, computes a corresponding embedding, and then finds the most relevant transcriptions in the video knowledge database. The transcriptions are then presented to an LLM, which takes the transcriptions and the question and tries to provide an answer based on this information.
  • OpenAI: The OpenAI API provides an LLM service, which is known for its cutting-edge AI and machine learning technologies. In this application, OpenAI’s technology is used to generate transcriptions from audio (using the Whisper model) and to create embeddings for text data, as well as to generate responses to user queries (using GPT and chat completions).
  • Pinecone: A vector database service optimized for similarity search, used for building and deploying large-scale vector search applications. In this application, Pinecone is employed to store and retrieve the embeddings of video transcriptions, enabling efficient and relevant search functionality within the application based on user queries.

Getting started

The application is a chatbot that can answer questions from a video and provides timestamps that help you find the sources used to answer your question. To get started, you’ll need Docker Desktop (with Docker Compose) along with OpenAI and Pinecone API keys; the following steps walk through the setup.

Clone the repository 

The next step is to clone the repository:

git clone https://github.com/dockersamples/docker-genai.git

The project contains the following directories and files:

docker-genai/
├── docker-bot/
├── yt-whisper/
├── .env.example
├── .gitignore
├── LICENSE
├── README.md
└── docker-compose.yaml

Specify your API keys

In the /docker-genai directory, create a text file called .env, and specify your API keys inside. The following snippet shows the contents of the .env.example file that you can refer to as an example.

#-------------------------------------------------------------
# OpenAI
#-------------------------------------------------------------
OPENAI_TOKEN=your-api-key # Replace your-api-key with your personal API key

#-------------------------------------------------------------
# Pinecone
#--------------------------------------------------------------
PINECONE_TOKEN=your-api-key # Replace your-api-key with your personal API key

Build and run the application

In a terminal, change directory to your docker-genai directory and run the following command:

docker compose up --build

Docker Compose then builds and runs the application based on the services defined in the docker-compose.yaml file. When the application is running, you’ll see the logs of the two services in the terminal.

In the logs, you’ll see the services are exposed on ports 8503 and 8504. The two services are complementary to each other.
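If you want to confirm that both services came up correctly, you can list them from a second terminal with a standard Compose command (the service names and ports shown will come from the project’s docker-compose.yaml):

docker compose ps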

The yt-whisper service is running on port 8503. This service feeds the Pinecone database with videos that you want to archive in your knowledge database. The next section explores the yt-whisper service.

Using yt-whisper

The yt-whisper service is a YouTube video processing service that uses the OpenAI Whisper model to generate transcriptions of videos and stores them in a Pinecone database. The following steps outline how to use the service.

Open a browser and access the yt-whisper service at http://localhost:8503. Once the application appears, specify a YouTube video URL in the URL field and select Submit. The example shown in Figure 2 uses a video from David Cardozo.

Screenshot showing example of processed content with "download transcription" option for a video from David Cardozo on how to "Develop ML interactive gpu-workflows with Visual Studio Code, Docker and Docker Hub."
Figure 2: A web interface showcasing processed video content with a feature to download transcriptions.

Submitting a video

The yt-whisper service downloads the audio of the video, then uses Whisper to transcribe it into a WebVTT (*.vtt) format (which you can download). Next, it uses the “text-embedding-3-small” model to create embeddings and finally uploads those embeddings into the Pinecone database.
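Under the hood, the embedding step boils down to a call to OpenAI’s embeddings endpoint. As a rough sketch of the kind of request the service makes (the input string is illustrative, and OPENAI_TOKEN is the key from your .env file):

curl https://api.openai.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_TOKEN" \
  -d '{
    "model": "text-embedding-3-small",
    "input": "one chunk of the WebVTT transcript"
  }'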

After the video is processed, a video list appears in the web app that informs you which videos have been indexed in Pinecone. It also provides a button to download the transcript.

Accessing Dockerbot chat service

You can now access the Dockerbot chat service on port 8504 and ask questions about the videos as shown in Figure 3.

Screenshot of Dockerbot interaction with user asking a question about Nvidia containers and Dockerbot responding with links to specific timestamps in the video.
Figure 3: Example of a user asking Dockerbot about NVIDIA containers and the application giving a response with links to specific timestamps in the video.

Conclusion

In this article, we explored the exciting potential of GenAI technologies combined with Docker for unlocking valuable insights from video content, and we showed how integrating cutting-edge AI models like Whisper with efficient database solutions like Pinecone empowers organizations to transform raw video data into actionable knowledge. 

Whether you’re an experienced developer or just starting to explore the world of AI, the provided resources and code make it simple to embark on your own video-understanding projects. 

Learn more

Docker Desktop 4.28: Enhanced File Sharing and Security Plus Refined Builds View in Docker Build Cloud

28 février 2024 à 14:00

Docker Desktop 4.28 introduces updates to file-sharing controls, focusing on security and administrative ease. Responding to feedback from our business users, this update brings refined file-sharing capabilities and path allow-listing, aiming to simplify management and enhance security for IT administrators and users alike.

Along with our investment in bringing cloud resources into the local Docker Desktop experience through the Docker Build Cloud Builds view, this release provides a more efficient and flexible platform for development teams.

Docker Desktop 4.28

Introducing enhanced file-sharing controls in Docker Desktop Business 

As we continue to innovate and elevate the Docker experience for our business customers, we’re thrilled to unveil significant upgrades to Docker Desktop’s Hardened Desktop feature. Recognizing the importance of administrative control over Docker Desktop settings, we’ve listened to your feedback and are introducing enhancements that prioritize security and ease of use.

For IT administrators and non-admin users, Docker now offers the much-requested capability to specify and manage file-sharing options directly via Settings Management (Figure 1). This includes:

  • Selective file sharing: Pick your preferred file-sharing implementation directly from Settings > General, choosing between VirtioFS, gRPC FUSE, or osxfs. VirtioFS is only available for macOS 12.5 and above and is turned on by default.
  • Path allow-listing: Precisely control which paths users can share files from, enhancing security and compliance across your organization (see the configuration sketch after Figure 1).
Screenshot of Docker Desktop showing Synchronized file shares page.
Figure 1: Docker Desktop’s enhanced file-sharing settings.
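As a minimal sketch of what path allow-listing can look like: administrators distribute an admin-settings.json file alongside Docker Desktop. The key names below follow Docker’s Settings Management documentation at the time of writing, but treat them as illustrative and verify against the current reference before rolling this out; the paths are placeholders.

{
  "configurationFileVersion": 2,
  "filesharingAllowedDirectories": {
    "value": ["/Users", "/tmp"],
    "locked": true
  }
}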

We’ve also reimagined the Settings > Resources > File Sharing interface to enhance your interaction with Docker Desktop (Figure 2). You’ll notice:

  • Clearer error messaging: Quickly understand and rectify issues with enhanced error messages.
  • Intuitive action buttons: Experience a smoother workflow with redesigned action buttons, making your Docker Desktop interactions as straightforward as possible.
Screenshot of Docker Desktop showing Resources page with options for File Sharing, Synchronized file shares, and Virtual sharing.
Figure 2: Displaying settings management in Docker Desktop to notify business subscribers of their access rights.

These enhancements are not just about improving current functionalities; they’re about unlocking new possibilities for your Docker experience. From increased security controls to a more navigable interface, every update is designed with your efficiency in mind.

Refining development with Docker Desktop’s Builds view update 

Docker Desktop’s previous update introduced Docker Build Cloud integration, aimed at reducing build times and improving build management. In this release, we’re landing incremental updates that refine the Builds view, making it easier and faster to manage your builds.

New in Docker Desktop 4.28:

  • Dedicated tabs: Separates active from completed builds for better organization (Figure 3).
  • Build insights: Displays build duration and cache steps, offering more clarity on the build process.
  • Reliability fixes: Resolves issues with updates for a more consistent experience.
  • UI improvements: Updates the empty state view for a clearer dashboard experience (Figure 4).

These updates are designed to streamline the build management process within Docker Desktop, leveraging Docker Build Cloud for more efficient builds.

Screenshot of Builds view showing tabs for Build history and Active builds.
Figure 3: Dedicated tabs for Build history vs. Active builds to allow more space for inspecting your builds.
Screenshot of Builds view with Active builds tab selected and showing "No builds currently active".
Figure 4: Updated view supporting empty state — no Active builds.

To explore how Docker Desktop and Docker Build Cloud can optimize your development workflow, read our Docker Build Cloud blog post. Experience the latest Builds view update to further enrich your local, hybrid, and cloud-native development journey.

These Docker Desktop updates support improved platform security and a better user experience. By introducing more detailed file-sharing controls, we aim to provide developers with a more straightforward administration experience and secure environment. As we move forward, we remain dedicated to refining Docker Desktop to meet the evolving needs of our users and organizations, enhancing their development workflows and agility to innovate.

Join the conversation and make your mark

Dive into the dialogue and contribute to the evolution of Docker Desktop. Use our feedback form to share your thoughts and let us know how to improve the Hardened Desktop features. Your input directly influences the development roadmap, ensuring Docker Desktop meets and exceeds our community and customers’ needs.

Learn more

8 Top Docker Tips & Tricks for 2024

4 janvier 2024 à 13:09

This post was contributed by Docker Captain Vladimir Mikhalev.

Happy New Year, Docker Fans! I hope your 2024 is off to a great start. Whether you’re a Docker expert or new to the Docker community, you may be wondering about the best ways to optimize or get started quicker on Docker. As a Docker Captain and a Senior DevOps Engineer, I’ve been using Docker for more than six years, and I’m looking forward to some thrilling updates in 2024!  

In this post, I’m excited to share my top 8 tips and tricks for Docker that I’ve gathered through real-world experience and insider knowledge.

banner docker tips

Supercharge productivity with Docker

1. Enable VirtioFS for faster file sharing on Macs.

Remember the days of sluggish file sharing in Docker on Mac? We’d be wrestling with heavy file I/O operations, watching the clock as each sync dragged on. It wasn’t just a test of patience; it was a real bottleneck in our workflow.

But here’s the good news: With Docker Desktop for Mac 4.6, that’s history. Just head over to Settings > General and select VirtioFS.

Select VirtioFS under Settings > General.
Figure 1: Select VirtioFS under Settings > General.

The performance leap is something you have to experience to believe. Everything feels snappier, whether building, running, or updating containerized apps. It’s a breath of fresh air for those of us in fast-paced dev environments where every second counts.

This upgrade has been a massive win for productivity, and it’s just one of the many reasons I’m excited about Docker’s direction in 2024. These kinds of improvements make Docker not just a tool but a powerful ally in our development arsenal.

2. Strategically layer to optimize the Docker Build cache.

Let’s talk about Dockerfile efficiency – something I’ve wrestled with more times than I can count. Back in the day, Docker builds could feel like a slow dance. You’d make a small change in your code and then wait for what felt like an eternity for the build to complete. It was a frequent frustration, especially when iterating rapidly on a small change. The problem? Our Dockerfiles weren’t optimized for efficient caching, leading to unnecessary rebuilds and wasted time.

Here’s a trick I learned: Strategic layering in your Dockerfile can turn the tide. Place those instructions that don’t change often, like installing dependencies, right at the top. Then, put your COPY or ADD commands for your application code lower down. 

This structure is a game-changer. It means Docker can reuse cached layers for the top parts of your Dockerfile, and you’re only rebuilding what’s actually changed. The result? Your build times get slashed, and you spend more time coding and less time waiting.

Another lifesaver is using RUN --mount=type=cache when installing packages. This little gem keeps your package cache intact between builds. No more re-downloading the entire internet every time you build your image. It’s especially handy when you’re working with large dependencies. Implement this, and watch your build efficiency go through the roof.

To give you a better idea, here’s how you might apply these principles in a Dockerfile for a Node.js application:

# syntax=docker/dockerfile:1

# Use an official Node base image
FROM node:14

# Work in a dedicated app directory instead of the image root
WORKDIR /app

# Install dependencies first to leverage the Docker cache
COPY package.json package-lock.json ./

# Use a cache mount for npm install, so unchanged packages aren't downloaded every time
RUN --mount=type=cache,target=/root/.npm \
    npm install

# Copy the rest of your app's source code
COPY . .

# Your app's start command
CMD ["npm", "start"]

This example Dockerfile demonstrates the strategic layering and RUN cache usage in action, showcasing how these practices can significantly optimize your Docker builds.

Adopting these practices transformed my Docker experience. No more watching the spinner while Docker rebuilds the world. Instead, it’s quick iterations, fast feedback, and more productivity. And honestly, that’s the kind of efficiency we live for in our line of work.

3. Avoid the bloat to keep builds efficient. 

In the earlier days of Docker, the sheer size of our builds often tripped me up. It was like packing your entire house for a weekend trip. I’d end up sending tons of unnecessary files to the Docker daemon, resulting in bloated build contexts and painfully slow build times. Not exactly ideal when you’re trying to keep things lean and agile.

The key? Getting smarter with what to include in the build context. In your .dockerignore, specify only the essentials – leave out anything that doesn’t contribute to your final image. This approach is like packing a well-organized suitcase and bringing only what you need. The benefit is twofold: You speed up the build process and reduce resource consumption by sending less data to the Docker daemon. It’s a straightforward yet powerful tweak that has saved us countless hours.
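As a small illustration, a .dockerignore for a typical Node.js project might look like this (the entries are common examples, not a prescription):

# Keep the build context lean: exclude anything the image never needs
.git
node_modules
*.log
.dockerignore
Dockerfile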

Another game-changer has been adopting multi-stage builds in our Dockerfiles. Imagine building a complex app and having to include all the build tools and dependencies in your final image. It’s like taking the construction crew with you after building your house. Instead, with multi-stage builds, you compile and build everything in an initial stage, and then, in a separate stage, you copy over just the necessary artifacts. This results in a much leaner, more efficient final image. It’s not only good practice for keeping image sizes down, but it also means quicker deployments and reduced storage costs.
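And here’s a minimal multi-stage sketch, assuming a hypothetical Go service (the same pattern applies to any compiled app): the first stage carries the full toolchain, and the final image keeps only the compiled artifact.

# syntax=docker/dockerfile:1

# --- Build stage: full toolchain, used only for compiling ---
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# --- Final stage: just the binary, none of the build tools ---
FROM gcr.io/distroless/base-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]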

Implementing these methods transformed how we handle Docker builds. Your builds are faster, your deployments are smoother, and your entire workflow just feels more streamlined.

4. Kickstart your projects with Docker Init.

Remember the old days when starting a new Docker project felt like navigating a maze? We’d often find ourselves fumbling through the initial setup – creating a Dockerfile, figuring out what to include in .dockerignore, setting up compose.yaml, and so on. 

For Docker newbies, this was daunting. Even for seasoned pros, it was a repetitive chore that ate into valuable time. Each new project was like reinventing the wheel; frankly, we had more important things to focus on, like actual coding.

Enter Docker Init. This feature has been a lifesaver for streamlining project setups. It’s like having a personal assistant to handle the groundwork of a new Docker project. 

Just run docker init, and voilà, it sets up the essential scaffolding for your project. You get a .dockerignore to keep unwanted files out, a Dockerfile tailored to your project’s needs, a compose.yaml for managing multi-container setups and even a README.Docker.md for documentation. 

The best part? It’s customizable. For instance, if you’re working on a Node.js app, Docker Init won’t just give you a generic Dockerfile; it’ll tailor it to fit the Node environment and dependencies. This means less tweaking and more doing. It’s not just about saving time; it’s about starting off on the right foot — no more guesswork or boilerplate code. You’re set up for success right from the get-go.
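A typical first run looks something like this (the directory name is a placeholder; docker init walks you through interactive prompts for language, version, and port):

cd my-node-app
docker init
docker compose up --build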

Docker Init has changed the game for us. What used to be a tedious start to every project is now a smooth, streamlined process. It’s like having a launchpad for your Docker projects, ready to take you straight into the heart of development without the initial hassle.

5. Proactively find and fix software vulnerabilities with Docker Scout.

In our constant quest for robust and secure applications, we’ve often encountered a common snag in the DevOps world – keeping a vigilant eye on vulnerabilities across multiple repositories. It’s like trying to track a dozen moving targets simultaneously. In the pre-Docker Scout days, this was a cumbersome task, often leading to oversights and last-minute scrambles to address security gaps.

But here’s where Docker Scout shines, and it’s not just about its powerful ability to detect vulnerabilities. Docker Scout provides a comprehensive, eagle-eyed watch over our entire repository landscape. Since we’ve made Docker Scout an integral part of our workflow, we have increased confidence across our teams and stages that we’re delivering a secure final product.

We started by setting up Docker Scout across all our repositories. (Check out the Docker quickstart guide.) It’s like deploying a network of sentinels, each tasked with keeping a watchful eye on a specific territory. The setup process was straightforward, and once in place, Scout began providing ongoing visibility into the security status of our repositories.
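If you prefer the command line, the Docker Scout CLI gives you the same visibility per image. A quick sketch, with a placeholder image name:

# Severity summary for an image
docker scout quickview my-org/my-app:latest

# Full CVE breakdown and base-image upgrade suggestions
docker scout cves my-org/my-app:latest
docker scout recommendations my-org/my-app:latest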

What I particularly appreciate about Docker Scout is its ongoing visibility feature. It’s like having a dashboard that constantly updates with the latest security intel. We’re not just talking about identifying vulnerabilities; we’re talking about a tool that gives us real-time insights, keeping us informed and ready to act.

And when Docker Scout flags an issue, it doesn’t just leave us hanging with a problem. It guides us through the remediation process. This aspect has been a game-changer. It’s like having an expert by your side, suggesting the best course of action, whether it’s updating a package or reconfiguring a setting. Having that level of guidance is empowering and transforms how we approach security from reactive to proactive.

Integrating Docker Scout in this expansive manner has revolutionized our approach to securing our software supply chain. It’s no longer a check-box activity; it’s an integral part of our DevOps culture. The peace of mind that comes from knowing you have a comprehensive security net over your entire application landscape? Priceless.

Incorporating Docker Scout this way has enhanced our security posture and fundamentally shifted our approach, making a secure software supply chain a seamlessly integrated aspect of our development lifecycle.

Try Docker Scout for yourself.

6. Accelerate your development with Docker Build Cloud.

Imagine you’re working on a Docker project, and each build feels like a long road trip in heavy traffic. Traditional local Docker builds, particularly for substantial projects, can be frustratingly slow and resource-intensive. You’re there, watching the progress bar crawl while your machine groans under the load. It’s like trying to run a race with weights tied to your feet. And let’s not forget the uneven playing field – developers with high-end machines breeze through builds while others with modest setups endure a sluggish pace. This disparity often leads to the infamous “works on my machine” syndrome, creating a rift in the development process.

Enter Docker Build Cloud, a game-changer that’s like swapping out your heavy backpack for a jetpack. By offloading the build process to the cloud, Docker Build Cloud provides a consistent, high-speed build environment for all developers, regardless of their local hardware. It’s the equivalent of giving every developer in your team a top-of-the-line workstation for building their Docker images.
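Getting started is a two-step affair: register a cloud builder once, then point your builds at it. A minimal sketch, where my-org/default stands in for your own organization and builder name:

# One-time: create a builder backed by Docker Build Cloud
docker buildx create --driver cloud my-org/default

# Build with the cloud builder instead of the local one
docker build --builder cloud-my-org-default -t my-org/my-app:latest .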

Optimizing your Dockerfiles for cloud-based builds is key to harnessing the full potential of Docker Build Cloud. Structuring Dockerfile commands for maximum layer caching efficiency and minimizing the build context size are crucial steps. It’s about arranging your Dockerfile instructions to leverage shared caches and parallel build capabilities, akin to streamlining your development process for maximum efficiency. I recall a time when reorganizing our Dockerfile structure reduced the build time of a significant project by half, transforming a cumbersome process into a swift and efficient one.

Monitoring build times and cache usage is equally crucial. By keeping a close eye on these aspects, you can pinpoint any inefficiencies or bottlenecks, allowing for timely tweaks and adjustments. During one of our high-traffic periods, we noticed a spike in build times. By analyzing cache usage and build patterns, we identified a misconfigured step in our Dockerfile, which, once resolved, brought our build times back to optimal levels.

Embracing Docker Build Cloud marks a significant shift in your development workflow. It’s not just about speeding up builds; it’s about creating a harmonious and efficient development environment. Implementing multi-stage builds and regularly updating base images have further streamlined our processes, ensuring that our builds are not only fast but also secure and up-to-date.

Your team can now enjoy quick iterations and efficient resource utilization, elevating productivity to new heights. Docker Build Cloud transforms the building process from a chore into an experience marked by speed and efficiency, ensuring that your projects are not just built but crafted swiftly and seamlessly in a state-of-the-art cloud environment. This shift to Docker Build Cloud is more than an upgrade; it’s a new way of thinking about Docker builds, aligning perfectly with the agility and dynamism needed in modern software development.

7. Resolve code issues faster with Docker Debug.

Troubleshooting sometimes feels like trying to solve a puzzle with missing pieces. You’ve likely been there – a bug shows up, and you’re diving deep into logs and configurations, trying to replicate the issue. It’s a bit like detective work, where every clue matters, but you’re not quite sure where the next clue is. This process can be time-consuming and, frankly, a bit of a headache, especially when the issues are elusive or environment-specific.

But here’s where Docker Debug steps in and changes the game. It’s like being handed a magnifying glass and a detailed map in the midst of a complicated treasure hunt. Docker Debug extends the Docker CLI with a suite of troubleshooting tools at your fingertips. It’s designed to make the debugging process less of a trial-and-error journey and more of a straight path to solutions.

Integrating Docker Debug into your regular debugging process is like adding a new set of high-tech tools to your toolkit. You get features for both local and remote debugging, which are invaluable when you’re dealing with issues that are hard to pin down. For instance, the ability to view logs in real-time or execute commands within containers is like having a direct line to what’s happening inside your Docker environment. This direct access means you can see exactly what’s going wrong and where rather than making educated guesses.
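In practice, attaching a debug shell is a single command. A quick sketch, with placeholder names:

# Attach a shell with common debugging tools to a running container
docker debug my-running-container

# It also works on images, which helps when reproducing issues in isolation
docker debug my-org/my-app:latest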

Using Docker Debug helps you replicate and diagnose issues in environments that mimic both local and production settings. This versatility is crucial because a bug that pops up in a production environment might not always show in a local one and vice versa. It’s akin to having the ability to test your car on both race tracks and city roads – you get a complete picture of performance across different conditions.

Implementing structured logging in your applications, for instance, turns your logs into a coherent story, making it easier for Docker Debug to guide you to the heart of the problem. Regularly performing health checks on your containers using Docker Debug’s tools is akin to having a routine check-up, ensuring everything runs smoothly.

When you face a network issue or a memory leak, Docker Debug becomes your go-to tool. It allows you to replicate the exact environment and dive deep into the container, inspecting processes, network connections, or even running a debugger on the application process. It’s like having a surgical tool to dissect and understand your application’s behavior under various conditions.

The real beauty of Docker Debug lies in how it leads to quicker resolutions of complex issues. You’re not just looking at surface-level symptoms; you’re able to dive deep and understand root causes. It’s essentially X-ray vision for your Docker projects. No more prolonged downtime or lengthy bug hunts; with Docker Debug, you’re equipped to identify, understand, and resolve issues with precision and speed.

In essence, incorporating Docker Debug into your workflow is more than just an upgrade; it’s a transformative step towards more efficient, effective, and less stressful troubleshooting. It’s about turning what used to be a daunting task into a more manageable, even straightforward, part of your development process. With Docker Debug, you’re not only fixing issues faster, but you’re also gaining insights that can prevent these issues from happening in the first place. It’s a strategic move that elevates your Docker game, ensuring your projects are functional, robust, and resilient.

8. Test against real instances with Testcontainers.

Testing in the world of Docker can often feel like navigating through a dense forest with just a compass. You’re trying your best to simulate real-world conditions, but there’s always that feeling that something’s missing. It’s like preparing for a marathon on a treadmill – useful, but not quite the same as hitting the pavement.

Enter Testcontainers, a lifesaver that’s turned our testing approach on its head, especially with Docker’s acquisition of AtomicJar. Imagine having the ability to spin up real databases, message brokers, or any other service your app interacts with, all within your test suite. It’s like suddenly having access to a full-scale rehearsal studio instead of practicing in your garage.

Testcontainers allow us to bring production-like environments right into our automated tests. We’re talking about spinning up a PostgreSQL container for database tests or RabbitMQ for messaging. This shift has been monumental – we’re now testing under conditions that closely mirror what we’ll encounter in production.

We’ve seamlessly integrated Testcontainers into our CI/CD pipeline. This means every build is tested against real instances, ensuring that the tests passing on a developer’s machine will pass in production, too. It’s akin to having an all-weather test track available any time we need it.

Let me paint a picture with a real scenario we faced. We had an intermittent issue where everything worked fine in development but fell apart in production. Sound familiar? We set up Testcontainers with the same version of the database as in production, and suddenly the problem was reproducible. And diagnosable. And fixable. It was the kind of turning point that transforms night-long debugging sessions into quick fixes.

Embracing Testcontainers is more than just adopting a new tool; it’s a paradigm shift in how we do testing. It ensures that our tests are not just passing but passing in a way that gives us confidence about how they’ll behave in the real world.

So, my fellow Docker aficionados, if you haven’t already, dive into the world of Testcontainers. It’s not just about making your tests more reliable; it’s about making your entire development lifecycle more predictable, efficient, and aligned with the realities of production environments. It’s one of those tools that, once you start using, you’ll wonder how you ever managed without it.

Get started with Testcontainers and see what you think.

Conclusion

These are the top tips and tricks that have revolutionized the way my team and I use Docker. Whether you’re just starting out or you’ve been in the Docker game for a while, I hope these insights help you as much as they’ve helped us. 

If you’re the kind of developer who wants to be the first to hear about new features and help improve the Docker experience, sign up to be part of the Developer Preview Program. You can also join the community Slack, where you can chat with other Docker developers and share your own tips and tricks!

We wish you a happy 2024! Keep experimenting, and happy Dockerizing!

Learn more

Docker 2023: Milestones, Updates, and What’s Next

20 décembre 2023 à 14:07

We’ve had an exciting year at Docker, with loads of product news and announcements. Don’t worry if you couldn’t keep up with the pace of our news and product releases. We’ve rounded up highlights from 2023 and look ahead to how we plan to stay the #1 most-used developer tool as we roll into 2024.

banner what you mightve missed from docker in 2023

Docker milestones & performance improvements

Docker Desktop updates

We’ve been hard at work enhancing Docker Desktop this year. Among the notable highlights:

Performance milestones

Read “Docker’s Journey Toward Enabling Lightning-Fast Developer Innovation: Unveiling Performance Milestones” to learn about:

  • 75% startup time speed improvements
  • 85x improvement in upload speed
  • 650% improvement in image download speeds
  • 71% reduction in build time
  • Resource saver mode saves 38,500 CPU hours daily. 

Download the latest Docker Desktop release to take advantage of the performance improvements.

Simplifying software supply chain management

We’ve simplified software supply chain management for developers with Docker Scout. Docker Scout policies enable teams to identify, prioritize, and fix their software quality issues at the point of creation to meet their organization’s reliability and security standards while accelerating the speed of execution and innovation. 

Learn how to achieve security and compliance goals with policy guardrails in Docker Scout. Visit the Docker Scout product page to learn more.

20 new Docker extensions

Twenty new Docker extensions were added to the Docker extension marketplace in 2023. We highlighted a few extensions on the Docker blog, including Kubescape, NebulaGraph, Gefyra, LocalStack, and Grafana. Explore Docker Hub to discover more extensions, and use the Docker Extensions SDK to create and share your own.

New Docker features 

We also announced:

All things AI/ML

2023 will be known as the year of AI/ML. For 2024, our investments in AI promise to bring new services and functionality to Docker customers. Recent announcements include:

Also check out our blog post “Why Are There More Than 100 Million Pull Requests for AI/ML Images on Docker Hub?” to learn how Docker is providing a powerful tool for AI/ML development.

Expanding developer experiences

AtomicJar joins Docker

In December, we were excited to welcome AtomicJar, the makers of Testcontainers, to the Docker family. “Docker already accelerates the ‘inner loop’ app development steps — build, verify (through Docker Scout), run, debug, and share — and now, with AtomicJar and Testcontainers, we’re adding ‘test,’” explains Docker CEO Scott Johnston. As a result, developers using Docker will be able to deliver quality applications faster and with less effort. Read our announcement blog post and FAQ to learn more about AtomicJar and Testcontainers.

Mutagen joins Docker

In June, we announced the acquisition of Mutagen, the company behind the open source Mutagen file synchronization and networking technologies that enable high-performance remote development. The Mutagen File Sync feature of Docker Desktop takes file sharing to new heights with up to a 16.5x improvement in performance. To try it and help influence Docker’s future, sign up for the Docker Desktop Preview Program.

Microsoft Dev Box and Docker Desktop

We announced our partnership with the Microsoft Dev Box team to bring additional benefits to developer onboarding, environment set-up, security, and administration with Docker Desktop. You can navigate to the Azure Marketplace to download the Docker Desktop-Dev Box compatible image and start developing in the cloud with a native experience. Additionally, this image can be activated with your current subscription, or you can buy a Docker Business subscription directly on Azure Marketplace.

Docker and Snowflake collaboration

At Snowflake BUILD, we announced Docker Desktop with Snowpark Container Services (private preview). Watch the session to learn more about accelerating deployments of data workloads with Docker and Snowpark. 

Docker in action

Customer highlights from 2023 include:

What’s next

In October at DockerCon, Docker and Udemy announced a partnership to offer developers accessible learning paths to further their Docker education. Read the announcement blog post to learn more about what we’ve planned.

Want to dive deeper into Docker? DockerCon videos are available now on YouTube. 

Do your New Year goals include expanding your Docker expertise? Watch the on-demand webinar Docker Fundamentals: Get the Most Out of Docker.

Check out our public roadmap to help steer the future of Docker.

Thank you to our community of developers, Docker Captains and Community Leaders, customers, and partners! We look forward to our continued work building our future together in the New Year. 

Learn more


Announcing the Docker AI/ML Hackathon 2023 Winners

5 décembre 2023 à 16:44

The week of DockerCon 2023 in Los Angeles, we announced the kick-off of the Docker AI/ML Hackathon. The hackathon ran as a virtual event from October 3 to November 7 with support from partners including DataStax, Livecycle, Navan.ai, Neo4j, and OctoML. Leading up to the submission deadline, we ran a series of webinars on topics ranging from getting started with Docker Hub to setting up computer vision AI models on Docker, and more. You can watch the collection of webinars on YouTube.

banner hackathon announcement winners

The Docker AI/ML Hackathon encouraged participants to build solutions that were innovative, applicable in real life, use Docker technology, and have an impact on developer productivity. We made a lot of announcements at DockerCon, including the new GenAI Stack, and we couldn’t wait to see how developers would put this to work in their projects.  

Participants competed for US$ 20,000 in cash prizes and exclusive Docker swag. Judging was based on criteria such as applicability, innovativeness, incorporation of Docker tooling, and impact on the developer experience and productivity. Read on to learn who took home the top prizes.

The winners

1st place

Signal0ne — This project automates insights from failed containers and anomalous resource usage through anomaly detection algorithms and a Docker Desktop extension. Developed using Python and Angular, the Signal0ne tool provides rapid, accurate log analysis, even enabling self-debugging. The project’s key achievements include quick issue resolution for experienced engineers and enhanced debugging capabilities for less experienced ones.

2nd place

SeamlessML: Docker-Powered Serverless Model Orchestration — SeamlessML addresses the AI model deployment bottleneck by providing a simplified, scalable, and cost-effective solution. Leveraging Docker and serverless technologies, it enables easy deployment of machine learning models as scalable API endpoints, abstracting away complexities like server management and load balancing. The team successfully reduced deployment time from hours to minutes and created a local testing setup for confident cloud-like deployments.

3rd place

Dionysus — Dionysus is a developer collaboration platform that streamlines teamwork through automatic code documentation, efficient codebase search, and AI-powered meeting transcription. Built with a microservice architecture using NextJS for the frontend and a Python backend API, Docker containerization, and integration with GitHub, Dionysus simplifies development workflows. The team overcame challenges in integrating AI effectively, ensuring real-time updates and creating a user-friendly interface, resulting in a tool that automates code documentation, facilitates contextual code search, and provides real-time AI-driven meeting transcription.

Honorable mentions

Honorable mention winners took home swag prizes. We received so many fantastic submissions that we awarded honorable mentions to four more teams than originally planned!

What’s next?

Check out all project submissions on the Docker AI/ML Hackathon gallery page. Also, check out and contribute to the GenAI Stack project on GitHub and sign up to join the Docker AI Early Access program. We can’t wait to see what projects you create.

We had so much fun seeing the creativity that came from this hackathon. Stay tuned until the next one!

Learn more

Docker State of Application Development Survey 2023: Share Your Thoughts on Development

Par : Jake Levirne
October 20, 2023 at 13:38

Welcome to the second annual Docker State of Application Development survey!

Please help us better understand and serve the developer community with just 20 minutes of your time. We want to know where developers are focused, what they’re working on, and what is most important to them. Your participation and input will help us build the best products and experiences for you.

For example, in Docker’s 2022 State of Application Development survey, we found that the task for which Docker users most often turn to support or documentation is creating a Dockerfile (reported by 60% of respondents). Among other improvements, this finding helped spur the development of Docker AI.

We also found that 59% of respondents use Udemy for online courses and certifications, so we have partnered with Udemy to make learning and using Docker the best and most streamlined experience possible.

Take the Docker State of Application Development survey now!

By participating in the survey, you will be entered into a raffle for a chance to win* one of several prizes.

The survey is open from October 20, 2023 (7 AM PST) to November 20, 2023 (11:59 PM PST).

We’ll choose the winners randomly from those who complete the survey with meaningful answers. Winners will be notified via email on December 11, 2023.

The Docker State of Application Development survey only takes about 20 minutes to complete. We appreciate every contribution and opinion. Your voice counts!


*Docker State of Application Development Promotion Official Rules.

Announcing Udemy + Docker Partnership

Par : Walker Stone
October 4, 2023 at 16:25

Docker and Udemy announced a new partnership at DockerCon to give developers a clear, defined, and accessible path for learning how to use Docker, best practices, advanced concepts, and everything in between. As the #1-rated online course platform for learning how to code (as ranked by Stack Overflow), Udemy will help supply Docker’s 20 million active developers with specific course content and customized learning paths, ensuring they have access to the latest training materials on how to best use Docker tools.

Instructors on Udemy have in-depth knowledge and experience about Docker’s suite of development tools, services, trusted content, and automations. As a leading destination for online learning and skill development, Udemy offers course content that is accessible, inclusive, and attainable for a broad range of developers. This Docker + Udemy partnership will establish a key destination for developers and hobbyists who want to further their Docker education. Together, Docker and Udemy will enhance their communities with shared standards, education paths, and credibility.

This partnership will bring Docker educational content together into easy-to-navigate learning paths that help developers prepare for future certification exams and demonstrate skills mastery. Additionally, the platform aims to create a streamlined way for developers to gain knowledge, receive badges, and stay current on the latest content. That includes faster access to training on new Docker features: Udemy instructors will be invited to become Docker Certified Instructors, previewing new features and preparing training content for distribution the moment an update goes live.

These courses and their curricula will also be vetted by Docker and experts from the Docker community. And, in the true spirit of open source, these curricula will be made publicly available for all content creators to use.  

In the coming months, we will invite members of the Docker community who are experienced instructors and content creators to create Docker courses on Udemy or bring their existing content into our learning paths. We are thrilled to bring our community into this endeavor and to amplify its visibility.

Stay tuned for more details on this partnership soon. To get started today and gain access to Udemy’s collection of more than 350 Docker courses, developers can visit: https://www.udemy.com/topic/docker/.

Learn more

Announcing Docker AI/ML Hackathon 

October 3, 2023 at 15:59

With the return of DockerCon, held October 4-5 in Los Angeles, we’re excited to announce the kick-off of a Docker AI/ML Hackathon. Join us at DockerCon — in-person or virtually — to learn about the latest Docker product announcements. Then, bring your innovative artificial intelligence (AI) and machine learning (ML) solutions to life in the hackathon for a chance to win cool prizes.

The Docker AI/ML Hackathon is open from October 3 – November 7, 2023. DockerCon in-person attendees are invited to the dedicated hackspace, where you can chat with fellow developers, Dockhands, and our partners DataStax, Navan.ai, Neo4j, OctoML, and Ollama.

We’ll also host virtual webinars, Q&A, and engaging chats throughout the next five weeks to keep the ideas flowing.

Register for the Docker AI/ML Hackathon to participate and to be notified of event activities.

Hackathon tips

Docker AI/ML Hackathon participants are encouraged to build solutions that are innovative, applicable in real life, built with Docker technology, and impactful for developer productivity. Submissions can also be non-code proof-of-concepts, extensions that improve Docker workflows, or integrations that improve existing AI/ML solutions.

Solutions should be AI/ML projects or models built using Docker technology and distributed through Docker Hub, AI/ML integrations into Docker products that improve the developer experience, or extensions of Docker products that make working with AI/ML more productive.
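
As one example of that first path, here’s a hedged sketch of building and publishing a model image with the Docker SDK for Python; yourname/yourmodel is a placeholder repository, and this assumes you’ve already authenticated with Docker Hub via docker login.

```python
# Illustrative sketch: build an image from a local Dockerfile and push it to
# Docker Hub. The repository name below is a placeholder.
import docker

client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag="yourname/yourmodel:latest")

# Push it to Docker Hub so other developers can pull and run it.
for line in client.images.push("yourname/yourmodel", tag="latest",
                               stream=True, decode=True):
    status = line.get("status")
    if status:
        print(status)
```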

Submissions should be a working application or a non-code proof of concept. We would like to see submissions as close to a real-world implementation as possible, but we will accept submissions that are not fully functional if they include a strong proof of concept. Additionally, all submissions should include a 3-5 minute video that showcases the hack along with background and context (we will not judge the submission on the quality or editing of the video itself).

After submitting your solution, you’ll be in the running for US$20,000 in cash prizes and exclusive Docker swag. Judging will be based on criteria such as the applicability and innovativeness of the solution, incorporation of Docker tooling, and impact on the developer experience and productivity.

Get started 

Follow the #DockerHackathon hashtag on social media platforms and join the Docker AI/ML Hackathon Slack channel to connect with other participants.

Check out the site for full details about the Docker AI/ML Hackathon and register to start hacking today! 

Submissions close on November 7, 2023, at 5 PM Pacific Time (November 8 at 1 AM UTC).

Learn more
