-
Collabnix
- Master Terraform: Your Essential Toolbox for a Clean, Secure, and Scalable Infrastructure
Terraform is an open-source Infrastructure as Code (IaC) tool from HashiCorp that allows you to define and provision infrastructure using configuration files, enabling automation and management of resources across various cloud providers and on-premises environments. FYI: IBM acquired HashiCorp, the creator of Terraform, in a deal valued at $6.4 billion, which […]
-
Docker
Powered by Docker: Streamlining Engineering Operations as a Platform Engineer
Powered by Docker is a series of blog posts featuring use cases and success stories from Docker partners and practitioners. This story was contributed by Neal Patel from Siimpl.io. Neal has more than ten years of experience developing software and is a Docker Captain.
Background
As a platform engineer at a mid-size startup, I’m responsible for identifying bottlenecks and developing solutions to streamline engineering operations to keep up with the velocity and scale of the engineering organization. In this post, I outline some of the challenges we faced with one of our clients, how we addressed them, and provide guides on how to tackle these challenges at your company.
One of our clients faced critical engineering challenges, including poor synchronization between development and CI/CD environments, slow incident response due to inadequate rollback mechanisms, and fragmented telemetry tools that delayed issue resolution. Siimpl implemented strategic solutions to enhance development efficiency, improve system reliability, and streamline observability, turning obstacles into opportunities for growth.
Let’s walk through the primary challenges we encountered.
Inefficient development and deployment
- Problem: We lacked parity between developer tooling and CI/CD tooling, which made it difficult for engineers to test changes confidently.
- Goal: We needed to ensure consistent environments across development, testing, and production.
Unreliable incident response
- Problem: If a rollback was necessary, we did not have the proper infrastructure to accomplish this efficiently.
- Goal: We wanted to be able to easily revert to stable versions in case of deployment issues.
Lack of comprehensive telemetry
- Problem: Our SRE team created tooling to simplify collecting and publishing telemetry, but distribution and upgradability were poor. Also, we found adoption to be extremely low.
- Goal: We needed to standardize how we configure telemetry collection, and simplify the configuration of auto-instrumentation libraries so the developer experience is turnkey.
Solution: Efficient development and deployment

CI/CD configuration with self-hosted GitHub runners and Docker Buildx
We had a requirement for multi-architecture support (arm64/amd64), which we initially implemented in CI/CD with Docker Buildx and QEMU. However, we noticed an extreme dip in performance due to the emulated architecture build times.
We were able to reduce build times by almost 90% by ditching QEMU (emulated builds), and targeting arm64 and amd64 self-hosted runners. This gave us the advantage of blazing-fast native architecture builds, but still allowed us to support multi-arch by publishing the manifest after-the-fact.
Here’s a working example of the solution we will walk through: https://github.com/siimpl/multi-architecture-cicd
If you’d like to deploy this yourself, there’s a guide in the README.md.
Prerequisites
This project uses the following tools:
- Docker Build Cloud (included in all Docker paid subscriptions)
- DBC cloud driver
- GitHub/GitHub Actions
- A managed container orchestration service like Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Google Kubernetes Engine (GKE)
- Terraform
- Helm
Because this project uses industry-standard tooling like Terraform, Kubernetes, and Helm, it can be easily adapted to any CI/CD or cloud solution you need.
Key features
The secret sauce of this solution is provisioning the self-hosted runners in a way that allows our CI/CD to specify which architecture to execute the build on.
The first step is to provision two node pools — an amd64 node pool and an arm64 node pool, which can be found in aks.tf. In this example, the node_count is fixed at 1 for both node pools, but for better scalability/flexibility you can also enable autoscaling for a dynamic pool.
resource "azurerm_kubernetes_cluster_node_pool" "amd64" {
name = "amd64pool"
kubernetes_cluster_id = azurerm_kubernetes_cluster.cicd.id
vm_size = "Standard_DS2_v2" # AMD-based instance
node_count = 1
os_type = "Linux"
tags = {
environment = "dev"
}
}
resource "azurerm_kubernetes_cluster_node_pool" "arm64" {
name = "arm64pool"
kubernetes_cluster_id = azurerm_kubernetes_cluster.cicd.id
vm_size = "Standard_D4ps_v5" # ARM-based instance
node_count = 1
os_type = "Linux"
tags = {
environment = "dev"
}
}
Next, we need to update the self-hosted runners’ values.yaml to have a configurable nodeSelector. This will allow us to deploy one runner scale set to the arm64pool and one to the amd64pool.
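As a rough illustration, here is what such a nodeSelector override might look like in values.yaml, assuming the gha-runner-scale-set Helm chart is used for the runner scale sets; the GitHub URL, labels, and pool names below are placeholders rather than the repository's actual values:

# values.yaml override for the arm64 runner scale set (illustrative sketch)
githubConfigUrl: "https://github.com/your-org"   # org or repo to register runners against (placeholder)
# githubConfigSecret with a PAT or GitHub App credentials is also required; omitted here
runnerScaleSetName: "arm64pool"
template:
  spec:
    nodeSelector:
      kubernetes.io/arch: arm64    # schedule runner pods onto arm64 nodes
      agentpool: arm64pool         # AKS labels nodes with their node pool name
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]

A second override with runnerScaleSetName set to amd64pool and kubernetes.io/arch set to amd64 would be deployed for the other pool.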
Once the Terraform resources are successfully created, the runners should be registered to the organization or repository you specified in the GitHub config URL. We can now update the REGISTRY values for the emulated-build and the native-build.
After creating a pull request with those changes, navigate to the Actions tab to witness the results.

You should see two jobs kick off: one using the emulated build path with QEMU, and the other using the self-hosted runners for native-architecture builds. Depending on cache hits and the Dockerfile being built, the performance improvement can be up to 90%. Even beyond this substantial improvement, utilizing Docker Build Cloud can improve performance by up to 95%. More importantly, you can reap the benefits during development builds! Take a look at the docker-build-cloud.yml workflow for more details. All you need is a Docker Build Cloud subscription and a cloud driver to take advantage of the improved pipeline.
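For context, a native multi-arch workflow along these lines could look like the following sketch; the runner labels, registry variable, and image names are assumptions for illustration, not the exact contents of the example repository's workflows:

name: native-build

on:
  pull_request:

env:
  REGISTRY: docker.io/your-org   # placeholder registry/namespace

jobs:
  build:
    strategy:
      matrix:
        include:
          - arch: amd64
            runner: amd64pool    # assumed label of the amd64 runner scale set
          - arch: arm64
            runner: arm64pool    # assumed label of the arm64 runner scale set
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      # registry login is omitted; assume credentials are already configured on the runners
      - name: Build and push the per-architecture image
        run: |
          docker buildx build \
            --platform linux/${{ matrix.arch }} \
            --tag "$REGISTRY/myapp:${{ github.sha }}-${{ matrix.arch }}" \
            --push .

  publish-manifest:
    needs: build
    runs-on: ubuntu-latest
    steps:
      # stitch the per-architecture images into one multi-arch tag after the fact
      - name: Create multi-arch manifest
        run: |
          docker buildx imagetools create \
            --tag "$REGISTRY/myapp:${{ github.sha }}" \
            "$REGISTRY/myapp:${{ github.sha }}-amd64" \
            "$REGISTRY/myapp:${{ github.sha }}-arm64"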
Getting Started
1. Generate GitHub PAT
2. Update the variables.tf
3. Initialise AZ CLI
4. Deploy Cluster
5. Create a PR to validate pipelines
README.md for reference
Reliable Incident Response
Leveraging SemVer Tagged Containers for Easy Rollback
Recognizing that deployment issues can arise unexpectedly, we needed a mechanism to quickly and reliably rollback production deployments. Below is an example workflow for properly rolling back a deployment based on the tagging strategy we implemented above.
- Rollback Process:
- In case of a problematic build, deployment was rolled back to a previous stable version using the tagged images.
- AWS CLI commands were used to update ECS services with the desired image tag:
on:
  workflow_call:
    inputs:
      image-version:
        required: true
        type: string

jobs:
  rollback:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Rollback to previous version
        run: |
          aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment --image ${{ secrets.REGISTRY }}/myapp:${{ inputs.image-version }}
Comprehensive Telemetry
Configuring Sidecar Containers in ECS for Aggregating/Publishing Telemetry Data (OTEL)
As we adopted OpenTelemetry to standardize observability, we quickly realized that adoption was one of the toughest hurdles. As a team, we decided to bake as much configuration as possible into the infrastructure (Terraform modules) so that we could easily distribute and maintain observability instrumentation.
- Sidecar Container Setup:
- Sidecar containers were defined in the ECS task definitions to run OpenTelemetry collectors.
- The collectors were configured to aggregate and publish telemetry data from the application containers.
- Task Definition Example:
{
  "containerDefinitions": [
    {
      "name": "myapp",
      "image": "myapp:1.0.0",
      "essential": true,
      "portMappings": [{ "containerPort": 8080 }]
    },
    {
      "name": "otel-collector",
      "image": "otel/opentelemetry-collector:latest",
      "essential": false,
      "portMappings": [{ "containerPort": 4317 }],
      "environment": [
        { "name": "OTEL_RESOURCE_ATTRIBUTES", "value": "service.name=myapp" }
      ]
    }
  ],
  "family": "my-task"
}
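For completeness, the sidecar's collector is driven by a configuration file. A minimal sketch, assuming an OTLP receiver and a generic OTLP/HTTP backend (the endpoint below is a placeholder), might look like this:

# otel-collector-config.yaml (minimal sketch)
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # matches the containerPort exposed by the sidecar
processors:
  batch: {}                      # batch telemetry before export
exporters:
  otlphttp:
    endpoint: https://otel-backend.example.com   # placeholder backend endpoint
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]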
Configuring Multi-Stage Dockerfiles for OpenTelemetry Auto-Instrumentation Libraries (Node.js)
At the application level, configuring the auto-instrumentation posed a challenge, since most applications varied in their build process. By leveraging multi-stage Dockerfiles, we were able to standardize the way we initialized the auto-instrumentation libraries across microservices. We were primarily a Node.js shop, so below is an example Dockerfile for that.
- Multi-Stage Dockerfile:
- The Dockerfile is divided into stages to separate the build environment from the final runtime environment, ensuring a clean and efficient image.
- OpenTelemetry libraries are installed in the build stage and copied to the runtime stage:
# Stage 1: Build stage
FROM node:20 AS build
WORKDIR /app
COPY package.json package-lock.json ./
# package.json defines otel libs (ex. @opentelemetry/node @opentelemetry/tracing)
RUN npm install
COPY . .
RUN npm run build
# Stage 2: Runtime stage
FROM node:20
WORKDIR /app
COPY --from=build /app /app
CMD ["node", "dist/index.js"]
Results
By addressing these challenges, we were able to reduce build times by ~90%, which alone dropped our DORA metrics for lead time for changes and time to restore by ~50%. With the rollback strategy and telemetry changes, we were able to reduce our Mean Time to Detect (MTTD) and Mean Time to Resolve (MTTR) by ~30%. We believe these could reach 50-60% with tuning of alerts and the addition of runbooks (automated and manual).
- Enhanced Development Efficiency: Consistent environments across development, testing, and production stages sped up the development process, and the native-architecture build solution delivered roughly 90% faster build times.
- Reliable Rollbacks: Quick and efficient rollbacks minimized downtime and maintained system integrity.
- Comprehensive Telemetry: Sidecar containers enabled detailed monitoring of system health and security without impacting application performance, and were baked right into the infrastructure developers were deploying. Auto-instrumentation of the application code was drastically simplified by adopting our standardized Dockerfiles.
Siimpl: Transforming Enterprises with Cloud-First Solutions
With Docker at the core, Siimpl.io’s solutions demonstrate how teams can build faster, more reliable, and scalable systems. Whether you’re optimizing CI/CD pipelines, enhancing telemetry, or ensuring secure rollbacks, Docker provides the foundation for success. Try Docker today to unlock new levels of developer productivity and operational efficiency.
Learn more from our website or contact us at solutions@siimpl.io
-
Docker
Mastering Docker and Jenkins: Build Robust CI/CD Pipelines Efficiently
Hey there, fellow engineers and tech enthusiasts! I’m excited to share one of my favorite strategies for modern software delivery: combining Docker and Jenkins to power up your CI/CD pipelines.
Throughout my career as a Senior DevOps Engineer and Docker Captain, I’ve found that these two tools can drastically streamline releases, reduce environment-related headaches, and give teams the confidence they need to ship faster.
In this post, I’ll walk you through what Docker and Jenkins are, why they pair perfectly, and how you can build and maintain efficient pipelines. My goal is to help you feel right at home when automating your workflows. Let’s dive in.

Brief overview of continuous integration and continuous delivery
Continuous integration (CI) and continuous delivery (CD) are key pillars of modern development. If you’re new to these concepts, here’s a quick rundown:
- Continuous integration (CI): Developers frequently commit their code to a shared repository, triggering automated builds and tests. This practice prevents conflicts and ensures defects are caught early.
- Continuous delivery (CD): With CI in place, organizations can then confidently automate releases. That means shorter release cycles, fewer surprises, and the ability to roll back changes quickly if needed.
Leveraging CI/CD can dramatically improve your team’s velocity and quality. Once you experience the benefits of dependable, streamlined pipelines, there’s no going back.
Why combine Docker and Jenkins for CI/CD?
Docker allows you to containerize your applications, creating consistent environments across development, testing, and production. Jenkins, on the other hand, helps you automate tasks such as building, testing, and deploying your code. I like to think of Jenkins as the tireless “assembly line worker,” while Docker provides identical “containers” to ensure consistency throughout your project’s life cycle.
Here’s why blending these tools is so powerful:
- Consistent environments: Docker containers guarantee uniformity from a developer’s laptop all the way to production. This consistency reduces errors and eliminates the dreaded “works on my machine” excuse.
- Speedy deployments and rollbacks: Docker images are lightweight. You can ship or revert changes at the drop of a hat — perfect for short delivery cycles where minimal downtime is crucial.
- Scalability: Need to run 1,000 tests in parallel or support multiple teams working on microservices? No problem. Spin up multiple Docker containers whenever you need more build agents, and let Jenkins orchestrate everything with Jenkins pipelines.
For a DevOps junkie like me, this synergy between Jenkins and Docker is a dream come true.
Setting up your CI/CD pipeline with Docker and Jenkins
Before you roll up your sleeves, let’s cover the essentials you’ll need:
- Docker Desktop (or a Docker server environment) installed and running. You can get Docker for various operating systems.
- Jenkins downloaded from Docker Hub or installed on your machine. These days, you'll want jenkins/jenkins:lts (the long-term support image) rather than the deprecated library/jenkins image.
- Proper permissions for Docker commands and the ability to manage Docker images on your system.
- A GitHub or similar code repository where you can store your Jenkins pipeline configuration (optional, but recommended).
Pro tip: If you’re planning a production setup, consider a container orchestration platform like Kubernetes. This approach simplifies scaling Jenkins, updating Jenkins, and managing additional Docker servers for heavier workloads.
Building a robust CI/CD pipeline with Docker and Jenkins
After prepping your environment, it’s time to create your first Jenkins-Docker pipeline. Below, I’ll walk you through common steps for a typical pipeline — feel free to modify them to fit your stack.
1. Install necessary Jenkins plugins
Jenkins offers countless plugins, so let’s start with a few that make configuring Jenkins with Docker easier:
- Docker Pipeline Plugin
- Docker
- CloudBees Docker Build and Publish
How to install plugins:
- Open Manage Jenkins > Manage Plugins in Jenkins.
- Click the Available tab and search for the plugins listed above.
- Install them (and restart Jenkins if needed).
Code example (plugin installation via CLI):
# Install plugins using Jenkins CLI
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-pipeline
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-build-publish
Pro tip (advanced approach): If you’re aiming for a fully infrastructure-as-code setup, consider using Jenkins configuration as code (JCasC). With JCasC, you can declare all your Jenkins settings — including plugins, credentials, and pipeline definitions — in a YAML file. This means your entire Jenkins configuration is version-controlled and reproducible, making it effortless to spin up fresh Jenkins instances or apply consistent settings across multiple environments. It’s especially handy for large teams looking to manage Jenkins at scale.
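As a hedged illustration, a minimal jenkins.yaml for JCasC might look like the following; the credential ID matches the dockerhub-credentials ID used in the example Jenkinsfile later in this post, while the URL, username, and environment variable are placeholders:

# jenkins.yaml (minimal JCasC sketch)
jenkins:
  systemMessage: "Jenkins configured as code"
  numExecutors: 2
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: GLOBAL
              id: "dockerhub-credentials"      # referenced by docker.withRegistry in the Jenkinsfile
              username: "your-dockerhub-user"  # placeholder
              password: "${DOCKERHUB_TOKEN}"   # resolved from an environment variable at startup
unclassified:
  location:
    url: "https://jenkins.example.com/"        # placeholder Jenkins URL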
Reference: Jenkins Configuration as Code documentation.
2. Set up your Jenkins pipeline
In this step, you'll define your pipeline. A Jenkins "pipeline" job uses a Jenkinsfile (stored in your code repository) to specify the steps, stages, and environment requirements.
Example Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/your-org/your-repo.git'
            }
        }
        stage('Build') {
            steps {
                script {
                    dockerImage = docker.build("your-org/your-app:${env.BUILD_NUMBER}")
                }
            }
        }
        stage('Test') {
            steps {
                sh "docker run --rm your-org/your-app:${env.BUILD_NUMBER} ./run-tests.sh"
            }
        }
        stage('Push') {
            steps {
                script {
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub-credentials') {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
Let’s look at what’s happening here:
- Checkout: Pulls your repository.
- Build: Creates a Docker image (your-org/your-app) tagged with the build number.
- Test: Runs your test suite inside a fresh container, ensuring Docker containers create consistent environments for every test run.
- Push: Pushes the image to your Docker registry (e.g., Docker Hub) if the tests pass.
Reference: Jenkins Pipeline Documentation.
3. Configure Jenkins for automated builds
Now that your pipeline is set up, you’ll want Jenkins to run it automatically:
- Webhook triggers: Configure your source control (e.g., GitHub) to send a webhook whenever code is pushed. Jenkins will kick off a build immediately.
- Poll SCM: Jenkins periodically checks your repo for new commits and starts a build if it detects changes.
Which trigger method should you choose?
- Webhook triggers are ideal if you want near real-time builds. As soon as you push to your repo, Jenkins is notified, and a new build starts almost instantly. This approach is typically more efficient, as Jenkins doesn’t have to continuously check your repository for updates. However, it requires that your source control system and network environment support webhooks.
- Poll SCM is useful if your environment can’t support incoming webhooks — for example, if you’re behind a corporate firewall or your repository isn’t configured for outbound hooks. In that case, Jenkins routinely checks for new commits on a schedule you define (e.g., every five minutes), which can add a small delay and extra overhead but may simplify setup in locked-down environments.
Personal experience: I love webhook triggers because they keep everything as close to real-time as possible. Polling works fine if webhooks aren’t feasible, but you’ll see a slight delay between code pushes and build starts. It can also generate extra network traffic if your polling interval is too frequent.
4. Build, test, and deploy with Docker containers
Here comes the fun part — automating the entire cycle from build to deploy:
- Build Docker image: After pulling the code, Jenkins calls docker.build to create a new image.
- Run tests: Automated tests (unit or acceptance) run inside a container spun up from that image, ensuring consistency.
- Push to registry: Assuming tests pass, Jenkins pushes the tagged image to your Docker registry — this could be Docker Hub or a private registry.
- Deploy: Optionally, Jenkins can then deploy the image to a remote server or a container orchestrator (Kubernetes, etc.).
This streamlined approach ensures every step — build, test, deploy — lives in one cohesive pipeline, preventing those “where’d that step go?” mysteries.
5. Optimize and maintain your pipeline
Once your pipeline is up and running, here are a few maintenance tips and enhancements to keep everything running smoothly:
- Clean up images: Routine cleanup of Docker images can reclaim space and reduce clutter.
- Security updates: Stay on top of updates for Docker, Jenkins, and any plugins. Applying patches promptly helps protect your CI/CD environment from vulnerabilities.
- Resource monitoring: Ensure Jenkins nodes have enough memory, CPU, and disk space for builds. Overworked nodes can slow down your pipeline and cause intermittent failures.
Pro tip: In large projects, consider separating your build agents from your Jenkins controller by running them in ephemeral Docker containers (also known as Jenkins agents). If an agent goes down or becomes stale, you can quickly spin up a fresh one — ensuring a clean, consistent environment for every build and reducing the load on your main Jenkins server.
Why use Declarative Pipelines for CI/CD?
Although Jenkins supports multiple pipeline syntaxes, Declarative Pipelines stand out for their clarity and resource-friendly design. Here’s why:
- Simplified, opinionated syntax: Everything is wrapped in a single pipeline { ... } block, which minimizes "scripting sprawl." It's perfect for teams who want a quick path to best practices without diving deeply into Groovy specifics.
- Easier resource allocation: By specifying an agent at either the pipeline level or within each stage, you can offload heavyweight tasks (builds, tests) onto separate worker nodes or Docker containers. This approach helps prevent your main Jenkins controller from becoming overloaded.
- Parallelization and matrix builds: If you need to run multiple test suites or support various OS/browser combinations, Declarative Pipelines make it straightforward to define parallel stages or set up a matrix build. This tactic is incredibly handy for microservices or large test suites requiring different environments in parallel.
- Built-in "escape hatch": Need advanced Groovy features? Just drop into a script block. This lets you access Scripted Pipeline capabilities for niche cases, while still enjoying Declarative's streamlined structure most of the time.
- Cleaner parameterization: Want to let users pick which tests to run or which Docker image to use? The parameters directive makes your pipeline more flexible. A single Jenkinsfile can handle multiple scenarios — like unit vs. integration testing — without duplicating stages.
Declarative Pipeline examples
Below are sample pipelines to illustrate how declarative syntax can simplify resource allocation and keep your Jenkins controller healthy.
Example 1: Basic Declarative Pipeline
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
            }
        }
    }
}
- Runs on any available Jenkins agent (worker).
- Uses two stages in a simple sequence.
Example 2: Stage-level agents for resource isolation
pipeline {
    agent none // Avoid using a global agent at the pipeline level
    stages {
        stage('Build') {
            agent { docker 'maven:3.9.3-eclipse-temurin-17' }
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            agent { docker 'openjdk:17-jdk' }
            steps {
                sh 'java -jar target/my-app-tests.jar'
            }
        }
    }
}
- Each stage runs in its own container, preventing any single node from being overwhelmed.
- agent none at the top ensures no global agent is allocated unnecessarily.
Example 3: Parallelizing test stages
pipeline {
    agent none
    stages {
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    agent { label 'linux-node' }
                    steps {
                        sh './run-unit-tests.sh'
                    }
                }
                stage('Integration Tests') {
                    agent { label 'linux-node' }
                    steps {
                        sh './run-integration-tests.sh'
                    }
                }
            }
        }
    }
}
- Splits tests into two parallel stages.
- Each stage can run on a different node or container, speeding up feedback loops.
Example 4: Parameterized pipeline
pipeline {
    agent any
    parameters {
        choice(name: 'TEST_TYPE', choices: ['unit', 'integration', 'all'], description: 'Which test suite to run?')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            when {
                expression { return params.TEST_TYPE == 'unit' || params.TEST_TYPE == 'all' }
            }
            steps {
                echo 'Running unit tests...'
            }
        }
        stage('Integration') {
            when {
                expression { return params.TEST_TYPE == 'integration' || params.TEST_TYPE == 'all' }
            }
            steps {
                echo 'Running integration tests...'
            }
        }
    }
}
- Lets you choose which tests to run (unit, integration, or both).
- Only executes relevant stages based on the chosen parameter, saving resources.
Example 5: Matrix builds
pipeline {
    agent none
    stages {
        stage('Build and Test Matrix') {
            matrix {
                agent { label "${PLATFORM}-docker" }
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                    axis {
                        name 'BROWSER'
                        values 'chrome', 'firefox'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            echo "Build on ${PLATFORM} with ${BROWSER}"
                        }
                    }
                    stage('Test') {
                        steps {
                            echo "Test on ${PLATFORM} with ${BROWSER}"
                        }
                    }
                }
            }
        }
    }
}
- Defines a matrix of PLATFORM x BROWSER, running each combination in parallel.
- Perfect for testing multiple OS/browser combinations without duplicating pipeline logic.
Additional resources:
- Jenkins Pipeline Syntax: Official reference for sections, directives, and advanced features like matrix, parallel, and post conditions.
- Jenkins Pipeline Steps Reference: Comprehensive list of steps you can call in your Jenkinsfile.
- Jenkins Configuration as Code Plugin (JCasC): Ideal for version-controlling your Jenkins configuration, including plugin installations and credentials.
Using Declarative Pipelines helps ensure your CI/CD setup is easier to maintain, scalable, and secure. By properly configuring agents — whether Docker-based or label-based — you can spread workloads across multiple worker nodes, minimize resource contention, and keep your Jenkins controller humming along happily.
Best practices for CI/CD with Docker and Jenkins
Ready to supercharge your setup? Here are a few tried-and-true habits I’ve cultivated:
- Leverage Docker’s layer caching: Optimize your Dockerfiles so stable (less frequently changing) layers appear early. This drastically reduces build times.
- Run tests in parallel: Jenkins can run multiple containers for different services or microservices, letting you test them side by side. Declarative Pipelines make it easy to define parallel stages, each on its own agent.
- Shift left on security: Integrate security checks early in the pipeline. Tools like Docker Scout let you scan images for vulnerabilities, while Jenkins plugins can enforce compliance policies. Don’t wait until production to discover issues.
- Optimize resource allocation: Properly configure CPU and memory limits for Jenkins and Docker containers to avoid resource hogging. If you’re scaling Jenkins, distribute builds across multiple worker nodes or ephemeral agents for maximum efficiency.
- Configuration management: Store Jenkins jobs, pipeline definitions, and plugin configurations in source control. Tools like Jenkins Configuration as Code simplify versioning and replicating your setup across multiple Docker servers.
With these strategies — plus a healthy dose of Declarative Pipelines — you’ll have a lean, high-octane CI/CD pipeline that’s easier to maintain and evolve.
Troubleshooting Docker and Jenkins Pipelines
Even the best systems hit a snag now and then. Here are a few hurdles I’ve seen (and conquered):
- Handling environment variability: Keep Docker and Jenkins versions synced across different nodes. If multiple Jenkins nodes are in play, standardize Docker versions to avoid random build failures.
- Troubleshooting build failures: Use docker logs -f <container-id> to see exactly what happened inside a container. Often, the logs reveal missing dependencies or misconfigured environment variables.
- Networking challenges: If your containers need to talk to each other — especially across multiple hosts — make sure you configure Docker networks or an orchestration platform properly. Read Docker's networking documentation for details, and check out the Jenkins diagnosing issues guide for more troubleshooting tips. A minimal Compose sketch of a shared-network setup follows below.
Conclusion
Pairing Docker and Jenkins offers a nimble, robust approach to CI/CD. Docker locks down consistent environments and lightning-fast rollouts, while Jenkins automates key tasks like building, testing, and pushing your changes to production. When these two are in harmony, you can expect shorter release cycles, fewer integration headaches, and more time to focus on developing awesome features.
A healthy pipeline also means your team can respond quickly to user feedback and confidently roll out updates — two crucial ingredients for any successful software project. And if you’re concerned about security, there are plenty of tools and best practices to keep your applications safe.
I hope this guide helps you build (and maintain) a high-octane CI/CD pipeline that your team will love. If you have questions or need a hand, feel free to reach out on the community forums, join the conversation on Slack, or open a ticket on GitHub issues. You’ll find plenty of fellow Docker and Jenkins enthusiasts who are happy to help.
Thanks for reading, and happy building!
Learn more
- Subscribe to the Docker Newsletter.
- Take a deeper dive in Docker’s official Jenkins integration documentation.
- Explore Docker Business for comprehensive CI/CD security at scale.
- Get the latest release of Docker Desktop.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
-
Collabnix
- What is Continuous Integration and Continuous Deployment (CI/CD)?: Explained in 5 minutes
The software market is moving at a really fast pace and is quite unforgiving; this means that for any software company to survive, getting good code out quickly is really important. That's where Continuous Integration (CI) and Continuous Deployment (CD) come in. CI/CD is now a key part of how modern development works, helping teams […]
What is Continuous Integration and Continuous Deployment (CI/CD)?: Explained in 5 minutes
-
Docker
Revolutionize Your CI/CD Pipeline: Integrating Testcontainers and Bazel
One of the challenges in modern software development is being able to release software often and with confidence. This can only be achieved when you have a good CI/CD setup in place that can test your software and release it with minimal or even no human intervention. But modern software applications also use a wide range of third-party dependencies and often need to run on multiple operating systems and architectures.
In this post, I will explain how the combination of Bazel and Testcontainers helps developers build and release software by providing a hermetic build system.

Using Bazel and Testcontainers together
Bazel is an open source build tool developed by Google to build and test multi-language, multi-platform projects. Several big IT companies have adopted monorepos for various reasons, such as:
- Code sharing and reusability
- Cross-project refactoring
- Consistent builds and dependency management
- Versioning and release management
With its multi-language support and focus on reproducible builds, Bazel shines in building such monorepos.
A key concept of Bazel is hermeticity, which means that when all inputs are declared, the build system can know when an output needs to be rebuilt. This approach brings determinism: given the same input source code and product configuration, the build will always return the same output, because the build is isolated from changes to the host system.
Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.
Using Bazel and Testcontainers together offers the following features:
- Bazel can build projects using different programming languages like C, C++, Java, Go, Python, Node.js, etc.
- Bazel can dynamically provision the isolated build/test environment with desired language versions.
- Testcontainers can provision the required dependencies as Docker containers so that your test suite is self-contained. You don’t have to manually pre-provision the necessary services, such as databases, message brokers, and so on.
- All the test dependencies can be expressed through code using Testcontainers APIs, and you avoid the risk of breaking hermeticity by sharing such resources between tests.
Let’s see how we can use Bazel and Testcontainers to build and test a monorepo with modules using different languages.
We are going to explore a monorepo with a customers module, which uses Java, and a products module, which uses Go. Both modules interact with relational databases (PostgreSQL) and use Testcontainers for testing.
Getting started with Bazel
To begin, let's get familiar with Bazel's basic concepts. The best way to install Bazel is by using Bazelisk. Follow the official installation instructions to install Bazelisk. Once it's installed, running the bazel version command should print both the Bazelisk and Bazel versions:
$ brew install bazelisk

$ bazel version
Bazelisk version: v1.12.0
Build label: 7.0.0
Before you can build a project using Bazel, you need to set up its workspace.
A workspace is a directory that holds your project’s source files and contains the following files:
- The WORKSPACE.bazel file, which identifies the directory and its contents as a Bazel workspace and lives at the root of the project's directory structure.
- A MODULE.bazel file, which declares dependencies on Bazel plugins (called "rulesets").
- One or more BUILD (or BUILD.bazel) files, which describe the sources and dependencies for different parts of the project. A directory within the workspace that contains a BUILD file is a package.
In the simplest case, a MODULE.bazel file can be an empty file, and a BUILD file can contain one or more generic targets as follows:
genrule( name = "foo", outs = ["foo.txt"], cmd_bash = "sleep 2 && echo 'Hello World' >$@", ) genrule( name = "bar", outs = ["bar.txt"], cmd_bash = "sleep 2 && echo 'Bye bye' >$@", )
Here, we have two targets: foo and bar. Now we can build those targets using Bazel as follows:
$ bazel build //:foo   <- runs only the foo target, // indicates root workspace
$ bazel build //:bar   <- runs only the bar target
$ bazel build //...    <- runs all targets
Configuring the Bazel build in a monorepo
We are going to explore using Bazel in the testcontainers-bazel-demo repository. This repository is a monorepo with a customers module using Java and a products module using Go. Its structure looks like the following:
testcontainers-bazel-demo
|____customers
| |____BUILD.bazel
| |____src
|____products
| |____go.mod
| |____go.sum
| |____repo.go
| |____repo_test.go
| |____BUILD.bazel
|____MODULE.bazel
Bazel uses different rules for building different types of projects. Bazel uses rules_java for building Java packages, rules_go for building Go packages, rules_python for building Python packages, etc.
We may also need to load additional rules providing additional features. For building Java packages, we may want to use external Maven dependencies and use JUnit 5 for running tests. In that case, we should load rules_jvm_external to be able to use Maven dependencies.
We are going to use Bzlmod, the new external dependency subsystem, to load the external dependencies. In the MODULE.bazel file, we can load the additional rules_jvm_external and contrib_rules_jvm as follows:
bazel_dep(name = "contrib_rules_jvm", version = "0.21.4") bazel_dep(name = "rules_jvm_external", version = "5.3") maven = use_extension("@rules_jvm_external//:extensions.bzl", "maven") maven.install( name = "maven", artifacts = [ "org.postgresql:postgresql:42.6.0", "ch.qos.logback:logback-classic:1.4.6", "org.testcontainers:postgresql:1.19.3", "org.junit.platform:junit-platform-launcher:1.10.1", "org.junit.platform:junit-platform-reporting:1.10.1", "org.junit.jupiter:junit-jupiter-api:5.10.1", "org.junit.jupiter:junit-jupiter-params:5.10.1", "org.junit.jupiter:junit-jupiter-engine:5.10.1", ], ) use_repo(maven, "maven")
Let's understand the above configuration in the MODULE.bazel file:
- We have loaded the rules_jvm_external rules from Bazel Central Registry and loaded extensions to use third-party Maven dependencies.
- We have configured all our Java application dependencies using Maven coordinates in the maven.install artifacts configuration.
- We are loading the contrib_rules_jvm rules that support running JUnit 5 tests as a suite.
Now, we can run the @maven//:pin program to create a JSON lockfile of the transitive dependencies, in a format that rules_jvm_external can use later:
bazel run @maven//:pin
Rename the generated file rules_jvm_external~4.5~maven~maven_install.json to maven_install.json. Now update the MODULE.bazel to reflect that we pinned the dependencies.
Add a lock_file attribute to the maven.install() call and update the use_repo call to also expose the unpinned_maven repository used to update the dependencies:
maven.install(
    ...
    lock_file = "//:maven_install.json",
)

use_repo(maven, "maven", "unpinned_maven")
Now, when you update any dependencies, you can run the following command to update the lock file:
bazel run @unpinned_maven//:pin
Let's configure our build targets in the customers/BUILD.bazel file, as follows:
load(
    "@bazel_tools//tools/jdk:default_java_toolchain.bzl",
    "default_java_toolchain",
    "DEFAULT_TOOLCHAIN_CONFIGURATION",
    "BASE_JDK9_JVM_OPTS",
    "DEFAULT_JAVACOPTS",
)

default_java_toolchain(
    name = "repository_default_toolchain",
    configuration = DEFAULT_TOOLCHAIN_CONFIGURATION,
    java_runtime = "@bazel_tools//tools/jdk:remotejdk_17",
    jvm_opts = BASE_JDK9_JVM_OPTS + ["--enable-preview"],
    javacopts = DEFAULT_JAVACOPTS + ["--enable-preview"],
    source_version = "17",
    target_version = "17",
)

load("@rules_jvm_external//:defs.bzl", "artifact")
load("@contrib_rules_jvm//java:defs.bzl", "JUNIT5_DEPS", "java_test_suite")

java_library(
    name = "customers-lib",
    srcs = glob(["src/main/java/**/*.java"]),
    deps = [
        artifact("org.postgresql:postgresql"),
        artifact("ch.qos.logback:logback-classic"),
    ],
)

java_library(
    name = "customers-test-resources",
    resources = glob(["src/test/resources/**/*"]),
)

java_test_suite(
    name = "customers-lib-tests",
    srcs = glob(["src/test/java/**/*.java"]),
    runner = "junit5",
    test_suffixes = [
        "Test.java",
        "Tests.java",
    ],
    runtime_deps = JUNIT5_DEPS,
    deps = [
        ":customers-lib",
        ":customers-test-resources",
        artifact("org.junit.jupiter:junit-jupiter-api"),
        artifact("org.junit.jupiter:junit-jupiter-params"),
        artifact("org.testcontainers:postgresql"),
    ],
)
Let's understand this BUILD configuration:
- We have loaded default_java_toolchain and then configured the Java version to 17.
- We have configured a java_library target with the name customers-lib that will build the production jar file.
- We have defined a java_test_suite target with the name customers-lib-tests to define our test suite, which will execute all the tests. We also configured the dependencies on the other target customers-lib and external dependencies.
- We also defined another target with the name customers-test-resources to add non-Java sources (e.g., logging config files) to our test suite target as a dependency.
In the customers package, we have a CustomerService class that stores and retrieves customer details in a PostgreSQL database. And we have CustomerServiceTest that tests the CustomerService methods using Testcontainers. Take a look at the GitHub repository for the complete code.
Note: You can use Gazelle, which is a Bazel build file generator, to generate the BUILD.bazel files instead of manually writing them.
Running Testcontainers tests
For running Testcontainers tests, we need a Testcontainers-supported container runtime. Let’s assume you have a local Docker installed using Docker Desktop.
Now, with our Bazel build configuration, we are ready to build and test the customers package:
# to run all build targets of customers package
$ bazel build //customers/...

# to run a specific build target of customers package
$ bazel build //customers:customers-lib

# to run all test targets of customers package
$ bazel test //customers/...

# to run a specific test target of customers package
$ bazel test //customers:customers-lib-tests
When you run the build for the first time, it will take time to download the required dependencies and then execute the targets. But, if you try to build or test again without any code or configuration changes, Bazel will not re-run the build/test again and will show the cached result. Bazel has a powerful caching mechanism that will detect code changes and run only the targets that are necessary to run.
While using Testcontainers, you define the required dependencies as part of code using Docker image names along with tags, such as postgres:16. So, unless you change the code (e.g., the Docker image name or tag), Bazel will cache the test results.
Similarly, we can use rules_go and Gazelle for configuring the Bazel build for Go packages. Take a look at the MODULE.bazel and products/BUILD.bazel files to learn more about configuring Bazel in a Go package.
As mentioned earlier, we need a Testcontainers-supported container runtime for running Testcontainers tests. Installing Docker on complex CI platforms might be challenging, and you might need to use a complex Docker-in-Docker setup. Additionally, some Docker images might not be compatible with the operating system architecture (e.g., Apple M1).
Testcontainers Cloud solves these problems by eliminating the need to have Docker on the localhost or CI runners and run the containers on cloud VMs transparently.
Here is an example of running the Testcontainers tests using Bazel on Testcontainers Cloud using GitHub Actions:
name: CI

on:
  push:
    branches:
      - '**'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Configure TestContainers cloud
        uses: atomicjar/testcontainers-cloud-setup-action@main
        with:
          wait: true
          token: ${{ secrets.TC_CLOUD_TOKEN }}

      - name: Cache Bazel
        uses: actions/cache@v3
        with:
          path: |
            ~/.cache/bazel
          key: ${{ runner.os }}-bazel-${{ hashFiles('.bazelversion', '.bazelrc', 'WORKSPACE', 'WORKSPACE.bazel', 'MODULE.bazel') }}
          restore-keys: |
            ${{ runner.os }}-bazel-

      - name: Build and Test
        run: bazel test --test_output=all //...
GitHub Actions runners already come with Bazelisk installed, so we can use Bazel out of the box. We have configured the TC_CLOUD_TOKEN environment variable through Secrets and started the Testcontainers Cloud agent. If you check the build logs, you can see that the tests are executed using Testcontainers Cloud.
Summary
We have shown how to use the Bazel build system to build and test monorepos with multiple modules using different programming languages. Combined with Testcontainers, you can make the builds self-contained and hermetic.
Although Bazel and Testcontainers help us have a self-contained build, we need to take extra measures to make it a hermetic build:
- Bazel can be configured to use a specific version of SDK, such as JDK 17, Go 1.20, etc., so that builds always use the same version instead of what is installed on the host machine.
- For Testcontainers tests, using the latest Docker tag for container dependencies may result in non-deterministic behavior. Also, some Docker image publishers override existing images using the same tag. To make the build/test deterministic, always use the Docker image digest so that builds and tests always use exactly the same image versions, which gives reproducible and hermetic builds.
- Using Testcontainers Cloud for running Testcontainers tests reduces the complexity of Docker setup and gives a deterministic container runtime environment.
Visit the Testcontainers website to learn more, and get started with Testcontainers Cloud by creating a free account.
Learn more
- Visit the Testcontainers website.
- Get started with Testcontainers Cloud by creating a free account.
- Get the latest release of Docker Desktop.
- Vote on what’s next! Check out our public roadmap.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
-
Docker
How to Use Testcontainers on Jenkins CI
Releasing software often and with confidence relies on a strong continuous integration and continuous delivery (CI/CD) process that includes the ability to automate tests. Jenkins offers an open source automation server that facilitates such releases of software projects.
In this article, we will explore how you can run tests based on the open source Testcontainers framework in a Jenkins pipeline using Docker and Testcontainers Cloud.

Jenkins, which streamlines the development process by automating the building, testing, and deployment of code changes, is widely adopted in the DevOps ecosystem. It supports a vast array of plugins, enabling integration with various tools and technologies, making it highly customizable to meet specific project requirements.
Testcontainers is an open source framework for provisioning throwaway, on-demand containers for development and testing use cases. Testcontainers makes it easy to work with databases, message brokers, web browsers, or just about anything that can run in a Docker container.
Testcontainers also provides support for many popular programming languages, including Java, Go, .NET, Node.js, Python, and more. This article will show how to test a Java Spring Boot application (testcontainers-showcase) using Testcontainers in a Jenkins pipeline. Please fork the repository into your GitHub account. To run Testcontainers-based tests, a Testcontainers-supported container runtime, like Docker, needs to be available to agents.
Note: As Jenkins CI servers are mostly run on Linux machines, the following configurations are tested on a Linux machine only.
Docker containers as Jenkins agents
Let’s see how to use dynamic Docker container-based agents. To be able to use Docker containers as agents, install the Docker Pipeline plugin.
Now, let's create a file named Jenkinsfile in the root of the project with the following content:
pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
            args '--network host -u root -v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    triggers {
        pollSCM 'H/2 * * * *' // poll every 2 mins
    }
    stages {
        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}
We are using the eclipse-temurin:17.0.9_9-jdk-jammy Docker container as an agent to run the builds for this pipeline. Note that we are mapping the host's Unix Docker socket as a volume with root user permissions to make it accessible to the agent, but this can potentially be a security risk.
Add the Jenkinsfile and push the changes to the Git repository.
Now, go to the Jenkins Dashboard and select New Item to create the pipeline. Follow these steps:
- Enter testcontainers-showcase as pipeline name.
- Select Pipeline as job type.
- Select OK.
- Under Pipeline section:
- Select Definition: Pipeline script from SCM.
- SCM: Git.
- Repository URL: https://github.com/YOUR_GITHUB_USERNAME/testcontainers-showcase.git. Replace YOUR_GITHUB_USERNAME with your actual GitHub username.
- Branches to build: Branch Specifier (blank for 'any'): */main.
- Script Path: Jenkinsfile.
- Select Save.
- Choose Build Now to trigger the pipeline for the first time.
The pipeline should run the Testcontainers-based tests successfully in a container-based agent using the remote Docker-in-Docker based configuration.
Kubernetes pods as Jenkins agents
While running Testcontainers-based tests on Kubernetes pods, you can run a Docker-in-Docker (DinD) container as a sidecar. To use Kubernetes pods as Jenkins agents, install Kubernetes plugin.
Now you can create the Jenkins pipeline using Kubernetes pods as agents as follows:
def pod = """ apiVersion: v1 kind: Pod metadata: labels: name: worker spec: serviceAccountName: jenkins containers: - name: java17 image: eclipse-temurin:17.0.9_9-jdk-jammy resources: requests: cpu: "1000m" memory: "2048Mi" imagePullPolicy: Always tty: true command: ["cat"] - name: dind image: docker:dind imagePullPolicy: Always tty: true env: - name: DOCKER_TLS_CERTDIR value: "" securityContext: privileged: true """ pipeline { agent { kubernetes { yaml pod } } environment { DOCKER_HOST = 'tcp://localhost:2375' DOCKER_TLS_VERIFY = 0 } stages { stage('Build and Test') { steps { container('java17') { script { sh "./mvnw verify" } } } } } }
Although we can use a Docker-in-Docker based configuration to make the Docker environment available to the agent, this setup also brings configuration complexities and security risks.
- By volume mounting the host’s Docker Unix socket (Docker-out-of-Docker) with the agents, the agents have direct access to the host Docker engine.
- When using the DooD approach, file sharing via bind mounts doesn't work, because the containerized app and the Docker engine work in different contexts.
- The Docker-in-Docker (DinD) approach requires the use of insecure privileged containers.
You can watch the Docker-in-Docker: Containerized CI Workflows presentation to learn more about the challenges of a Docker-in-Docker based CI setup.
This is where Testcontainers Cloud comes into the picture, making it possible to run Testcontainers-based tests more simply and reliably.
By using Testcontainers Cloud, you don’t even need a Docker daemon running on the agent. Containers will be run in on-demand cloud environments so that you don’t need to use powerful CI agents with high CPU/memory for your builds.
Let’s see how to use Testcontainers Cloud with minimal setup and run Testcontainers-based tests.
Testcontainers Cloud-based setup
Testcontainers Cloud helps you run Testcontainers-based tests at scale by spinning up the dependent services as Docker containers on the cloud and having your tests connect to those services.
If you don’t have a Testcontainers Cloud account already, you can create an account and get a Service Account Token as follows:
- Sign up for a Testcontainers Cloud account.
- Once logged in, create an organization.
- Navigate to the Testcontainers Cloud dashboard and generate a Service account (Figure 1).

To use Testcontainers Cloud, we need to start a lightweight testcontainers-cloud agent by passing TC_CLOUD_TOKEN as an environment variable.
You can store the TC_CLOUD_TOKEN value as a secret in Jenkins as follows:
- From the Dashboard, select Manage Jenkins.
- Under Security, choose Credentials.
- You can create a new domain or use System domain.
- Under Global credentials, select Add credentials.
- Select Kind as Secret text.
- Enter the TC_CLOUD_TOKEN value in Secret.
- Enter tc-cloud-token-secret-id as ID.
- Select Create.
Next, you can update the Jenkinsfile as follows:
pipeline {
    agent {
        docker {
            image 'eclipse-temurin:17.0.9_9-jdk-jammy'
        }
    }
    triggers {
        pollSCM 'H/2 * * * *'
    }
    stages {
        stage('TCC SetUp') {
            environment {
                TC_CLOUD_TOKEN = credentials('tc-cloud-token-secret-id')
            }
            steps {
                sh "curl -fsSL https://get.testcontainers.cloud/bash | sh"
            }
        }
        stage('Build and Test') {
            steps {
                sh './mvnw verify'
            }
        }
    }
}
We have set the TC_CLOUD_TOKEN environment variable using the value from the tc-cloud-token-secret-id credential we created, and started a Testcontainers Cloud agent before running our tests.
Now, if you commit and push the updated Jenkinsfile, the pipeline will run the tests using Testcontainers Cloud. You should see log statements similar to the following, indicating that the Testcontainers-based tests are using Testcontainers Cloud instead of the default Docker daemon.
14:45:25.748 [testcontainers-lifecycle-0] INFO org.testcontainers.DockerClientFactory - Connected to docker:
Server Version: 78+testcontainerscloud (via Testcontainers Desktop 1.5.5)
API Version: 1.43
Operating System: Ubuntu 20.04 LTS
Total Memory: 7407 MB
You can also leverage Testcontainers Cloud’s Turbo mode in conjunction with build tools that feature parallel run capabilities to run tests even faster.
In the case of Maven, you can use the -DforkCount=N system property to specify the degree of parallelization. For Gradle, you can specify the degree of parallelization using the maxParallelForks property.
We can enable parallel execution of our tests using four forks in the Jenkinsfile as follows:
stage('Build and Test') {
    steps {
        sh './mvnw verify -DforkCount=4'
    }
}
For more information, check out the article on parallelizing your tests with Turbo mode.
Conclusion
In this article, we have explored how to run Testcontainers-based tests on Jenkins CI using dynamic containers and Kubernetes pods as agents with Docker-out-of-Docker and Docker-in-Docker based configuration.
Then we learned how to create a Testcontainers Cloud account and configure the pipeline to run tests using Testcontainers Cloud. We also explored leveraging Testcontainers Cloud Turbo mode combined with your build tool’s parallel execution capabilities.
Although we have demonstrated this setup using a Java project as an example, Testcontainers libraries exist for other popular languages, too, and you can follow the same pattern of configuration to run your Testcontainers-based tests on Jenkins CI in Golang, .NET, Python, Node.js, etc.
Get started with Testcontainers Cloud by creating a free account at the website.
Learn more
- Sign up for a Testcontainers Cloud account.
- Watch the Docker-in-Docker: Containerized CI Workflows session from DockerCon 2023.
- Subscribe to the Docker Newsletter.
- Get the latest release of Docker Desktop.
- Vote on what’s next! Check out our public roadmap.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
-
Technology Conversations
The Best DevOps Tools, Platforms, and Services In 2024!
As DevOps continues to evolve, the number of tools, platforms, and services available to practitioners is growing exponentially. With so many options, it can be difficult to know which ones are right for your team and your organization.
In this video, we’ll take a look at some of the best DevOps tools, platforms, and services and choose which ones we should use in 2024.
Additional Info
Gist with the commands: manuscript/devops/devops-tools-2024.sh
-
Docker
- Using Authenticated Logins for Docker Hub in Google Cloud
Using Authenticated Logins for Docker Hub in Google Cloud
The rise of open source software has led to more collaborative development, but it’s not without challenges. While public container images offer convenience and access to a vast library of prebuilt components, their lack of control and potential vulnerabilities can introduce security and reliability risks into your CI/CD pipeline.
This blog post delves into best practices that your teams can implement to mitigate these risks and maintain a secure and reliable software delivery process. By following these guidelines, you can leverage the benefits of open source software while safeguarding your development workflow.

1. Store local copies of public containers
To minimize risks and improve security and reliability, consider storing local copies of public container images whenever feasible. The Open Containers Initiative offers guidelines on consuming public content, which you can access for further information.
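If you build with Cloud Build, one way to keep such a local copy is to pull the public image once and push it into your own Artifact Registry repository. A minimal sketch, assuming a hypothetical base-images repository in us-central1 and postgres:16-alpine as the public image being mirrored:
# cloudbuild.yaml (sketch): mirror a Docker Hub image into Artifact Registry
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['pull', 'postgres:16-alpine']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['tag', 'postgres:16-alpine', 'us-central1-docker.pkg.dev/$PROJECT_ID/base-images/postgres:16-alpine']
# Images listed here are pushed to the registry when the build completes.
images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/base-images/postgres:16-alpine'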
2. Use authentication when accessing Docker Hub
For secure and reliable CI/CD pipelines, authenticating with Docker Hub instead of using anonymous access is recommended. Anonymous access exposes you to security vulnerabilities and increases the risk of hitting rate limits, hindering your pipeline’s performance.
The specific authentication method depends on your CI/CD infrastructure and Google Cloud services used. Fortunately, several options are available to ensure secure and efficient interactions with Docker Hub.
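For example, in Cloud Build you can keep your Docker Hub credentials in Secret Manager and log in before any steps that pull from Docker Hub. A minimal sketch, assuming hypothetical secret names dockerhub-username and dockerhub-token:
# cloudbuild.yaml (sketch): authenticate to Docker Hub using Secret Manager
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/dockerhub-username/versions/latest
      env: 'DOCKERHUB_USERNAME'
    - versionName: projects/$PROJECT_ID/secrets/dockerhub-token/versions/latest
      env: 'DOCKERHUB_TOKEN'
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    secretEnv: ['DOCKERHUB_USERNAME', 'DOCKERHUB_TOKEN']
    args:
      - '-c'
      - 'echo "$$DOCKERHUB_TOKEN" | docker login --username "$$DOCKERHUB_USERNAME" --password-stdin'
  # Subsequent steps can now pull Docker Hub images as an authenticated user.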
3. Use Artifact Registry remote repositories
Instead of directly referencing Docker Hub repositories in your build processes, opt for Artifact Registry remote repositories for secure and efficient access. This approach leverages Docker Hub access tokens, minimizing the risk of vulnerabilities and facilitating a seamless workflow.
Detailed instructions on configuring this setup can be found in the following Artifact Registry documentation: Configure remote repository authentication to Docker Hub.
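Once the remote repository exists, builds and Compose files reference the Artifact Registry path instead of Docker Hub, and images are pulled through the cache. A minimal sketch, assuming a hypothetical remote repository named dockerhub-remote in us-central1 under a my-project project ID (official Docker Hub images sit under the library/ namespace):
# docker-compose.yml (sketch): pull redis through the Artifact Registry remote repository
services:
  cache:
    image: us-central1-docker.pkg.dev/my-project/dockerhub-remote/library/redis:7-alpine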

4. Use Google Cloud Build to interact with Docker images
Google Cloud Build offers robust authentication mechanisms to pull Docker Hub images seamlessly within your build steps. These mechanisms are essential if your container images rely on external dependencies hosted on Docker Hub. By implementing these features, you can ensure secure and reliable access to the necessary resources while streamlining your CI/CD pipeline.
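As a sketch of what that looks like in practice, once a docker login step such as the one above has run, an ordinary build step can pull Docker Hub base images referenced in your Dockerfile without hitting anonymous rate limits; the repository path and tag below are assumptions:
# cloudbuild.yaml (sketch): build an image whose Dockerfile pulls base images from Docker Hub
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/apps/my-app:$SHORT_SHA', '.']
images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/apps/my-app:$SHORT_SHA'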
Implementing the best practices outlined above offers significant benefits for your CI/CD pipelines. You’ll achieve a stronger security posture and reduced reliability risks, ensuring smooth and efficient software delivery. Additionally, establishing robust authentication controls for your development environments prevents potential roadblocks that could arise later in production. As a result, you can be confident that your processes comply with or surpass corporate security standards, further solidifying your development foundation.
Learn more
Visit the following product pages to learn more about the features that assist you in implementing these steps.
-
Technology Conversations
- Dagger: The Missing Ingredient for Your Disastrous CI/CD Pipeline
Dagger: The Missing Ingredient for Your Disastrous CI/CD Pipeline
In this video, we will take a look at some of the common mistakes that people make when building CI/CD pipelines, and we will show you how Dagger can help you to avoid these mistakes.
Dagger is a set of libraries that enables us to write CI (not CI/CD) pipelines in a variety of languages (NodeJS, Python, Elixir, etc.), that can run anywhere (locally, remotely, or in other pipeline tools), and that are based on Docker or other container runtimes.
It replaces many of the tasks we normally write in Jenkins, GitHub Actions, Argo Workflows, Tekton, CircleCI, and other remote pipeline solutions.
▬▬▬▬▬▬ Additional Info ▬▬▬▬▬▬
Gist with the commands: manuscript/pipelines/dagger.sh
Dagger: https://dagger.io
Your CI/CD Pipelines Are Wrong – From Monoliths To Events: https://youtu.be/TSQ0QpfCi1c
Is CUE The Perfect Language For Kubernetes Manifests (Helm Templates Replacement)?: https://youtu.be/m6g0aWggdUQ
Is Timoni With CUE a Helm Replacement?: https://youtu.be/bbE1BFCs548
-
Technology Conversations
- Deciding What to Run in Kubernetes
Deciding What to Run in Kubernetes
Kubernetes is a powerful container orchestration platform that can be used to run a wide variety of applications. But what are the best types of workloads to run in Kubernetes? And how do you decide?
In this video, we take a comprehensive look at what to run in Kubernetes. We will discuss the benefits of running different types of workloads in Kubernetes, as well as the challenges and considerations to keep in mind.
-
Technology Conversations
- Demystifying Kubernetes: Dive into Testing Techniques with KUTTL
Demystifying Kubernetes: Dive into Testing Techniques with KUTTL
This video delves into testing techniques with KUTTL, a testing tool for Kubernetes. Demystify Kubernetes by exploring how KUTTL simplifies testing processes, enhances reliability, and ensures seamless deployment of applications on Kubernetes clusters.
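To make that concrete, a KUTTL test is essentially a directory of YAML files: resources to apply, plus the state KUTTL should wait for. A minimal sketch, assuming a hypothetical tests/e2e directory and a my-app Deployment; the file names follow KUTTL's step-number convention:
# kuttl-test.yaml (sketch): point KUTTL at the test directory
apiVersion: kuttl.dev/v1beta1
kind: TestSuite
testDirs:
  - ./tests/e2e
timeout: 120
---
# tests/e2e/my-app/00-assert.yaml (sketch): KUTTL waits until this partial state is reached
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
status:
  readyReplicas: 2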
▬▬▬▬▬▬ Additional Info ▬▬▬▬▬▬
Gist with the commands: https://gist.github.com/vfarcic/d54940365e29111539e6744c50eae794
KUTTL: https://kuttl.dev
How to run local multi-node Kubernetes clusters using kind: https://youtu.be/C0v5gJSWuSo
Is Timoni With CUE a Helm Replacement?: https://youtu.be/bbE1BFCs548
Do NOT Use Docker Compose! Develop In Kubernetes (With Okteto): https://youtu.be/RTo9Pvo_yiY
DevSpace – Development Environments in Kubernetes: https://youtu.be/nQly_CEjJc4
Development Environments Made Easy With Tilt Rebuilds And Live Updates: https://youtu.be/fkODRlobR9I
Skaffold – How to Build and Deploy In Kubernetes: https://youtu.be/qS_4Qf8owc0
-
Docker
- How Kinsta Improved the End-to-End Development Experience by Dockerizing Every Step of the Production Cycle
How Kinsta Improved the End-to-End Development Experience by Dockerizing Every Step of the Production Cycle
Guest author Amin Choroomi is an experienced software developer at Kinsta. Passionate about Docker and Kubernetes, he specializes in application development and DevOps practices. His expertise lies in leveraging these transformative technologies to streamline deployment processes and enhance software scalability.
One of the biggest challenges of developing and maintaining cloud-native applications at the enterprise level is having a consistent experience through the entire development lifecycle. This process is even harder for remote companies with distributed teams working on different platforms, with different setups, and communicating asynchronously.
At Kinsta, we have projects of all sizes for application hosting, database hosting, and managed WordPress hosting. We need to provide a consistent, reliable, and scalable solution that allows:
- Developers and quality assurance teams, regardless of their operating systems, to create a straightforward and minimal setup for developing and testing features.
- DevOps, SysOps, and Infrastructure teams to configure and maintain staging and production environments.

Overcoming the challenge of developing cloud-native applications on a distributed team
At Kinsta, we rely heavily on Docker for this consistent experience at every step, from development to production. In this article, we’ll walk you through:
- How to leverage Docker Desktop to increase developers’ productivity.
- How we build Docker images and push them to Google Container Registry via CI pipelines with CircleCI and GitHub Actions.
- How we use CD pipelines to promote incremental changes to production using Docker images, Google Kubernetes Engine, and Cloud Deploy.
- How the QA team seamlessly uses prebuilt Docker images in different environments.
Using Docker Desktop to improve the developer experience
Running an application locally requires developers to meticulously prepare the environment, install all the dependencies, set up servers and services, and make sure they are properly configured. When you run multiple applications, this approach can be cumbersome, especially when it comes to complex projects with multiple dependencies. And when you introduce multiple contributors with multiple operating systems, chaos ensues. To prevent this, we use Docker.
With Docker, you can declare the environment configurations, install the dependencies, and build images with everything where it should be. Anyone, anywhere, with any OS can use the same images and have exactly the same experience as anyone else.
Declare your configuration with Docker Compose
To get started, you need to create a Docker Compose file, docker-compose.yml. This is a declarative configuration file written in YAML format that tells Docker your application’s desired state. Docker uses this information to set up the environment for your application.
Docker Compose files come in handy when you have more than one container running and there are dependencies between containers.
To create your docker-compose.yml file:
- Start by choosing an image as the base for your application. Search on Docker Hub to find a Docker image that already contains your app’s dependencies. Make sure to use a specific image tag to avoid errors; using the latest tag can cause unforeseen errors in your application. You can use multiple base images for multiple dependencies, for example, one for PostgreSQL and one for Redis.
- Use volumes to persist data on your host if you need to. Persisting data on the host machine helps you avoid losing data if Docker containers are deleted or if you have to recreate them.
- Use networks to isolate your setup to avoid network conflicts with the host and other containers. It also helps your containers to find and communicate with each other easily.
Bringing it all together, we have a docker-compose.yml that looks like this:
version: '3.8'

services:
  db:
    image: postgres:14.7-alpine3.17
    hostname: mk_db
    restart: on-failure
    ports:
      - ${DB_PORT:-5432}:5432
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: ${DB_USER:-user}
      POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
      POSTGRES_DB: ${DB_NAME:-main}
    networks:
      - mk_network
  redis:
    image: redis:6.2.11-alpine3.17
    hostname: mk_redis
    restart: on-failure
    ports:
      - ${REDIS_PORT:-6379}:6379
    networks:
      - mk_network

volumes:
  db_data:

networks:
  mk_network:
    name: mk_network
Containerize the application
Build a Docker image for your application
To begin, we need to build a Docker image using a Dockerfile, and then call that from docker-compose.yml.
Follow these five steps to create your Dockerfile:
1. Start by choosing an image as a base. Use the smallest base image that works for the app. Usually, alpine images are minimal with nearly zero extra packages installed. You can start with an alpine image and build on top of that:
FROM node:18.15.0-alpine3.17
2. Sometimes you need to use a specific CPU architecture to avoid conflicts. For example, suppose that you use an arm64-based processor but you need to build an amd64 image. You can do that by specifying --platform in the Dockerfile:
FROM --platform=amd64 node:18.15.0-alpine3.17
3. Define the application directory, install the dependencies, and copy the source code into it:
WORKDIR /opt/app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
4. Call the Dockerfile from docker-compose.yml:
services:
  # ...redis
  # ...db
  app:
    build:
      context: .
      dockerfile: Dockerfile
      platforms:
        - "linux/amd64"
    command: yarn dev
    restart: on-failure
    ports:
      - ${PORT:-4000}:${PORT:-4000}
    networks:
      - mk_network
    depends_on:
      - redis
      - db
5. Implement auto-reload so that when you change something in the source code, you can preview your changes immediately without having to rebuild the application manually. To do that, build the image first, then run it in a separate service:
services:
  # ...redis
  # ...db
  build-docker:
    image: myapp
    build:
      context: .
      dockerfile: Dockerfile
  app:
    image: myapp
    platforms:
      - "linux/amd64"
    command: yarn dev
    restart: on-failure
    ports:
      - ${PORT:-4000}:${PORT:-4000}
    volumes:
      - .:/opt/app
      - node_modules:/opt/app/node_modules
    networks:
      - mk_network
    depends_on:
      - redis
      - db
      - build-docker
Pro tip: Note that node_modules is also mounted explicitly to avoid platform-specific issues with packages. This means that, instead of using the node_modules on the host, the Docker container uses its own but maps it on the host in a separate volume.
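For the named volume to resolve, Compose also expects it to be declared at the top level of the file. A minimal sketch, assuming it sits alongside the existing db_data declaration in the same docker-compose.yml:
# Top-level declarations (sketch): db_data comes from the earlier snippet,
# node_modules backs the mount used by the app service above.
volumes:
  db_data:
  node_modules: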
Incrementally build the production images with continuous integration
The majority of our apps and services use CI/CD for deployment, and Docker plays an important role in the process. Every change in the main branch immediately triggers a build pipeline through either GitHub Actions or CircleCI. The general workflow is simple: It installs the dependencies, runs the tests, builds the Docker image, and pushes it to Google Container Registry (or Artifact Registry). In this article, we’ll describe the build step.
Building the Docker images
We use multi-stage builds for security and performance reasons.
Stage 1: Builder
In this stage, we copy the entire code base, with all source and configuration files, install all dependencies (including dev dependencies), and build the app. The build creates a dist/ folder and copies the built version of the code there. However, this image is far too large, and it carries too many footprints to be used in production. Also, because we use private NPM registries, our private NPM_TOKEN is used in this stage as well, so we definitely don’t want this stage to be exposed to the outside world. The only thing we need from this stage is the dist/ folder.
Stage 2: Production
Most people use this stage for runtime because it is close to what we need to run the app. However, we still need to install the production dependencies, which means we leave footprints and still need the NPM_TOKEN. So, this stage is not ready to be exposed either. Also note the yarn cache clean in the production stage’s RUN instruction; that tiny command cuts our image size by up to 60 percent.
Stage 3: Runtime
The last stage needs to be as slim as possible, with minimal footprints. So, we just copy the fully baked app from the production stage and move on, leaving all the yarn and NPM_TOKEN stuff behind and only running the app.
This is the final Dockerfile.production:
# Stage 1: build the source code
FROM node:18.15.0-alpine3.17 as builder
WORKDIR /opt/app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build

# Stage 2: copy the built version and build the production dependencies
FROM node:18.15.0-alpine3.17 as production
WORKDIR /opt/app
COPY package.json yarn.lock ./
RUN yarn install --production && yarn cache clean
COPY --from=builder /opt/app/dist/ ./dist/

# Stage 3: copy the production-ready app to runtime
FROM node:18.15.0-alpine3.17 as runtime
WORKDIR /opt/app
COPY --from=production /opt/app/ .
CMD ["yarn", "start"]
Note that, for all the stages, we start by copying the package.json and yarn.lock files, then installing the dependencies, and only then copying the rest of the code base. The reason is that Docker builds each command as a layer on top of the previous one, and each build reuses existing layers if they are unchanged, rebuilding only the layers that changed.
Let’s say you have changed something in src/services/service1.ts without touching the packages. That means the first four layers of the builder stage are untouched and can be reused. This approach makes the build process significantly faster.
Pushing the app to Google Container Registry through CircleCI pipelines
There are several ways to build a Docker image in CircleCI pipelines. In our case, we chose to use the circleci/gcp-gcr orbs. Minimum configuration is needed to build and push our app, thanks to Docker:
executors:
  docker-executor:
    docker:
      - image: cimg/base:2023.03

orbs:
  gcp-gcr: circleci/gcp-gcr@0.15.1

jobs:
  # ...
  deploy:
    description: Build & push image to Google Artifact Registry
    executor: docker-executor
    steps:
      # ...
      - gcp-gcr/build-image:
          image: my-app
          dockerfile: Dockerfile.production
          tag: ${CIRCLE_SHA1:0:7},latest
      - gcp-gcr/push-image:
          image: my-app
          tag: ${CIRCLE_SHA1:0:7},latest
Pushing the app to Google Container Registry through GitHub Actions
As an alternative to CircleCI, we can use GitHub Actions to deploy the application continuously.
We set up gcloud and build and push the Docker image to gcr.io:
jobs:
  setup-build:
    name: Setup, Build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Get Image Tag
        run: |
          echo "TAG=$(git rev-parse --short HEAD)" >> $GITHUB_ENV

      - uses: google-github-actions/setup-gcloud@master
        with:
          service_account_key: ${{ secrets.GCP_SA_KEY }}
          project_id: ${{ secrets.GCP_PROJECT_ID }}

      - run: |-
          gcloud --quiet auth configure-docker

      - name: Build
        run: |-
          docker build \
            --tag "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:$TAG" \
            --tag "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:latest" \
            .

      - name: Push
        run: |-
          docker push "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:$TAG"
          docker push "gcr.io/${{ secrets.GCP_PROJECT_ID }}/my-app:latest"
With every small change pushed to the main branch, we build and push a new Docker image to the registry.
Deploying changes to Google Kubernetes Engine using Google Delivery Pipelines
Having ready-to-use Docker images for each and every change also makes it easier to deploy to production or roll back in case something goes wrong. We use Google Kubernetes Engine to manage and serve our apps, and we use Google Cloud Deploy and Delivery Pipelines for our continuous deployment process.
When the Docker image is built after each small change (with the CI pipeline shown previously), we take it one step further and deploy the change to our dev cluster using gcloud. Let’s look at that step in the CircleCI pipeline:
- run:
    name: Create new release
    command: >
      gcloud deploy releases create release-${CIRCLE_SHA1:0:7}
      --delivery-pipeline my-del-pipeline
      --region $REGION
      --annotations commitId=$CIRCLE_SHA1
      --images my-app=gcr.io/${PROJECT_ID}/my-app:${CIRCLE_SHA1:0:7}
This step triggers a release process to roll out the changes in our dev Kubernetes cluster. After testing and getting the approvals, we promote the change to staging and then production. This action is all possible because we have a slim isolated Docker image for each change that has almost everything it needs. We only need to tell the deployment which tag to use.
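For context, the delivery pipeline referenced by the release command above is itself defined declaratively. A minimal sketch of such a Cloud Deploy configuration, reusing the my-del-pipeline name from the command; the dev, staging, and prod target names are assumptions for illustration:
# clouddeploy.yaml (sketch): delivery pipeline with three promotion stages
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-del-pipeline
description: Promote my-app from dev to staging to prod
serialPipeline:
  stages:
    - targetId: dev      # hypothetical target names
    - targetId: staging
    - targetId: prod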
How the Quality Assurance team benefits from this process
The QA team needs mostly a pre-production cloud version of the apps to be tested. However, sometimes they need to run a prebuilt app locally (with all the dependencies) to test a certain feature. In these cases, they don’t want or need to go through all the pain of cloning the entire project, installing npm packages, building the app, facing developer errors, and going over the entire development process to get the app up and running.
Now that everything is already available as a Docker image on Google Container Registry, all the QA team needs is a service in the Docker Compose file:
services:
  # ...redis
  # ...db
  app:
    image: gcr.io/${PROJECT_ID}/my-app:latest
    restart: on-failure
    ports:
      - ${PORT:-4000}:${PORT:-4000}
    environment:
      - NODE_ENV=production
      - REDIS_URL=redis://redis:6379
      - DATABASE_URL=postgresql://${DB_USER:-user}:${DB_PASSWORD:-password}@db:5432/main
    networks:
      - mk_network
    depends_on:
      - redis
      - db
With this service, the team can spin up the application on their local machines using Docker containers by running:
docker compose up
This is a huge step toward simplifying testing processes. Even if QA decides to test a specific tag of the app, they can easily change the image tag of the app service and re-run the Docker Compose command. And if they decide to compare different versions of the app simultaneously, they can achieve that with a few tweaks. The biggest benefit is keeping our QA team away from developer challenges.
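One way to make switching tags even easier is to parameterize the image reference with an environment variable. A minimal sketch, assuming a hypothetical APP_TAG variable that defaults to latest:
# Sketch: override the tag at run time, e.g. APP_TAG=1f2e3d4 docker compose up
services:
  app:
    image: gcr.io/${PROJECT_ID}/my-app:${APP_TAG:-latest}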
Advantages of using Docker
- Almost zero footprints for dependencies: If you ever decide to upgrade the version of Redis or PostgreSQL, you can just change one line and re-run the app. There’s no need to change anything on your system. Additionally, if you have two apps that both need Redis (maybe even with different versions) you can have both running in their own isolated environment, without any conflicts with each other.
- Multiple instances of the app: There are many cases where we need to run the same app with a different command, such as initializing the DB, running tests, watching DB changes, or listening to messages. In each of these cases, because we already have the built image ready, we just add another service to the Docker Compose file with a different command, and we’re done (see the sketch after this list).
- Easier testing environment: More often than not, you just need to run the app. You don’t need the code, the packages, or any local database connections. You only want to make sure the app works properly, or you need a running instance as a backend service while you’re working on your own project. That could also be the case for QA, pull request reviewers, or even UX folks who want to make sure their design has been implemented properly. Our Docker setup makes it easy for all of them to get things going without having to deal with too many technical issues.
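To illustrate the “multiple instances” point above, here is a minimal sketch that reuses the same prebuilt image with different commands; the yarn migrate and yarn test scripts are hypothetical examples, not commands from the original setup:
# Sketch: the same image run with different commands as separate services
services:
  app:
    image: gcr.io/${PROJECT_ID}/my-app:latest
    command: yarn start
  db-migrate:
    image: gcr.io/${PROJECT_ID}/my-app:latest
    command: yarn migrate   # hypothetical script name
  e2e-tests:
    image: gcr.io/${PROJECT_ID}/my-app:latest
    command: yarn test      # hypothetical script name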
Learn more
- Get the latest release of Docker Desktop.
- Have questions? The Docker community is here to help.
- New to Docker? Get started.
- Visit the Kinsta site to learn about the cloud platform.