
Learn Docker in a Month: your week 4 guide

11 October 2020 at 19:15

The YouTube series of my book Learn Docker in a Month of Lunches is all done! The final five episodes dig into some more advanced topics which are essential in your container journey, with the theme: getting your containers ready for production.

The whole series is on the Learn Docker in a Month of Lunches playlist, and you can find out about the book at the DIAMOL homepage.

Episode 16: Optimizing your Docker images for size, speed and security

It's easy to get started with Docker, packaging your apps into images using basic Dockerfiles. But you really need a good understanding of the best practices to save yourself from trouble later on.

Docker images are composed of multiple layers, and layers can be cached and shared between images. That's what makes container images so lightweight - similar apps can share all the common layers. Knowing how the cache works and how to make the best use of it speeds up your build times and reduces image size.
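
As a rough sketch of how layer ordering affects the cache - this is a hypothetical Node.js app, not one of the book's exercises, though the base image is one the book uses elsewhere - copying the dependency list before the source code means the slow install step can usually come from the cache:

# hypothetical example: the dependency list changes rarely, so the install layer is usually cached
FROM diamol/node
WORKDIR /app

COPY package.json .
RUN npm install

# the source code changes often, so it goes in a later layer
COPY src/ ./src/
CMD ["node", "src/server.js"]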

Smaller images mean faster network transfers and less disk usage, but they have a bigger impact too. The space you save typically comes from removing software your apps don't actually need to run, and that reduces the attack surface for your application in production.


This episode covers all of that, with recommendations for using multi-stage Dockerfiles to optimize both your builds and your runtime images.

Episode 17: Application configuration management in containers

Your container images should be generic - you should run the same image in every environment. The image is the packaging format and one of the main advantages of Docker is that you can be certain the app you deploy to production will work in the same way as the test environment, because it has the exact same set of binaries in the image.

Images are built in the CI process and then deployed by running containers from the image in the test environments and then onto production. Every environment uses the same image, so to allow for different setups in each environment your application needs to be able to read configuration from the container environment.

Docker creates that environment and you can set configuration using environment variables or files. Your application needs to look for settings in known locations, and then you can provide those settings in your Dockerfile and container run commands. The typical approach is to use a hierarchy of config sources, which can be set by the container platform and read by the app.
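
A hedged illustration of that idea - the image name, variable name and file paths here are hypothetical, but -e and -v are the standard options for supplying environment variables and config files to a container:

# defaults baked into the image, with environment-specific overrides supplied at run time
docker container run -d -p 8080:80 \
  -e Logging__Level=Debug \
  -v "$(pwd)/config/dev.json:/app/config/override.json" \
  my-org/my-app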


Episode 17 walks through different variations of that config hierarchy in Docker, using examples in Node.js with node-config, Go with Viper and the standard config systems in .NET Core and Java Spring Boot.

Episode 18: Writing and managing application logs with Docker

Docker adds a consistent management layer to all your apps - you don't need to know what tech stack they use or how they're configured to know that you start them with docker run and you can monitor them with docker top and docker logs. For that to work, your app needs to fit with the conventions Docker expects.

Container logs are collected from the standard output and standard error streams of the startup process (the one in the CMD or ENTRYPOINT instruction). Modern app platforms run as foreground processes, which fits neatly with Docker's expectations. Older apps might write to a different log sink, which means you need to relay logs from a file (or other source) to standard out.

You can do that in your Dockerfile without any changes to your application, which means old and new apps behave in the same way when they're running in containers.
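
One widely used pattern - it's how the official nginx image handles its log files - is to link the file paths to the container's output streams in the Dockerfile; treat the exact paths here as illustrative:

# relay file-based logs to Docker by linking them to stdout/stderr
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log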


This episode shows you how to get logs out of your apps and into the container logs, and then how to collect those logs from Docker and forward them to a central system for storage and searching - using the EFK stack (Elasticsearch, Fluentd and Kibana).

Episode 19: Controlling HTTP traffic to containers with a reverse proxy

The series ends with a couple of more in-depth topics which will help you understand how your application architecture might look as you migrate more apps to containers. The first is managing network traffic using a reverse proxy.

A reverse proxy runs in a container and publishes ports to Docker. It's the only publicly accessible component; all your other containers are internal and can only be reached by other containers on the same Docker network. The reverse proxy receives all incoming traffic and fetches the content from the application container.
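
A minimal Compose sketch of that layout, with hypothetical service and image names - only the proxy publishes a port, so the app container is reachable solely over the internal Docker network:

version: "3.7"

services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"

  app:
    image: my-org/my-app
    # no ports section - this container is internal-only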


Reverse proxies can do a lot of work for you - SSL termination, response caching, sticky sessions - and we see them all in this episode. The demos use two of the most popular technologies in this space, Nginx and Traefik, and help you evaluate them.

Episode 20: Asynchronous communication with a message queue

This is one of my favourite topics. Message queues let components of your apps communicate asynchronously - decoupling the consumer and the service. It's a great way to add reliability and scale to your architecture, but it used to be complex and expensive before Docker.

Now you can run an enterprise-grade message queue like NATS in a container with minimal effort and start moving your apps to a modern event-driven approach. With a message queue in place you can have multiple features triggering in response to events being created.
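
For example, the official NATS image starts a working queue with a single command - the container name here is just illustrative:

docker container run -d --name message-queue -p 4222:4222 nats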


This is an enabler for all sorts of patterns, and episode 20 walks you through a few of them: decoupling a web app from the database to increase scale and adding new features without changing the existing application.

This episode also covers Chapter 22 of the book, with some tips to help you gain adoption for Docker in your organization.

And next... Elton's Container Show (ECS)

That's all for the book serialization. I'll do the same thing when my new book Learn Kubernetes in a Month of Lunches gets released - it's in the finishing stages now and you can read all the chapters online.

In the meantime I have a new YouTube show all about containers called... Elton's Container Show. It runs once a week and each month I'll focus on a particular topic. The first topic is Windows containers and then I'll move on to orchestration.

You'll find all the info here at https://eltons.show and the first episode is ECS-W1: We Need to Talk About Windows Containers.

Hope to see you there :)

LEARN DOCKER IN ONE MONTH! Your week 3 guide.

20 September 2020 at 19:32

My YouTube series to help you learn Docker continued this week with five more episodes. The theme for the week is running at scale with a container orchestrator.

You can find all the episodes on the Learn Docker in a Month of Lunches playlist, and more details about the book at https://diamol.net

Episode 11: Understanding Orchestration - Docker Swarm and Kubernetes

Orchestration is how you run containers at scale in a production environment. You join together a bunch of servers which have Docker running - that's called the cluster. Then you install orchestration software to manage containers for you. When you deploy an application you send a description of the desired state of your app to the cluster. The orchestrator creates containers to run your app and makes sure they keep running even if there are problems with individual servers.

The most common container orchestrators are Docker Swarm and Kubernetes. They have very different ways of modelling your applications and different feature sets, but they work in broadly the same way. They manage the containers running on the servers and expose an API endpoint you use for deployment and administration.


This episode walks through the main features of orchestration, like high availability and scale, and the abstractions they provide for compute, networking and storage. The exercises all use Docker Swarm which is very simple to set up - it's a one-line command once you have Docker installed:

docker swarm init

And that's it :)
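
To give a flavour of what that buys you, here's a hedged example of running one of the book's sample images as a replicated service on the swarm (the exact command isn't taken from the episode):

# the orchestrator schedules three replicas across the cluster and keeps them running
docker service create --name todo-web --replicas 3 --publish 8080:80 diamol/ch06-todo-list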

Swarm uses the Docker Compose specification to model applications so it's very simple to get started. In the episode I compare Swarm and Kubernetes and suggest starting with Swarm - even if you plan to use Kubernetes some day. The learning curve for Swarm is much smoother than Kubernetes, and once you know Swarm you'll have a good understanding of orchestration which will help you learn Kubernetes (although you'll need a good book to help you, something like Learn Kubernetes in a Month of Lunches).

Episode 12: Deploying Distributed Apps as Stacks in Docker Swarm

Container orchestrators use a distributed database to store application definitions, and your deployments can include custom data for your application - which you can use for configuration settings. That lets you use the same container images which have passed all your automated and manual tests in your production environment, but with production settings applied.

Promoting the same container image up through environments is how you guarantee your production deployment is using the exact same binaries you've approved in testing. The image contains the whole application stack so there's no danger of the deployment failing on missing dependencies or version mismatches.

To support different behaviour in production you create your config objects in the cluster and reference them in your application manifest (the YAML file, which is in Docker Compose format if you use Swarm, or Kubernetes' own format). When the orchestrator creates a container which references a config object, it surfaces the content of the object as files in the container filesystem - as in this sample Docker Swarm manifest:

todo-web:
  image: diamol/ch06-todo-list
  ports:
    - 8080:80
  configs:
    - source: todo-list-config
      target: /app/config/config.json

Config objects are used for app configuration and secrets are used in the same way, but for sensitive data. The episode shows you how to use them and includes other considerations for deploying apps in Swarm mode - including setting compute limits on the containers and persisting data in volumes.
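
As a hedged sketch of how those objects get created and the stack deployed - the file names are hypothetical, but the config name matches the manifest above:

# create the config and secret objects in the cluster
docker config create todo-list-config ./config/config.json
docker secret create todo-db-connection ./config/secrets.json

# deploy the application as a stack from the Compose-format manifest
docker stack deploy -c todo-list.yml todo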

Episode 13: Automating Releases with Upgrades and Rollbacks

One of the goals of orchestration is to have the cluster manage the application for you, and that includes providing zero-downtime updates. Swarm and Kubernetes both provide automated rollouts when you upgrade your applications. The containers running your app are gradually updated, with the ones running the old version replaced with new containers running the new version.

During the rollout the new containers are monitored to make sure they're healthy. If there are any issues the rollout can be stopped or automatically rolled back to the previous version. The episode walks through several updates and rollbacks, demonstrating all the different configurations you can apply to control the process. This is the overall process you'll see when you watch the episode.


You can configure all the aspects of the rollout - how many containers are started, whether the new ones are started first or the old ones are removed first, how long to monitor the health of new containers and what to do if they're not healthy. You need a pretty good understanding of all the options so you can plan your rollouts and know how they'll behave if there's a problem.
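
In a Swarm stack those options live in the deploy section of the service definition - a sketch with illustrative values, not the book's:

deploy:
  replicas: 6
  update_config:
    parallelism: 2           # replace two containers at a time
    monitor: 60s             # watch new containers for this long before continuing
    failure_action: rollback # revert to the previous spec if the update fails
    order: start-first       # start new containers before stopping old ones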

Episode 14: Configuring Docker for Secure Remote Access and CI/CD

The Docker command line doesn't do much by itself - it just sends instructions to the Docker API which is running on your machine. The API is part of the Docker Engine, which is what runs and manages containers. You can expose the API to make it remotely available, which means you can manage your Docker servers in the cloud from the Docker command line running on your laptop.

There are good and bad ways to expose the API, and this episode covers the different approaches - including secure access using SSH and TLS. You'll also learn how to use remote machines as Docker Contexts so you can easily apply your credentials and switch between machines with commands like this:

# create a context using TLS certs:
docker context create x --docker "host=tcp://x,ca=/certs/ca.pem,cert=/certs/client-cert.pem,key=/certs/client-key.pem"

# for SSH it would be:
docker context create y --docker "host=ssh://user@y"

# connect:
docker context use x

# now this will list containers running on server x:
docker ps

You'll also see why using environment variables is preferable to docker context use...
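
As a quick illustration (the context name matches the example above), setting the DOCKER_CONTEXT environment variable scopes the switch to the current shell session instead of changing a global default:

export DOCKER_CONTEXT=x
docker ps   # runs against server x, but only in this session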

Remote access is how you enable the Continuous Deployment part of the CI/CD pipeline. This episode uses Play with Docker, an online Docker playground, as a remote target for a deployment running in Jenkins on a local container. It's a pretty slick exercise (if I say so myself), which you can try out in chapter 15 of Learn Docker in a Month of Lunches.

This feature-packed episode ends with an overview of the access model in Docker, and explains why you need to carefully control who has access to your machines.

Episode 15: Building Docker Images that Run Anywhere: Linux, Windows, Intel & Arm

Every exercise in the book uses Docker images which are built to run on any of the main computing architectures - Windows and Linux operating systems, Intel and Arm CPUs - so you can follow along whatever computer you're using. In this episode you'll find out how that works, with multi-architecture images. A multi-arch image is effectively one image tag which has multiple variants.


There are two ways to create multi-arch images: build and push all the variants yourself and then push a manifest to the registry which describes the variants, or have the new Docker buildx plugin do it all for you. The episode covers both options with lots of examples and shows you the benefits and limitations of each.
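
A hedged sketch of the buildx route - the image name and platform list are illustrative, and it assumes a buildx builder with multi-platform support is already configured:

# build the variants for each platform and push them under a single manifest
docker buildx build -t my-org/my-app:v1 \
  --platform linux/amd64,linux/arm64 \
  --push .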

You'll also learn why multi-arch images are important (example: you could cut your cloud bills almost in half by running containers on Arm instead of Intel on AWS), and the Dockerfile best practices for supporting multi-arch builds.

Coming next

Week 3 covered orchestration and some best practices for production deployments. Week 4 is the final week and the theme is production readiness. You'll learn everything you need to take a Docker proof-of-concept project into production, including optimizing your images, managing app configuration and logging, and controlling traffic to your application with a reverse proxy.

The live stream is running through September 2020 and kicks off on my YouTube channel weekdays at 19:00 UTC. The episodes are available to watch on demand as soon as the session ends.

Hope you can join me on the final leg of your journey to learn Docker in one month :)

Learn Docker in *ONE MONTH*. Your guide to week 2.

13 September 2020 at 20:07

I've added five more episodes to my YouTube series Learn Docker in a Month of Lunches. You can find the overview at https://diamol.net and the theme for week 2 is:

Running distributed applications in containers

This follows a tried-and-trusted path for learning Docker which I've used for workshops and training sessions over many years. Week 1 is all about getting used to Docker and the key container concepts. Week 2 is about understanding what Docker enables you to do, focusing on multi-container apps with Docker Compose.

Episode 6: Running multi-container apps with Docker Compose

Docker Compose is a tool for modelling and managing containers. You model your apps in a YAML file and use the Compose command line to start and stop the whole app or individual components.

We start with a nice simple docker-compose.yml file which models a single container plugged into a Docker network called nat:

version: '3.7'

services:
  
  todo-web:
    image: diamol/ch06-todo-list
    ports:
      - "8020:80"
    networks:
      - app-net

networks:
  app-net:
    external:
      name: nat

The services section defines a single component called todo-web which will run in a container. The configuration for the service includes the container image to use, the ports to publish and the Docker network to connect to.

Docker Compose files effectively capture all the options you would put in a docker run command, but using a declarative approach. When you deploy apps with Compose it uses the spec in the YAML file as the desired state. It looks at the current state of what's running in Docker and creates/updates/removes objects (like containers or networks) to get to the desired state.

Here's how to run that app with Docker Compose:

# in this example the network needs to exist first:
docker network create nat

# compose will create the container:
docker-compose up

In the episode you'll see how to build on that, defining distributed apps which run across multiple containers in a single Compose file and exploring the commands to manage them.
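
A few of the everyday management commands you'll see (all standard docker-compose commands):

docker-compose ps      # list the containers Compose manages for this app
docker-compose logs    # show the combined logs from all the app's containers
docker-compose down    # stop and remove the app's containers and networks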

You'll also learn how you can inject configuration settings into containerized apps using Compose, and you'll understand the limitations of what Compose can do.

Episode 7: Supporting reliability with health checks and dependency checks

Running your apps in containers unlocks new possibilities, like scaling up and down on demand and spreading your workload across a highly-available cluster of machines. But a distributed architecture introduces new failure modes too, like slow connections and timeouts from unresponsive services.

Docker lets you build reliability into your container images so the platform you use can understand if your applications are healthy and take corrective action if they're not. That gets you on the path to self-healing applications, which manage themselves through transient failures.

The first part of this is the Docker HEALTHCHECK instruction which lets you configure Docker to test if your application inside the container is healthy - here's the simplest example in a Dockerfile:

# builder stage omitted in this snippet
FROM diamol/dotnet-aspnet

ENTRYPOINT ["dotnet", "/app/Numbers.Api.dll"]
HEALTHCHECK CMD curl --fail http://localhost/health

WORKDIR /app
COPY --from=builder /out/ .

This is a basic example which uses curl - I've already written about why it's a bad idea to use curl for container healthchecks and you'll see in this episode the better practice of using the application runtime for your healthcheck.

When a container image has a healthcheck specified, Docker runs the command inside the container to see if the application is healthy. If it's unhealthy for a number of successive checks (the default is three) the Docker API raises an event. Then the container platform can take corrective action like restarting the container or removing it and replacing it.
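
Those thresholds are configurable on the HEALTHCHECK instruction itself - a hedged sketch, with the check command as a placeholder (ideally it uses the app's own runtime rather than curl):

# illustrative options; /app/HealthCheck.dll is a hypothetical check utility
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD dotnet /app/HealthCheck.dll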

This episode also covers dependency checks, which you can use in your CMD or ENTRYPOINT instruction to verify your app has all the dependencies it needs before it starts. This is useful in scenarios where components can't do anything meaningful if they're missing dependencies - without the check, the container would start and it would look as if everything was OK.
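
A hedged sketch of a dependency check in a shell-form CMD - the URL and app entry point are hypothetical, and it assumes curl is available in the image (the episode shows better, runtime-based alternatives):

# fail fast if the API dependency isn't reachable, otherwise start the app
CMD curl --fail http://numbers-api/health && dotnet /app/Numbers.Web.dll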

Episode 8: Adding observability with containerized monitoring

Healthchecks and dependency checks take you a good way towards reliability, but you also need to see what's going on inside your containers when things go wrong in unexpected ways.

One of the big issues for ops teams moving from VMs to containers is going from a fairly static environment with a known set of machines to monitor, to a dynamic environment where containers appear and disappear all the time.

This episode introduces the typical monitoring stack for containerized apps using Prometheus. In this architecture all your containers expose metrics on an HTTP endpoint, as do your Docker servers. Prometheus runs in a container too and it collects those metrics and stores them in a time-series database.


You need to add metrics to your app using a Prometheus client library, which will provide a set of runtime metrics (like memory and CPU usage) for free. The client library also gives you a simple way to capture your own metrics.
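
Prometheus finds those endpoints through its scrape configuration - a minimal, hypothetical prometheus.yml fragment using a made-up job and container name:

scrape_configs:
  - job_name: "todo-web"
    metrics_path: /metrics
    static_configs:
      - targets: ["todo-web:80"]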

The demo apps for this module have components in .NET, Go, Java and Node.js so you can see how to use client libraries in different languages and wire them up to Prometheus.

You'll learn how to run a monitoring solution in containers alongside your application, all modelled in Docker Compose. One of the great benefits of containerized monitoring is that you can run the same tools in every environment - so developers can use the same Grafana dashboard that ops use in production.

Episode 9: Running multiple environments with Docker Compose

Docker is great for density - running lots of containers on very little hardware. You particularly see that for non-production environments where you don't need high availability and you don't have a lot of traffic to deal with.

This episode shows you how to run multiple environments - different configurations of the same application - on a single server. It covers more advanced Compose topics like override files and extension fields.
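
Override files are combined left to right on the command line, so the later files only need the settings that differ - an illustrative example with a hypothetical override file name:

docker-compose -f docker-compose.yml -f docker-compose-test.yml up -d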

You'll also learn how to apply configuration to your apps with different approaches in the Compose file, like this docker-compose.yml example:

version: "3.7"

services:
  todo-web:
    ports:
      - 8089:80
    environment:
      - Database:Provider=Sqlite
    env_file:
      - ./config/logging.debug.env

secrets:
  todo-db-connection:
    file: ./config/empty.json

The episode has lots of examples of how you can use Compose to model different configurations of the same application, while keeping your Compose files clean and easy to manage.

Episode 10: Building and testing applications with Docker and Docker Compose

Containers make it easy to build a Continuous Integration pipeline where every component runs in Docker and you can dispense with build servers that need careful nurturing.

This episode shows you how to build a simple pipeline using techniques you learned in earlier episodes - like multi-stage Dockerfiles - to keep your CI process portable and maintainable.

You'll see how to run a complete build infrastructure in containers, using Gogs as the Git server, Jenkins to trigger the builds, and a local Docker registry in a container. The exercises focus on the patterns rather than the individual tools, so all the setup is done for you.

The easy way to keep your pipeline definitions clean is to use Docker Compose to model the build workflow as well as the runtime spec. This docker-compose-build.yml file is an override file which isolates the build settings, and uses variables and extension fields to reduce duplication:

version: "3.7"

x-args: &args
  args:
    BUILD_NUMBER: ${BUILD_NUMBER:-0}
    BUILD_TAG: ${BUILD_TAG:-local}

services:
  numbers-api:
    build:
      context: numbers
      dockerfile: numbers-api/Dockerfile.v4
      <<: *args

  numbers-web:
    build:
      context: numbers
      dockerfile: numbers-web/Dockerfile.v4
      <<: *args
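
With that override file in place, the build step of the pipeline is a single (illustrative) command:

docker-compose -f docker-compose.yml -f docker-compose-build.yml build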

Of course you're more likely to use managed services like GitHub and Azure DevOps, but the principle is the same - keep all the logic in your Dockerfiles and your Docker Compose files, and all you need from your service provider is Docker. That makes it super easy to migrate between providers without rewriting all your build scripts.

This episode also covers the secure software supply chain, extending your pipeline to include security scanning and signing so you can be sure the containers you run in production are safe.

Coming next

Week 2 covered multi-container apps, and in week 3 we move on to orchestration. We'll use Docker Swarm, which is the production-grade orchestrator built into Docker. It's simpler than Kubernetes (which needs its own series - Learn Kubernetes in a Month of Lunches will be televised in 2021), and it uses the now-familiar Docker Compose specification to model apps.

You can always find the upcoming episode at diamol.net/stream and there are often book giveaways at diamol.net/giveaway.

The live stream is running through September 2020 and kicks off on Elton Stoneman's YouTube channel weekdays at 19:00 UTC. The episodes are available to watch on demand as soon as the session ends.

Hope you can join me and continue to make progress in your Docker journey :)

Learn Docker in one month! Your guide to week 1

6 September 2020 at 19:38

I'm streaming every chapter of my new book Learn Docker in a Month of Lunches on YouTube, and the first week's episodes are out now.

Here's the Learn Docker in a Month of Lunches playlist.

The book is aimed at new and improving Docker users. It starts from the basics - with best practices built in - and moves on to more advanced topics like production readiness, orchestration, observability and HTTP routing.

It's a hands-on introduction to Docker, and the learning path is one I've honed from teaching Docker and Kubernetes at conference workshops and at clients for many years. Every exercise is built to work on Mac, Windows and Arm machines so you can follow along with whatever tech you like.

Episode 1: Understanding Docker and running Hello, World

You start by learning what a container is - a virtualized environment around the processes which make up an application. The container shares the OS kernel of the machine it's running on, which makes Docker super efficient and lightweight.

The very first exercise gets you to run a simple app in a container to see what the virtual environment looks like (all you need to follow along is Docker):

docker container run diamol/ch02-hello-diamol

That container just prints some information and exits. In the rest of the episode (which covers chapters 1 & 2 of the book), you'll learn about different ways to run containers, and how containers are different from other types of virtualization.

Episode 2: Building your own Docker images

You package your application into an image so you can run it in containers. All the exercises so far use images which I've already built, and this chapter introduces the Dockerfile syntax and shows you how to build your own images.

An important best practice is to make your container images portable - so in production you use the exact same Docker image that you've tested and approved in other environments. That means no gaps in the release: the deployment is the same set of binaries that you've successfully deployed in test.

Portable images need to be able to read configuration from the environment, so you can tweak the behaviour of your apps even though the image is the same. You'll run an exercise like this which shows you how to inject configuration settings using environment variables:

docker container run --env TARGET=google.com diamol/ch03-web-ping

Watch the episode to learn how that works, and to understand how images are stored as layers. That affects build speeds, image size and the security profile of your app, so it's fundamental to understanding image optimization.

Episode 3: Packaging apps from source code into Docker images

The Dockerfile syntax is pretty simple and you can use it to copy binaries from your machine into the container image, or download and extract archives from a web address.

But things get more interesting with multi-stage Dockerfiles, which you can use to compile applications from source code using Docker. The exercises in this chapter use Go, Java and Node.js - and you don't need any of those runtimes installed on your machine because all the tools run inside containers.

Here's a sample Dockerfile for a Java app built with Maven:

FROM diamol/maven AS builder

WORKDIR /usr/src/iotd
COPY pom.xml .
RUN mvn -B dependency:go-offline

COPY . .
RUN mvn package

# app
FROM diamol/openjdk

WORKDIR /app
COPY --from=builder /usr/src/iotd/target/iotd-service-0.1.0.jar .

EXPOSE 80
ENTRYPOINT ["java", "-jar", "/app/iotd-service-0.1.0.jar"]

All the tools to download libraries, compile and package the app are in the SDK image - using Maven in this example. The final image is based on a much smaller image with just the Java runtime installed and none of the additional tools.

This approach is supported in all the major languages and it effectively means you can use Docker as your build server and everyone in the team has the exact same toolset because everyone uses the same images.

Episode 4: Sharing images with Docker Hub and other registries

Building your own images means you can run your apps in containers, but if you want to make them available to other people you need to share them on a registry like Docker Hub.

This chapter teaches you about image references and how you can use tags to version your applications. If you've only ever used the latest tag then you should watch this one to understand why that's a moving target and explicit version tags are a much better approach.
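
An illustrative tagging scheme - the image name and version number are hypothetical - where every release gets an explicit version tag alongside latest:

docker image tag my-org/my-app:latest my-org/my-app:2.1.105
docker image push my-org/my-app:2.1.105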

You'll push images to Docker Hub in the exercises (you can sign up for a free account with generous usage levels) and you'll also learn how to run your own registry server in a container with a simple command like this:

docker container run -d -p 5000:5000 --restart always diamol/registry

It's usually better to use a managed registry like Docker Hub or Azure Container Registry but it's useful to know how to run a registry in your own organization. It can be a simple backup plan if your provider has an outage or you lose Internet connectivity.
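
Pushing to your own registry is just a matter of including its domain in the image reference - a hedged example with a hypothetical image name:

docker image tag my-org/my-app localhost:5000/my-app
docker image push localhost:5000/my-app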

This chapter also explains the concept of golden images which your organization can use to ensure all your apps are running from an approved set of base images, curated by an infrastructure or security team.

Episode 5: Using Docker volumes for persistent storage

Containers are great for stateless apps, and you can run apps which write data in containers too - as long as you understand where the data goes. This episode walks you through the container filesystem so you can see how the disk which the container sees is actually composed from multiple sources.

Persisting state is all about separating the lifecycle of the data from the lifecycle of the container. When you update your apps in production you'll delete the existing container and replace it with a new one from the new application image. You can attach the storage from the old container to the new one so all the data is there.

You'll learn how to do that with Docker volumes and with bind mounts, in exercises which use a simple to-do list app that stores data in a Sqlite database file:

docker container run --name todo1 -d -p 8010:80 diamol/ch06-todo-list

# browse to http://localhost:8010
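
A hedged sketch of the named-volume approach - it assumes the app keeps its database file under /data (treat that path as an assumption) - where the same volume is reattached to a replacement container so the data survives:

docker volume create todo-data

# run the app with the volume, then replace the container - the data is still there
docker container run -d -p 8011:80 -v todo-data:/data --name todo-v1 diamol/ch06-todo-list
docker container rm -f todo-v1
docker container run -d -p 8011:80 -v todo-data:/data --name todo-v2 diamol/ch06-todo-list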

There are some limitations to mounting external data sources into the container filesystem which you'll learn all about in the chapter.

Coming next

Week 1 covers the basics of Docker: containers, images, registries and storage. Week 2 looks at running multi-container apps, introducing Docker Compose to manage multiple containers and approaches to deal with distributed applications - including monitoring and healthchecks.

You can always find the upcoming episode at diamol.net/stream and there are often book giveaways at diamol.net/giveaway.

The live stream is running through September 2020 and kicks off on Elton Stoneman's YouTube channel weekdays at 19:00 UTC. The episodes are available to watch on demand as soon as the session ends.

Hope you can join me and make progress in your Docker journey :)

Learn Docker in a Month: your week 4 guide

11 octobre 2020 à 19:15
Learn Docker in a Month: your week 4 guide

The YouTube series of my book Learn Docker in a Month of Lunches is all done! The final five episodes dig into some more advanced topics which are essential in your container journey, with the theme: getting your containers ready for production.

The whole series is on the Learn Docker in a Month of Lunches playlist and you can find out about the book at the DIAMOL homepage

Episode 16: Optimizing your Docker images for size, speed and security

It's easy to get started with Docker, packaging your apps into images using basic Dockerfiles. But you really need a good understanding of the best practices to safe yourself from trouble later on.

Docker images are composed of multiple layers, and layers can be cached and shared between images. That's what makes container images so lightweight - similar apps can share all the common layers. Knowing how the cache works and how to make the best use of it speeds up your build times and reduces image size.

Smaller images mean faster network transfers and less disk usage, but they have a bigger impact too. The space you save is typically from removing software your apps don't actually need to run, and that reduces the attack surface for your application in production - here's how optimization counts:

Learn Docker in a Month: your week 4 guide

This episode covers all those with recommendations for using multi-stage Dockerfiles to optimize your builds and your runtime images.

Episode 17: Application configuration management in containers

Your container images should be generic - you should run the same image in every environment. The image is the packaging format and one of the main advantages of Docker is that you can be certain the app you deploy to production will work in the same way as the test environment, because it has the exact same set of binaries in the image.

Images are built in the CI process and then deployed by running containers from the image in the test environments and then onto production. Every environment uses the same image, so to allow for different setups in each environment your application needs to be able to read configuration from the container environment.

Docker creates that environment and you can set configuration using environment variables or files. Your application needs to look for settings in known locations and then you can provide those settings in your Dockerfile and container run commands. The typical approach is to use a hierarchy of config sources, which can be set by the container platform and read by the app:

Learn Docker in a Month: your week 4 guide

Episode 17 walks through different variations of that config hierarchy in Docker, using examples in Node.js with node-config, Go with Viper and the standard config systems in .NET Core and Java Spring Boot.

Episode 18: Writing and managing application logs with Docker

Docker adds a consistent management layer to all your apps - you don't need to know what tech stack they use or how they're configured to know that you start them with docker run and you can monitor them with docker top and docker logs. For that to work, your app needs to fit with the conventions Docker expects.

Container logs are collected from the standard output and standard error streams of the startup process (the one in the CMD or ENTRYPOINT instruction). Modern app platforms run as foreground processes which fits neatly with Docker's expectations. Older apps might write to a different log sink which means you need to relay logs from a file (or other source) to standard out.

You can do that in your Dockerfile without any changes to your application which means old and new apps behave in the same way when they're running in containers:

Learn Docker in a Month: your week 4 guide

This episode shows you how to get logs out from your apps into containers, and then collect those logs from Docker and forward them to a central system for storage and searching - using the EFK stack (Elasticsearch, Fluentd and Kibana).

Episode 19: Controlling HTTP traffic to containers with a reverse proxy

The series ends with a couple of more in-depth topics which will help you understand how your application architecture might look as you migrate more apps to containers. The first is managing network traffic using a reverse proxy.

A reverse proxy runs in a container and publishes ports to Docker. It's the only publicly-accessible component, all your other containers are internal and can only be reached by other containers on the same Docker network. The reverse proxy receives all incoming traffic and fetches the content from the application container:

Learn Docker in a Month: your week 4 guide

Reverse proxies can do a lot of work for you - SSL termination, response caching, sticky sessions - and we see them all in this episode. The demos use two of the most popular technologies in this space, Nginx and Traefik and helps you to evaluate them.

Episode 20: Asynchronous communication with a message queue

This is one of my favourite topics. Message queues let components of your apps communicate asynchronously - decoupling the consumer and the service. It's a great way to add reliability and scale to your architecture, but it used to be complex and expensive before Docker.

Now you can run an enterprise-grade message queue like NATS in a container with minimal effort and start moving your apps to a modern event-driven approach. With a message queue in place you can have multiple features triggering in response to events being created:

Learn Docker in a Month: your week 4 guide

This is an enabler for all sorts of patterns, and episode 20 walks you through a few of them: decoupling a web app from the database to increase scale and adding new features without changing the existing application.

This episode also covers Chapter 22 of the book, with some tips on helping you to gain adoption for Docker in your organization.

And next... Elton's Container Show (ECS)

That's all for the book serialization. I'll do the same thing when my new book Learn Kubernetes in a Month of Lunches gets released - it's in the finishing stages now and you can read all the chapters online.

In the meantime I have a new YouTube show all about containers called... Elton's Container Show. It runs once a week and each month I'll focus on a particular topic. The first topic is Windows containers and then I'll move on to orchestration.

You'll find all the info here at https://eltons.show and the first episode is ECS-W1: We Need to Talk About Windows Containers.

Hope to see you there :)

LEARN DOCKER IN ONE MONTH! Your week 3 guide.

20 septembre 2020 à 19:32
LEARN DOCKER IN ONE MONTH! Your week 3 guide.

My YouTube series to help you learn Docker continued this week with five more episodes. The theme for the week is running at scale with a container orchestrator.

You can find all the episodes on the Learn Docker in a Month of Lunches playlist, and more details about the book at https://diamol.net

Episode 11: Understanding Orchestration - Docker Swarm and Kubernetes

Orchestration is how you run containers at scale in a production environment. You join together a bunch of servers which have Docker running - that's called the cluster. Then you install orchestration software to manage containers for you. When you deploy an application you send a description of the desired state of your app to the cluster. The orchestrator creates containers to run your app and makes sure they keep running even if there are problems with individual servers.

The most common container orchestrators are Docker Swarm and Kubernetes. They have very different ways of modelling your applications and different feature sets, but they work in broadly the same way. They manage the containers running on the server and they expose an API endpoint you use for deployment and administration:

LEARN DOCKER IN ONE MONTH! Your week 3 guide.

This episode walks through the main features of orchestration, like high availability and scale, and the abstractions they provide for compute, networking and storage. The exercises all use Docker Swarm which is very simple to set up - it's a one-line command once you have Docker installed:

docker swarm init  

And that's it :)

Swarm uses the Docker Compose specification to model applications so it's very simple to get started. In the episode I compare Swarm and Kubernetes and suggest starting with Swarm - even if you plan to use Kubernetes some day. The learning curve for Swarm is much smoother than Kubernetes, and once you know Swarm you'll have a good understanding of orchestration which will help you learn Kubernetes (although you'll need a good book to help you, something like Learn Kubernetes in a Month of Lunches).

Episode 12: Deploying Distributed Apps as Stacks in Docker Swarm

Container orchestrators use a distributed database to store application definitions, and your deployments can include custom data for your application - which you can use for configuration settings. That lets you use the same container images which have passed all your automated and manual tests in your production environment, but with production settings applied.

Promoting the same container image up through environments is how you guarantee your production deployment is using the exact same binaries you've approved in testing. The image contains the whole application stack so there's no danger of the deployment failing on missing dependencies or version mismatches.

To support different behaviour in production you create your config objects in the cluster and reference them in your application manifest (the YAML file which is in Docker Compose format if you use Swarm, or Kubernetes' own format). When the orchestrator creates a container which references a config object it surfaces the content of the object as a files in the container filesystem - as in this sample Docker Swarm manifest:

todo-web:  
  image: diamol/ch06-todo-list
  ports:
    - 8080:80
  configs:
    - source: todo-list-config
      target: /app/config/config.json

Config objects are used for app configuration and secrets are used in the same way, but for sensitive data. The episode shows you how to use them and includes other considerations for deploying apps in Swarm mode - including setting compute limits on the containers and persisting data in volumes.

Episode 13: Automating Releases with Upgrades and Rollbacks

One of the goals of orchestration is to have the cluster manage the application for you, and that includes providing zero-downtime updates. Swarm and Kubernetes both provide automated rollouts when you upgrade your applications. The containers running your app are gradually updated, with the ones running the old version replaced with new containers running the new version.

During the rollout the new containers are monitored to make sure they're healthy. If there are any issues then the rollout can be stopped or automatically rolled back to the previous version. The episode walks through several updates and rollbacks, demonstrating all the different configurations you can apply to control the process. This is the overall process you'll see when you watch:

LEARN DOCKER IN ONE MONTH! Your week 3 guide.

You can configure all the aspects of the rollout - how many containers are started, whether the new ones are started first or the old ones are removed first, how long to monitor the health of new containers and what to do if they're not healthy. You need a pretty good understanding of all the options so you can plan your rollouts and know how they'll behave if there's a problem.

Episode 14: Configuring Docker for Secure Remote Access and CI/CD

The Docker command line doesn't do much by itself - it just sends instructions to the Docker API which is running on your machine. The API is part of the Docker Engine, which is what runs and manages containers. You can expose the API to make it remotely available, which means you can manage your Docker servers in the cloud from the Docker command line running on your laptop.

There are good and bad ways to expose the API, and this episode covers the different approaches - including secure access using SSH and TLS. You'll also learn how to use remote machines as Docker Contexts so you can easily apply your credentials and switch between machines with commands like this:

# create a context using TLS certs:
docker context create x--docker "host=tcp://x,ca=/certs/ca.pem,cert=/certs/client-cert.pem,key=/certs/client-key.pem"

# for SSH it would be:
docker context create y --docker "host=ssh://user@y

# connect:
docker context use x

# now this will list containers running on server x:
docker ps  

You'll also see why using environment variables is preferable to docker context use...

Remote access is how you enable the Continuous Deployment part of the CI/CD pipeline. This episode uses Play with Docker, an online Docker playground, as a remote target for a deployment running in Jenkins on a local container. It's a pretty slick exercise (if I say so myself), which you can try out in chapter 15 of Learn Docker in a Month of Lunches.

This feature-packed episode ends with an overview of the access model in Docker, and explains why you need to carefully control who has access to your machines.

Episode 15: Building Docker Images that Run Anywhere: Linux, Windows, Intel & Arm

Every exercise in the book uses Docker images which are built to run on any of the main computing architectures - Windows and Linux operating systems, Intel and Arm CPUs - so you can follow along whatever computer you're using. In this episode you'll find out how that works, with multi-architecture images. A multi-arch image is effectively one image tag which has multiple variants:

LEARN DOCKER IN ONE MONTH! Your week 3 guide.

There are two ways to create multi-arch images: build and push all the variants yourself and then push a manifest to the registry which describes the variants, or have the new Docker buildx plugin do it all for you. The episode covers both options with lots of examples and shows you the benefits and limitations of each.

You'll also learn why multi-arch images are important (example: you could cut your cloud bills almost in half by running containers on Arm instead of Intel on AWS), and the Dockerfile best practices for supporting multi-arch builds.

Coming next

Week 3 covered orchestration and some best practices for production deployments. Week 4 is the final week and the theme is production readiness. You'll learn everything you need to take a Docker proof-of-concept project into production, including optimizing your images, managing app configuration and logging, and controlling traffic to your application with a reverse proxy.

The live stream is running through September 2020 and kicks off on my YouTube channel weekdays at 19:00 UTC. The episodes are available to watch on demand as soon as the session ends.

Hope you can join me on the final leg of your journey to learn Docker in one month :)

Learn Docker in *ONE MONTH*. Your guide to week 2.

13 septembre 2020 à 20:07
Learn Docker in *ONE MONTH*. Your guide to week 2.

I've added five more episodes to my YouTube series Learn Docker in a Month of Lunches. You can find the overview at https://diamol.net and the theme for week 2 is:

Running distributed applications in containers

This follows a tried-and-trusted path for learning Docker which I've used for workshops and training sessions over many years. Week 1 is all about getting used to Docker and the key container concepts. Week 2 is about understanding what Docker enables you to do, focusing on multi-container apps with Docker Compose.

Episode 6: Running multi-container apps with Docker Compose

Docker Compose is a tool for modelling and managing containers. You model your apps in a YAML file and use the Compose command line to start and stop the whole app or individual components.

We start with a nice simple docker-compose.yml file which models a single container plugged into a Docker network called nat:

version: '3.7'

services:

  todo-web:
    image: diamol/ch06-todo-list
    ports:
      - "8020:80"
    networks:
      - app-net

networks:  
  app-net:
    external:
      name: nat

The services section defines a single component called todo-web which will run in a container. The configuration for the service includes the container image to use, the ports to publish and the Docker network to connect to.

Docker Compose files effectively capture all the options you would put in a docker run command, but using a declarative approach. When you deploy apps with Compose it uses the spec in the YAML file as the desired state. It looks at the current state of what's running in Docker and creates/updates/removes objects (like containers or networks) to get to the desired state.

Here's how to run that app with Docker Compose:

# in this example the network needs to exist first:
docker network create nat

# compose will create the container:
docker-compose up  

In the episode you'll see how to build on that, defining distributed apps which run across multiple containers in a single Compose file and exploring the commands to manage them.

You'll also learn how you can inject configuration settings into containerized apps using Compose, and you'll understand the limitations of what Compose can do.

Episode 7: Supporting reliability with health checks and dependency checks

Running your apps in containers unlocks new possibilities, like scaling up and down on demand and spreading your workload across a highly-available cluster of machines. But a distributed architecture introduces new failure modes too, like slow connections and timeouts from unresponsive services.

Docker lets you build reliability into your container images so the platform you use can understand if your applications are healthy and take corrective action if they're not. That gets you on the path to self-healing applications, which manage themselves through transient failures.

The first part of this is the Docker HEALTHCHECK instruction which lets you configure Docker to test if your application inside the container is healthy - here's the simplest example in a Dockerfile:

# builder stage omitted in this snippet
FROM diamol/dotnet-aspnet

ENTRYPOINT ["dotnet", "/app/Numbers.Api.dll"]  
HEALTHCHECK CMD curl --fail http://localhost/health

WORKDIR /app  
COPY --from=builder /out/ .  

This is a basic example which uses curl - I've already written about why it's a bad idea to use curl for container healthchecks and you'll see in this episode the better practice of using the application runtime for your healthcheck.

When a container image has a healthcheck specified, Docker runs the command inside the container to see if the application is healthy. If it's unhealthy for a number of successive checks (the default is three) the Docker API raises an event. Then the container platform can take corrective action like restarting the container or removing it and replacing it.

This episode also covers dependency checks, which you can use in your CMD or ENTRYPOINT instruction to verify your app has all the dependencies it needs before it starts. This is useful in scenarios where components can't do anything useful if they're missing dependencies - but without the check it the container would start and it would look as if everything was OK.

Episode 8: Adding observability with containerized monitoring

Healthchecks and dependency checks get you a good way to reliability, but you also need to see what's going on inside your containers for situations where things go wrong in unexpected ways.

One of the big issues for ops teams moving from VMs to containers is going from a fairly static environment with a known set of machines to monitor, to a dynamic environment where containers appear and disappear all the time.

This episode introduces the typical monitoring stack for containerized apps using Prometheus. In this architecture all your containers expose metrics in an HTTP endpoint, as do your Docker servers. Prometheus runs in a container too and it collects those metrics and stores them in a time-series database.

Learn Docker in *ONE MONTH*. Your guide to week 2.

You need to add metrics to your app using a Prometheus client library, which will provide a set of runtime metrics (like memory and CPU usage) for free. The client library also gives you a simple way to capture your own metrics.

The demo apps for this module have components in .NET, Go, Java and Node.js so you can see how to use client libraries in different languages and wire them up to Prometheus.

You'll learn how to run a monitoring solution in containers alongside your application, all modelled in Docker Compose. One of the great benefits of containerized monitoring is that you can run the same tools in every environment - so developers can use the same Grafana dashboard that ops use in production.

Episode 9: Running multiple environments with Docker Compose

Docker is great for density - running lots of containers on very little hardware. You particularly see that for non-production environments where you don't need high availability and you don't have a lot of traffic to deal with.

This episode shows you how to run multiple environments - different configurations of the same application - on a single server. It covers more advanced Compose topics like override files and extension fields.

You'll also learn how to apply configuration to your apps with different approaches in the Compose file, like this docker-compose.yml example:

version: "3.7"

services:  
  todo-web:
    ports:
      - 8089:80
    environment:
      - Database:Provider=Sqlite
    env_file:
      - ./config/logging.debug.env

secrets:  
  todo-db-connection:
    file: ./config/empty.json

The episode has lots of examples of how you can use Compose to model different configurations of the same application, while keeping your Compose files clean and easy to manage.

Episode 10: Building and testing applications with Docker and Docker Compose

Containers make it easy to build a Continuous Integration pipeline where every component runs in Docker and you can dispense with build servers that need careful nurturing.

This epsiode shows you how to build a simple pipeline using techniques you learned in earlier episodes - like multi-stage Dockerfiles - to keep your CI process portable and maintainable.

You'll see how to run a complete build infrastructure in containers, using Gogs as the Git server, Jenkins to trigger the builds, and a local Docker registry in a container. The exercises focus on the patterns rather than the individual tools, so all the setup is done for you.

The easy way to keep your pipeline definitions clean is to use Docker Compose to model the build workflow as well as the runtime spec. This docker-compose-build.yml file is an override file which isolates the build settings, and uses variables and extension fields to reduce duplication:

version: "3.7"

x-args: &args  
  args:
    BUILD_NUMBER: ${BUILD_NUMBER:-0}
    BUILD_TAG: ${BUILD_TAG:-local}

services:  
  numbers-api:
    build:
      context: numbers
      dockerfile: numbers-api/Dockerfile.v4
      <<: *args

  numbers-web:
    build:
      context: numbers
      dockerfile: numbers-web/Dockerfile.v4
      <<: *args

Of course you're more likely to use managed services like GitHub and Azure DevOps, but the principle is the same - keep all the logic in your Dockerfiles and your Docker Compose files, and all you need from your service provider is Docker. That makes it super easy to migrate between providers without rewriting all your build scripts.

This episode also covers the secure software supply chain, extending your pipeline to include security scanning and signing so you can be sure the containers you run in production are safe.

Coming next

Week 2 covered multi-container apps, and in week 3 we move on to orchestration. We'll use Docker Swarm which is the production-grade orchestrator built into Docker. It's simpler than Kubernetes (which needs it's own series - Learn Kubernetes in a Month of Lunches will be televised in 2021), and it uses the now-familiar Docker Compose specification to model apps.

You can always find the upcoming episode at diamol.net/stream and there are often book giveaways at diamol.net/giveaway.

The live stream is running through September 2020 and kicks off on Elton Stoneman's YouTube channel weekdays at 19:00 UTC. The episodes are available to watch on demand as soon as the session ends.

Hope you can join me and continue to make progress in your Docker journey :)

Learn Docker in one month! Your guide to week 1

6 September 2020 at 19:38
Learn Docker in one month! Your guide to week 1

I'm streaming every chapter of my new book Learn Docker in a Month of Lunches on YouTube, and the first week's episodes are out now.

Here's the Learn Docker in a Month of Lunches playlist.

The book is aimed at new and improving Docker users. It starts from the basics - with best practices built in - and moves on to more advanced topics like production readiness, orchestration, observability and HTTP routing.

It's a hands-on introduction to Docker, and the learning path is one I've honed over years of teaching Docker and Kubernetes at conference workshops and with clients. Every exercise is built to work on Mac, Windows and Arm machines, so you can follow along with whatever tech you like.

Episode 1: Understanding Docker and running Hello, World

You start by learning what a container is - a virtualized environment around the processes which make up an application. The container shares the OS kernel of the machine it's running on, which makes Docker super efficient and lightweight.

The very first exercise gets you to run a simple app in a container to see what the virtual environment looks like (all you need to follow along is Docker):

docker container run diamol/ch02-hello-diamol  

That container just prints some information and exits. In the rest of the episode (which covers chapters 1 & 2 of the book), you'll learn about different ways to run containers, and how containers are different from other types of virtualization.
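
As a taster of those variations, here's a hedged sketch of running a container interactively - the diamol/base image name is my assumption from the book's exercises, so check the episode for the exact command:

# connect your terminal to a shell session inside a new container
docker container run --interactive --tty diamol/base

# back on the host, list every container - including ones that have exited
docker container ls --all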

Episode 2: Building your own Docker images

You package your application into an image so you can run it in containers. All the exercises so far use images which I've already built, and this chapter introduces the Dockerfile syntax and shows you how to build your own images.

An important best practice is to make your container images portable - so in production you use the exact same Docker image that you've tested and approved in other environments. That means there are no gaps in the release: the deployment is the same set of binaries that you've successfully deployed in test.

Portable images need to be able to read configuration from the environment, so you can tweak the behaviour of your apps even though the image is the same. You'll run an exercise like this which shows you how to inject configuration settings using environment variables:

docker container run --env TARGET=google.com diamol/ch03-web-ping  

Watch the episode to learn how that works, and to understand how images are stored as layers. That affects build speeds, image size and the security profile of your app, so it's fundamental to understanding image optimization.
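
You can explore the layers yourself with the standard CLI - a quick sketch using the image from the exercise above:

# list the layers in the image, with the Dockerfile instruction
# and the size each layer adds
docker image history diamol/ch03-web-ping

# compare the logical size of your images with the actual disk space
# used once shared layers are counted only once
docker image ls
docker system df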

Episode 3: Packaging apps from source code into Docker images

The Dockerfile syntax is pretty simple and you can use it to copy binaries from your machine into the container image, or download and extract archives from a web address.

But things get more interesting with multi-stage Dockerfiles, which you can use to compile applications from source code inside Docker. The exercises in this chapter use Go, Java and Node.js - and you don't need any of those runtimes installed on your machine, because all the tools run inside containers.

Here's a sample Dockerfile for a Java app built with Maven:

FROM diamol/maven AS builder

WORKDIR /usr/src/iotd  
COPY pom.xml .  
RUN mvn -B dependency:go-offline

COPY . .  
RUN mvn package

# app
FROM diamol/openjdk

WORKDIR /app  
COPY --from=builder /usr/src/iotd/target/iotd-service-0.1.0.jar .

EXPOSE 80  
ENTRYPOINT ["java", "-jar", "/app/iotd-service-0.1.0.jar"]  

All the tools to download libraries, compile and package the app are in the SDK image - using Maven in this example. The final image is based on a much smaller image with just the Java runtime installed and none of the additional tools.

This approach is supported in all the major languages, and it effectively means you can use Docker as your build server - everyone in the team has the exact same toolset, because everyone uses the same images.
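
Running the build is a single command from the folder containing the Dockerfile - no JDK or Maven needed on your machine (the image tag here is just an illustrative name):

# run the whole multi-stage build: Maven runs inside the builder stage,
# and only the packaged JAR is copied into the final image
docker image build -t image-of-the-day .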

Episode 4: Sharing images with Docker Hub and other registries

Building your own images means you can run your apps in containers, but if you want to make them available to other people you need to share them on a registry like Docker Hub.

This chapter teaches you about image references and how you can use tags to version your applications. If you've only ever used the latest tag then you should watch this one to understand why that's a moving target, and why explicit version tags are a much better approach.

You'll push images to Docker Hub in the exercises (you can sign up for a free account with generous usage levels) and you'll also learn how to run your own registry server in a container with a simple command like this:

docker container run -d -p 5000:5000 --restart always diamol/registry  
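
Once it's running you can push to it by including the registry domain in the image reference - a rough sketch with an illustrative image name (Docker trusts registries on localhost by default, so there's no extra daemon configuration):

# re-tag an existing image to target the local registry, then push it
docker image tag diamol/ch02-hello-diamol localhost:5000/hello-diamol:v1
docker image push localhost:5000/hello-diamol:v1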

It's usually better to use a managed registry like Docker Hub or Azure Container Registry but it's useful to know how to run a registry in your own organization. It can be a simple backup plan if your provider has an outage or you lose Internet connectivity.

This chapter also explains the concept of golden images which your organization can use to ensure all your apps are running from an approved set of base images, curated by an infrastructure or security team.

Episode 5: Using Docker volumes for persistent storage

Containers are great for stateless apps, and you can run apps which write data in containers too - as long as you understand where the data goes. This episode walks you through the container filesystem so you can see how the disk which the container sees is actually composed from multiple sources.

Persisting state is all about separating the lifecycle of the data from the lifecycle of the container. When you update your apps in production you'll delete the existing container and replace it with a new one from the new application image. You can attach the storage from the old container to the new one so all the data is there.

You'll learn how to do that with Docker volumes and with bind mounts, in exercises which use a simple to-do list app that stores data in a SQLite database file:

docker container run --name todo1 -d -p 8010:80 diamol/ch06-todo-list

# browse to http://localhost:8010

There are some limitations to mounting external data sources into the container filesystem which you'll learn all about in the chapter.
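
Here's a hedged sketch of how a named volume keeps the data across container replacements - it assumes the to-do app writes its database file under /data, which is the convention in the chapter's exercises rather than something shown in this post:

# create a named volume and mount it into the app container
docker volume create todo-data
docker container run --name todo-v1 -d -p 8011:80 -v todo-data:/data diamol/ch06-todo-list

# replace the container - the new one mounts the same volume,
# so all the to-do items are still there
docker container rm -f todo-v1
docker container run --name todo-v2 -d -p 8012:80 -v todo-data:/data diamol/ch06-todo-list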

Coming next

Week 1 covers the basics of Docker: containers, images, registries and storage. Week 2 looks at running multi-container apps, introducing Docker Compose to manage multiple containers and approaches to deal with distributed applications - including monitoring and healthchecks.

You can always find the upcoming episode at diamol.net/stream and there are often book giveaways at diamol.net/giveaway.

The live stream runs weekdays through September 2020 on Elton Stoneman's YouTube channel, kicking off at 19:00 UTC. Episodes are available to watch on demand as soon as each session ends.

Hope you can join me and make progress in your Docker journey :)
