Introduction

In the preceding chapters, you’ve mastered the art of running individual Docker containers and managing them on a single host. However, real-world applications often require multiple containers working together, needing high availability, scalability, and load balancing across several machines. This is where container orchestration comes into play. Orchestration automates the deployment, management, scaling, and networking of containers.

Docker Swarm is Docker’s native solution for orchestrating containers. It turns a pool of Docker hosts into a single, virtual Docker host, allowing you to deploy and manage applications as a collection of services. This chapter will delve into the fundamentals of Docker Swarm, guiding you through setting up a swarm, deploying services, and managing their lifecycle.

Main Explanation

Docker Swarm is a clustering and scheduling tool for Docker containers. It allows you to create and manage a cluster of Docker nodes as a single virtual Docker Engine.

Key Concepts in Docker Swarm

  • Node: A Docker Engine instance participating in the Swarm. Nodes can be:
    • Manager Node: Handles cluster management tasks, maintains the Swarm state using Raft consensus, and dispatches tasks to worker nodes. A Swarm can have multiple manager nodes for high availability, but only one is the leader at any given time.
    • Worker Node: Runs the actual services (containers). Worker nodes receive and execute tasks from manager nodes.
  • Service: A definition of the tasks to be executed on the Swarm. It defines which Docker image to use, the command to run, exposed ports, replica count, and more. Services can be:
    • Replicated Service: Runs a specified number of identical tasks across the Swarm.
    • Global Service: Runs exactly one task on every available node in the Swarm.
  • Task: A running instance of a service. When a service is deployed or scaled, the Swarm manager creates tasks. A task is essentially a container with a specific configuration.
  • Stack: A collection of related services, defined by a single docker-compose.yml file, deployed together as a single application.
  • Overlay Network: A network that spans across multiple Docker hosts, allowing containers on different machines to communicate seamlessly as if they were on the same host. Swarm mode automatically creates an overlay network for services.
  • Raft Consensus: The algorithm used by Swarm manager nodes to maintain a consistent state across the cluster. This ensures that all managers agree on the current state of the Swarm.
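The concepts above map directly onto CLI commands. As a sketch, you can create a user-defined overlay network and attach a service to it (the names my-overlay and my-service are placeholders, not from this chapter's examples):

```shell
# Create an overlay network that spans all Swarm nodes
docker network create --driver overlay my-overlay

# Run a replicated service on that network; tasks on different
# hosts can reach each other by service name over the overlay
docker service create --name my-service --network my-overlay --replicas 2 nginx:latest
```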

Initializing a Docker Swarm

To start a Swarm, you initialize it on a Docker host, which then becomes the first manager node.

docker swarm init --advertise-addr <MANAGER-IP>

The --advertise-addr flag specifies the IP address that other nodes will use to connect to this manager.

Adding Nodes to a Swarm

Once a Swarm is initialized, you can add more manager or worker nodes.

  • Adding a Worker Node: The docker swarm init command outputs a docker swarm join command for worker nodes.
    docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:<PORT>
    
  • Adding a Manager Node: Running docker swarm join-token manager on an existing manager prints the corresponding join command.
    docker swarm join --token <MANAGER-TOKEN> <MANAGER-IP>:<PORT>
    
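If you no longer have the original output at hand, the current join commands (tokens included) can be printed again from any manager node:

```shell
# Print the full join command for new worker nodes
docker swarm join-token worker

# Print the full join command for new manager nodes
docker swarm join-token manager

# Rotate a token if it may have been exposed
docker swarm join-token --rotate worker
```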

Deploying Services

Services are the core building blocks of applications in Swarm. You define a service, and the Swarm manager ensures the desired state is maintained.

docker service create --name my-web-app -p 80:80 --replicas 3 nginx:latest

This command creates a service named my-web-app running 3 replicas of the nginx:latest image. Port 80 is published on every Swarm node through the routing mesh.
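Once created, you can confirm the service's desired state; the --pretty flag prints a human-readable summary instead of raw JSON:

```shell
# Show the service definition (image, replicas, ports, update policy)
docker service inspect --pretty my-web-app

# Stream the combined logs of all the service's tasks
docker service logs my-web-app
```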

Scaling Services

You can easily scale a service up or down to meet demand.

docker service scale my-web-app=5

This scales the my-web-app service to 5 replicas.
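docker service scale is shorthand for setting the replica count through docker service update; the two commands below are equivalent:

```shell
# Shorthand form (can scale several services in one call)
docker service scale my-web-app=5

# Equivalent long form
docker service update --replicas 5 my-web-app
```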

Updating Services

Rolling updates allow you to update your application without downtime.

docker service update --image nginx:1.23.0 my-web-app

This updates the my-web-app service to use nginx:1.23.0. Swarm will gradually replace old containers with new ones.
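The pace of a rolling update can be tuned, and a bad update can be reverted. A sketch using docker service update flags:

```shell
# Replace tasks two at a time, waiting 10s between batches,
# and roll back automatically if the new tasks fail to start
docker service update \
  --update-parallelism 2 \
  --update-delay 10s \
  --update-failure-action rollback \
  --image nginx:1.23.0 my-web-app

# Manually revert to the previous service definition
docker service update --rollback my-web-app
```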

Deploying Stacks

For multi-service applications, you use docker-compose.yml files (often renamed to docker-stack.yml in Swarm context) to define your services and then deploy them as a stack.

docker stack deploy -c docker-compose.yml my-app-stack
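After deploying, a stack can be examined with its own subcommands:

```shell
# List all stacks and how many services each contains
docker stack ls

# List the services belonging to one stack
docker stack services my-app-stack

# List the individual tasks across all of the stack's services
docker stack ps my-app-stack
```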

Managing the Swarm

  • Listing Nodes: docker node ls
  • Listing Services: docker service ls
  • Listing Tasks: docker service ps <SERVICE-NAME>
  • Removing a Service: docker service rm <SERVICE-NAME>
  • Removing a Stack: docker stack rm <STACK-NAME>
  • Leaving a Swarm: docker swarm leave (on a worker) or docker swarm leave --force (on a manager)
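One management operation worth knowing beyond the list above is draining a node for maintenance, which moves its tasks to other nodes (worker1 here is a placeholder hostname):

```shell
# Stop scheduling new tasks on the node and reschedule existing ones away
docker node update --availability drain worker1

# Return the node to service afterwards
docker node update --availability active worker1
```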

Examples

Let’s walk through setting up a simple Docker Swarm and deploying an Nginx service.

Prerequisites: You’ll need at least two machines (physical or virtual) with a recent version of Docker Engine installed. For this example, let’s assume 192.168.1.10 is your manager and 192.168.1.11 is your worker.

1. Initialize the Swarm (on 192.168.1.10)

First, initialize the Swarm on your manager node.

docker swarm init --advertise-addr 192.168.1.10

You will see output similar to this, including the join command for workers:

Swarm initialized: current node (c6d8e...) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3k3... 192.168.1.10:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

2. Add a Worker Node (on 192.168.1.11)

Copy the docker swarm join command from the manager’s output and run it on your worker node (192.168.1.11).

docker swarm join --token SWMTKN-1-3k3... 192.168.1.10:2377

You should see: This node joined a swarm as a worker.

3. Verify Swarm Nodes (on 192.168.1.10)

Back on your manager node, check the list of nodes in your Swarm.

docker node ls

Output:

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
c6d8e... *                    manager1   Ready     Active         Leader           29.0.2
e1f2g...                      worker1    Ready     Active                          29.0.2

4. Deploy an Nginx Service

Now, let’s deploy a replicated Nginx service with 3 replicas.

docker service create --name my-nginx-service -p 80:80 --replicas 3 nginx:latest

You should see output like: t8u9v... (the service ID).

5. Inspect the Service

Check the status of your deployed service.

docker service ls

Output:

ID             NAME                MODE         REPLICAS   IMAGE          PORTS
t8u9v...       my-nginx-service    replicated   3/3        nginx:latest   *:80->80/tcp

You can also see which tasks are running on which nodes:

docker service ps my-nginx-service

Output (may vary depending on scheduling):

ID             NAME                IMAGE          NODE                DESIRED STATE   CURRENT STATE         ERROR     PORTS
p0q1r...       my-nginx-service.1  nginx:latest   manager1            Running         Running 5 seconds ago
s2t3u...       my-nginx-service.2  nginx:latest   worker1             Running         Running 5 seconds ago
v4w5x...       my-nginx-service.3  nginx:latest   worker1             Running         Running 5 seconds ago

Now, you can access Nginx by navigating to http://192.168.1.10 or http://192.168.1.11 in your browser. Docker Swarm’s routing mesh (ingress) will distribute traffic to the available Nginx containers.
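You can observe the routing mesh from the command line as well; this sketch assumes the example IPs above and that curl is installed:

```shell
# Each request may be answered by a different replica; requests sent
# to a node's IP are routed even if no task runs on that node
for i in 1 2 3; do
  curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.10/
done
```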

6. Scale the Service

Let’s scale the Nginx service to 5 replicas.

docker service scale my-nginx-service=5

Check the service status again:

docker service ls

You’ll see 5/5 under REPLICAS.

docker service ps my-nginx-service

You’ll now see 5 tasks running across your nodes.

7. Deploy a Multi-Service Application (Stack)

Create a docker-compose.yml file (e.g., web-app.yml):

version: '3.8'
services:
  web:
    image: containous/whoami
    ports:
      - "8080:80"
    deploy:
      replicas: 2
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
  visualizer:
    image: dockersamples/visualizer
    ports:
      - "8081:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]

Deploy this stack:

docker stack deploy -c web-app.yml my-web-stack

Verify the services:

docker stack services my-web-stack

You can now access the whoami service on port 8080 of any node, and the visualizer on port 8081 of the manager node.
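whoami reports the container it ran in, which makes the load balancing visible; this sketch assumes the stack above is deployed and curl is available:

```shell
# Repeated requests should report different container hostnames
# as the routing mesh distributes them across the two replicas
curl http://192.168.1.10:8080
curl http://192.168.1.10:8080
```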

8. Clean Up

Remove the services and stack.

docker service rm my-nginx-service
docker stack rm my-web-stack

Finally, if you want to tear down the Swarm completely, have each node leave it. On the worker node:

docker swarm leave

On the manager node:

docker swarm leave --force

Mini Challenge

Deploy a simple voting application using Docker Swarm. The application consists of five services:

  1. vote: A Python web app (image: dockersamples/examplevotingapp_vote)
  2. redis: A Redis database (image: redis:latest)
  3. worker: A .NET worker app (image: dockersamples/examplevotingapp_worker)
  4. db: A PostgreSQL database (image: postgres:latest)
  5. result: A Node.js web app (image: dockersamples/examplevotingapp_result)

Your task is to:

  • Create a docker-compose.yml file for these services.
  • Deploy this application as a stack named voting-app to your Docker Swarm.
  • Ensure the vote and result services are accessible on ports 5000 and 5001 respectively.
  • Scale the vote service to 4 replicas after deployment.
  • Verify all services are running and accessible.

Summary

Docker Swarm provides a straightforward and integrated way to orchestrate Docker containers. By understanding key concepts like nodes, services, and stacks, you can effectively deploy scalable, highly available applications across a cluster of machines. Swarm mode simplifies the complexities of distributed systems, offering built-in features like load balancing, rolling updates, and service discovery, making it an excellent choice for many containerized workloads. With Docker Swarm, you move beyond single-host container management to robust, production-ready application deployments.