Welcome back, intrepid container explorer! So far, we’ve mastered the art of running single containers, crafting custom images, and managing persistent data. You’re practically a Docker wizard! But what if your application isn’t just one lonely container? What if it needs a database, a backend API, a frontend, and maybe a caching service, all working together in perfect sync? Trying to manage all those docker run commands manually would be like trying to conduct an orchestra by shouting instructions at each musician individually — chaotic and prone to error!
That’s where Docker Compose steps in. In this chapter, we’re going to learn how to orchestrate multiple containers, defining and running them as a single, cohesive application using a simple YAML file. Think of Docker Compose as your personal conductor, ensuring every part of your application plays its role beautifully and starts up in harmony. By the end of this chapter, you’ll be able to spin up complex development environments with a single command, making your life as a developer much, much easier.
Before we dive in, make sure you’re comfortable with the concepts we covered in previous chapters: Docker Images, Containers, Volumes, and Networks. We’ll be building on that foundation to create truly powerful multi-service applications. Ready to make some music with Docker Compose? Let’s go!
Core Concepts: Your Orchestral Score
Imagine building a house. You don’t just throw bricks and wood together randomly. You have blueprints, a plan that specifies where each wall goes, where the plumbing runs, and how the electricity connects. Docker Compose gives you a similar “blueprint” for your multi-container applications.
What is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file (usually named docker-compose.yml or compose.yaml) to configure your application’s services. Then, with a single command, you can create and start all the services from your configuration.
Why is it so powerful?
- Simplifies Complex Applications: Instead of juggling multiple `docker run` commands with all their flags for ports, volumes, and networks, you define everything once in a file.
- Reproducible Environments: Your entire development, testing, and even staging environment can be described in this file, ensuring everyone on your team (and your CI/CD pipeline) runs the exact same setup.
- Easy Management: Start, stop, rebuild, and check the status of your entire application stack with intuitive commands.
docker compose vs. docker-compose (A Quick Modern Update!)
You might encounter tutorials or older documentation that refer to docker-compose (with a hyphen). This was the original, standalone Python-based tool (Compose V1).
As of December 2025, the standard and recommended way is to use docker compose (without a hyphen). This is the Compose V2 plugin, which is integrated directly into the Docker CLI. It’s faster, more robust, and uses the same command structure as other Docker CLI commands. So, when you see docker compose in this guide, know we’re using the modern, integrated version!
The docker-compose.yml File: Your Blueprint
The heart of Docker Compose is the docker-compose.yml file. This YAML file describes your application’s services, networks, and volumes. Let’s peek at its general structure. Don’t worry, we’ll break down each part!
```yaml
# docker-compose.yml (Conceptual Structure)
version: '3.8'  # The Compose file format version (optional in Compose V2)

services:
  # Define your individual services (containers) here
  web_app:
    image: my-custom-app:latest
    build: .
    ports:
      - "80:8000"
    environment:
      DATABASE_URL: postgres://user:password@db/mydatabase
    depends_on:
      - db
    networks:
      - app_network

  db:
    image: postgres:16.1-alpine  # Pin an exact version; check Docker Hub for the current release
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - app_network

networks:
  app_network:
    driver: bridge  # Default, often implicitly created

volumes:
  db_data:  # A named volume for persistent database data
```
Key Sections Explained:
- `version`: Specifies the Compose file format version. Note that modern Compose (V2) treats this field as obsolete and ignores it, so it's optional; we include it because you'll still see it in most existing files.
- `services`: This is where you define each individual container (or “service”) that makes up your application. Each service entry has a name (like `web_app` or `db`) and its own configuration.
- `image`: Specifies the Docker image to use for this service (e.g., `postgres:16.1-alpine`).
- `build`: If you have a `Dockerfile` for a service, `build: .` tells Compose to build an image from the `Dockerfile` in the current directory. You can also specify a path to a `Dockerfile` or a build context.
- `ports`: Maps ports from the host machine to the container (e.g., `"80:8000"` maps host port 80 to container port 8000).
- `environment`: Sets environment variables inside the container. Crucial for configuration (like database credentials).
- `volumes`: Mounts host paths or named volumes into the container for persistent data or sharing files.
- `networks`: Connects a service to specific networks. Compose automatically creates a default network, but custom networks are best practice.
- `depends_on`: Declares dependencies between services. For example, `web_app` depending on `db` means the database container will start before the web app container. Important note: `depends_on` only ensures startup order, not that the dependent service is ready. For true readiness, health checks are used (more advanced, but good to know for later!).
- Top-level `networks`: Defines custom networks for your services. This allows containers to communicate with each other using their service names.
- Top-level `volumes`: Defines named volumes, which are the recommended way to persist data generated by Docker containers.
Don’t worry if all of this seems like a lot at once. We’re going to build a real application step-by-step, and you’ll see how each piece fits together!
Step-by-Step Implementation: Building Our First Multi-Container App
Let’s put theory into practice! We’ll create a simple “Hello, Docker Compose!” application. It will consist of:
- A Python Flask web application (our `web` service).
- A PostgreSQL database (our `db` service).
Our Flask app will connect to the PostgreSQL database to store and retrieve a simple message.
1. Project Setup
First, let’s create a new directory for our project and navigate into it.
```bash
mkdir my-compose-app
cd my-compose-app
```
2. The Python Flask Application (app.py)
Create a file named app.py inside your my-compose-app directory. This will be our simple web application.
```python
# my-compose-app/app.py
import os
import time

import psycopg2
from flask import Flask, render_template_string

app = Flask(__name__)

# Environment variables for database connection
DB_HOST = os.environ.get('DB_HOST', 'db')  # 'db' is the service name in docker-compose.yml
DB_NAME = os.environ.get('POSTGRES_DB', 'mydatabase')
DB_USER = os.environ.get('POSTGRES_USER', 'user')
DB_PASSWORD = os.environ.get('POSTGRES_PASSWORD', 'password')


def get_db_connection():
    """Establishes a connection to the PostgreSQL database."""
    retries = 5
    while retries > 0:
        try:
            conn = psycopg2.connect(
                host=DB_HOST,
                database=DB_NAME,
                user=DB_USER,
                password=DB_PASSWORD
            )
            print("Successfully connected to the database!")
            return conn
        except psycopg2.OperationalError as e:
            print(f"Database connection failed: {e}. Retrying in 5 seconds...")
            retries -= 1
            time.sleep(5)
    raise Exception("Could not connect to the database after multiple retries.")


@app.route('/')
def hello_world():
    conn = get_db_connection()
    cur = conn.cursor()

    # Create table if it doesn't exist
    cur.execute("""
        CREATE TABLE IF NOT EXISTS messages (
            id SERIAL PRIMARY KEY,
            content VARCHAR(255) NOT NULL
        )
    """)
    conn.commit()

    # Insert a message if the table is empty
    cur.execute("SELECT COUNT(*) FROM messages")
    if cur.fetchone()[0] == 0:
        cur.execute("INSERT INTO messages (content) VALUES ('Hello from Docker Compose and PostgreSQL!')")
        conn.commit()

    # Retrieve the most recent message
    cur.execute("SELECT content FROM messages ORDER BY id DESC LIMIT 1")
    message = cur.fetchone()[0]
    cur.close()
    conn.close()

    html_content = f"""
    <!doctype html>
    <html lang="en">
    <head>
        <meta charset="utf-8">
        <title>Docker Compose App</title>
        <style>
            body {{ font-family: Arial, sans-serif; text-align: center; margin-top: 50px; background-color: #f4f4f4; }}
            h1 {{ color: #333; }}
            p {{ color: #555; font-size: 1.2em; }}
            .container {{ background-color: #fff; padding: 30px; border-radius: 8px; box-shadow: 0 2px 4px rgba(0,0,0,0.1); display: inline-block; }}
        </style>
    </head>
    <body>
        <div class="container">
            <h1>Welcome to Your Multi-Container App!</h1>
            <p>Message from the database: <strong>{message}</strong></p>
            <p>This page is served by a Flask app, connected to a PostgreSQL database, all orchestrated by Docker Compose!</p>
        </div>
    </body>
    </html>
    """
    return render_template_string(html_content)


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```
Explanation:
- This is a simple Flask application that tries to connect to a PostgreSQL database.
- It uses environment variables (`DB_HOST`, `POSTGRES_DB`, etc.) to get database credentials. This is a common and flexible way for containers to receive configuration.
- The `get_db_connection` function includes a simple retry mechanism, which is helpful because the database might take a moment longer to start up than the web app.
- On the `/` route, it connects to the DB, creates a `messages` table if it doesn’t exist, inserts a default message if the table is empty, and then retrieves and displays that message.
- `app.run(host='0.0.0.0', port=5000)` makes the Flask app accessible from outside its container (which will be important for port mapping later).
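If you'd like to see the retry idea in isolation (no database required), here's a minimal sketch of the same pattern with a pluggable connect function. The helper name `connect_with_retry` and the simulated `flaky_connect` are our own illustrations, not part of the app above:

```python
import time


def connect_with_retry(connect, retries=5, delay=5, sleep=time.sleep):
    """Call `connect()` until it succeeds or the retries are exhausted."""
    while retries > 0:
        try:
            return connect()
        except ConnectionError as e:
            print(f"Connection failed: {e}. Retrying in {delay} seconds...")
            retries -= 1
            sleep(delay)
    raise RuntimeError("Could not connect after multiple retries.")


# Simulate a service that only becomes reachable on the third attempt
attempts = {"n": 0}

def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("service not ready")
    return "connected"


result = connect_with_retry(flaky_connect, sleep=lambda _: None)  # skip real sleeping
print(result)  # connected
```

The key design point is the same as in `app.py`: the web container should tolerate the database coming up slower than it does, rather than crashing on the first failed connection.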
3. Application Dependencies (requirements.txt)
Our Python app needs Flask and psycopg2 (a PostgreSQL adapter). Create requirements.txt in the same directory:
```text
# my-compose-app/requirements.txt
Flask==3.0.3
psycopg2-binary==2.9.9
```
Explanation:
- This file lists the Python packages our Flask app needs. When our `Dockerfile` builds the image, it will install these. Pinning exact versions keeps the build reproducible.
4. Dockerfile for the Web App
Now, let’s create a Dockerfile for our Flask application. This will be placed in the my-compose-app directory.
```dockerfile
# my-compose-app/Dockerfile
# Use a lightweight official Python image (pinned to a specific version)
FROM python:3.11.7-slim-bullseye

# Set the working directory in the container
WORKDIR /app

# Copy the requirements file and install dependencies first.
# This allows Docker to cache this layer if requirements.txt doesn't change.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Expose the port the Flask app runs on
EXPOSE 5000

# Command to run the application
CMD ["python", "app.py"]
```
Explanation:
- `FROM python:3.11.7-slim-bullseye`: We start with a slim Python image for efficiency. (One caveat: Dockerfile comments must start at the beginning of a line; an inline `#` after an instruction like `FROM` is parsed as part of the instruction, so keep version notes on their own line.)
- `WORKDIR /app`: Sets the default directory inside the container.
- `COPY requirements.txt .` and `RUN pip install`: Copies the dependency list and installs it. This is done early to leverage Docker’s build cache.
- `COPY . .`: Copies the rest of our application code (`app.py`) into the container.
- `EXPOSE 5000`: Informs Docker that the container listens on port 5000 at runtime.
- `CMD ["python", "app.py"]`: Specifies the command to run when the container starts.
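Because `COPY . .` copies everything in the build context, it's worth adding a `.dockerignore` file alongside the `Dockerfile` so caches and local-only files stay out of the image. A minimal sketch (the entries are illustrative, not a complete list):

```text
# my-compose-app/.dockerignore (illustrative)
__pycache__/
*.pyc
.git/
docker-compose.yml
```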
5. The Docker Compose File (docker-compose.yml)
Finally, the star of the show! Create docker-compose.yml in your my-compose-app directory.
```yaml
# my-compose-app/docker-compose.yml
version: '3.9'  # Optional in Compose V2, but you'll often still see it

services:
  web:
    build: .  # Build the image using the Dockerfile in the current directory
    ports:
      - "80:5000"  # Map host port 80 to container port 5000
    environment:
      # These environment variables are passed to our Flask app
      POSTGRES_DB: mydatabase
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      DB_HOST: db  # Use the service name 'db' for the database host
    depends_on:
      - db  # Ensures the 'db' service starts before the 'web' service
    networks:
      - app-network  # Connects the 'web' service to 'app-network'

  db:
    image: postgres:16.1-alpine  # Lightweight, pinned PostgreSQL image
    environment:
      # These environment variables configure the PostgreSQL database
      POSTGRES_DB: mydatabase
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - db-data:/var/lib/postgresql/data  # Named volume for persistent data
    networks:
      - app-network  # Connects the 'db' service to 'app-network'

networks:
  app-network:
    driver: bridge  # Explicitly define a bridge network

volumes:
  db-data:  # Define the named volume for PostgreSQL data
```
Explanation (line-by-line):
- `version: '3.9'`: Specifies the Compose file format version. Compose V2 actually ignores this field (it's considered obsolete), so you can omit it; we keep it because you'll still encounter it in many existing files.
- `services:`: This section defines all the individual containers that make up our application.
- `web:`: The name of our first service, representing our Flask web application.
  - `build: .`: Tells Docker Compose to look for a `Dockerfile` in the current directory (`.`) and build an image from it for this service.
  - `ports:` with `- "80:5000"`: A crucial port mapping. Requests coming into port 80 on your host machine (your computer) are forwarded to port 5000 inside the `web` container, where our Flask app is listening.
  - `environment:`: Sets environment variables inside the `web` container. Our `app.py` uses these to connect to the database.
    - `POSTGRES_DB: mydatabase`: The name of the database to connect to.
    - `POSTGRES_USER: user`: The username for the database.
    - `POSTGRES_PASSWORD: password`: The password for the database user.
    - `DB_HOST: db`: This is key! Inside the Docker network, containers can refer to each other by their service names, so `db` refers to our `db` service.
  - `depends_on:` with `- db`: Tells Docker Compose to start the `db` service before the `web` service. Remember, this is for startup order, not full readiness; our `app.py` has a retry loop to handle the database taking a bit longer.
  - `networks:` with `- app-network`: Connects our `web` service to a custom network called `app-network`, which allows `web` and `db` to communicate.
- `db:`: The name of our second service, representing our PostgreSQL database.
  - `image: postgres:16.1-alpine`: The official PostgreSQL image, pinned to version 16.1 with the `alpine` tag for a smaller image size. Pinning an exact version keeps the setup reproducible.
  - `environment:`: Sets environment variables for the PostgreSQL container itself. These are standard PostgreSQL variables for initial setup.
    - `POSTGRES_DB: mydatabase`: Creates a database named `mydatabase` when the container first starts.
    - `POSTGRES_USER: user`: Creates a user named `user`.
    - `POSTGRES_PASSWORD: password`: Sets the password for `user`.
  - `volumes:` with `- db-data:/var/lib/postgresql/data`: Crucial for data persistence! `db-data` refers to a named volume defined in the top-level `volumes` section; Docker manages it for us. `/var/lib/postgresql/data` is the default directory inside the PostgreSQL container where it stores its data. By mounting `db-data` there, our database data persists even if the `db` container is removed or recreated.
  - `networks:` with `- app-network`: Connects our `db` service to the same `app-network` as our `web` service.
- `networks:`: Defines custom networks. `app-network:` declares a network named `app-network`, and `driver: bridge` specifies a standard bridge network. Bridge is the default, but defining it explicitly is good practice.
- `volumes:`: Defines named volumes. `db-data:` declares a named volume called `db-data`, which Docker will create and manage on your host system.
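One refinement worth knowing, though we keep plain values in this chapter: Compose supports variable substitution, so credentials don't have to be hard-coded in the file. Values can come from the shell or from a `.env` file sitting next to `docker-compose.yml`. A hypothetical sketch:

```yaml
# .env (next to docker-compose.yml) would contain, e.g.:
#   POSTGRES_PASSWORD=password
#
# docker-compose.yml fragment using substitution:
services:
  db:
    image: postgres:16.1-alpine
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
```

You can run `docker compose config` to see the file with all variables resolved, which is handy for checking what actually reaches the containers.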
6. Bringing it All to Life!
Now that our blueprint is complete, let’s start our application! Make sure you are in the my-compose-app directory.
```bash
docker compose up
```
Explanation:
- `docker compose up`: This command reads your `docker-compose.yml` file, builds the necessary images (if `build` is specified), creates the networks and volumes, and then starts all the services defined in the file.
- You’ll see a lot of output as Docker Compose pulls images, builds your `web` image, and starts both containers.
- The `-d` flag (`docker compose up -d`) can be used to run the containers in “detached” mode (in the background), so your terminal is free. For now, let’s keep it in the foreground to see the logs.
Once the command finishes and you see messages indicating your Flask app is running, open your web browser and navigate to http://localhost.
You should see a cheerful message: “Hello from Docker Compose and PostgreSQL!” This confirms that your Flask app is running, successfully connected to the PostgreSQL database, and retrieved data from it – all orchestrated by Docker Compose!
7. Managing Your Compose Application
While your app is running (if not detached), open a new terminal window in the my-compose-app directory.
View Running Services:
```bash
docker compose ps
```

This command shows you the status of all services defined in your `docker-compose.yml` file. You should see `web` and `db` listed as `Up`.

View Logs:

```bash
docker compose logs
```

This shows the combined logs from all your services.

```bash
docker compose logs web
```

This shows only the logs from the `web` service.

```bash
docker compose logs -f db
```

The `-f` flag “follows” the logs, showing new output as it appears, which is useful for debugging.

Stop and Remove Services: When you’re done, you can stop and remove all the containers and networks defined in your `docker-compose.yml` with a single command:

```bash
docker compose down
```

Explanation:

- This command stops the running services and removes the containers and the default network created by Compose.
- Important: By default, it does not remove named volumes (like `db-data`). This is a good thing, as it preserves your database data between `up` and `down` cycles! If you do want to remove volumes (e.g., for a fresh start), add the `-v` flag: `docker compose down -v`.
Go ahead and try `docker compose down`. Then, run `docker compose up` again, refresh your browser, and you’ll see the message is still there! This is because our `db-data` volume persisted the database state. How cool is that?
Mini-Challenge: Adding a Redis Cache!
You’ve successfully built a two-service application. Now, let’s add another component to our orchestra!
Challenge: Extend our docker-compose.yml file to include a Redis caching service.
- Add a new service named `cache` that uses the `redis:7.2.4-alpine` image.
- Ensure the `cache` service is connected to our `app-network`.
- Modify your `app.py` (or just imagine you would) to use this Redis cache by adding an environment variable `REDIS_HOST` to the `web` service, pointing to the `cache` service name. You don’t need to implement the actual caching logic in `app.py` for this challenge; just set up the Compose file correctly.
Hint:
- You’ll need a new entry under `services:`.
- The `image:` directive will be key for Redis.
- Don’t forget to connect it to the `app-network`!
- For `REDIS_HOST`, think about how `DB_HOST` was set.
What to observe/learn:
- How easy it is to add new services to an existing Compose file.
- How services communicate via their names on a shared network.
- Reinforcement of the `services`, `image`, `environment`, and `networks` directives.
Solution (after you've tried it!)
```yaml
# my-compose-app/docker-compose.yml (Solution with Redis)
version: '3.9'

services:
  web:
    build: .
    ports:
      - "80:5000"
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      DB_HOST: db
      REDIS_HOST: cache  # New environment variable for the Redis host
    depends_on:
      - db
      - cache  # Now also depends on cache
    networks:
      - app-network

  db:
    image: postgres:16.1-alpine
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - app-network

  cache:  # New Redis service
    image: redis:7.2.4-alpine  # Pinned Redis version
    networks:
      - app-network  # Connect to the same network

networks:
  app-network:
    driver: bridge

volumes:
  db-data:
```
After updating `docker-compose.yml`, run `docker compose up` again. You should see the `cache` service being created and started alongside `web` and `db`. You can verify it with `docker compose ps` and `docker compose logs cache`.
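If you do later wire up the caching logic, the `web` service would read `REDIS_HOST` the same way it reads `DB_HOST`. Here's a minimal sketch of just that plumbing (the helper name and defaults are our own, and the actual Redis client calls are deliberately omitted):

```python
import os


def redis_settings():
    """Read Redis connection settings from the environment, with Compose-friendly defaults."""
    return {
        # 'cache' matches the Compose service name, just as 'db' did for DB_HOST
        "host": os.environ.get("REDIS_HOST", "cache"),
        "port": int(os.environ.get("REDIS_PORT", "6379")),
    }


print(redis_settings())
```

Inside the Compose network, passing `redis_settings()["host"]` to a Redis client would resolve to the `cache` container via Docker's built-in service-name DNS.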
Common Pitfalls & Troubleshooting
Even with a powerful tool like Docker Compose, things can sometimes go sideways. Here are a few common issues and how to tackle them:
- YAML Indentation Errors: YAML is very sensitive to whitespace! A single incorrect space can lead to `ERROR: yaml.scanner.ScannerError` or similar messages.
  - Solution: Use a good text editor that highlights YAML syntax (like VS Code with a YAML extension). Pay close attention to indentation. It’s usually 2 spaces per level, never tabs!
- Service Startup Order vs. Readiness: `depends_on` only guarantees that one service starts before another. It doesn’t wait for the dependent service to be fully ready (e.g., the database accepting connections).
  - Solution: For production-grade applications, use Docker’s `healthcheck` directive in your `docker-compose.yml` to define when a service is truly ready. For development, a simple retry loop in your application code (like we did in `app.py`) is often sufficient.
- Port Conflicts: If you try to map a host port that’s already in use by another application or container, you’ll get an error like `port is already allocated`.
  - Solution: Change the host port mapping (e.g., `"8080:5000"` instead of `"80:5000"`), or stop the application currently using that port.
- Networking Issues (Service Not Found): If one service can’t connect to another (e.g., `web` can’t find `db`), check your network configuration.
  - Solution: Ensure all services that need to communicate are part of the same Docker network. Always use the service name (e.g., `db` for the database host) when one container tries to reach another. Verify with `docker network ls` and `docker network inspect <network_name>`.
- Environment Variable Mismatches: If your application isn’t picking up configuration, check that the environment variables in `docker-compose.yml` match what your application expects.
  - Solution: Double-check variable names and values. Use `docker compose config` to see the fully resolved configuration, and `docker compose exec <service_name> printenv` to inspect environment variables inside a running container.
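The `healthcheck` approach mentioned above can be sketched like this. The intervals are illustrative, and `pg_isready` is the readiness utility that ships with the official Postgres image:

```yaml
services:
  db:
    image: postgres:16.1-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d mydatabase"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy  # Wait until the healthcheck passes, not just container start
```

Note the long form of `depends_on`: with `condition: service_healthy`, Compose delays starting `web` until the database actually answers, which would let you drop the retry loop from `app.py`.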
Summary: Your Multi-Container Maestro!
Phew! You’ve just taken a huge leap in your Docker journey! Let’s recap what you’ve learned:
- Docker Compose is your go-to tool for defining and running multi-container Docker applications with ease.
- The `docker-compose.yml` file acts as your application’s blueprint, declaring services, networks, and volumes.
- You learned about key directives like `build`, `image`, `ports`, `environment`, `volumes`, `networks`, and `depends_on`.
- You successfully built a two-service (web + database) application and even added a third service (Redis cache), demonstrating how seamlessly Docker Compose manages complex stacks.
- You’re now using the modern `docker compose` (V2) command, integrated directly into the Docker CLI.
- You’re familiar with essential commands like `docker compose up`, `docker compose down`, `docker compose ps`, and `docker compose logs`.
- You’ve identified common pitfalls and learned basic troubleshooting techniques.
You’re no longer just running containers; you’re orchestrating them into a harmonious application! This skill is absolutely fundamental for any modern development workflow.
What’s next? While Docker Compose is fantastic for development and single-host deployments, what happens when you need to scale your application across multiple servers or handle complex deployments in a production environment? In the next chapter, we’ll begin to explore container orchestration platforms like Docker Swarm (and a peek towards Kubernetes!), which are designed for just such scenarios. Get ready to take your container skills to the cloud!