Introduction: Building a Multi-Service Application

Welcome back, intrepid Docker explorer! So far, we’ve learned how to containerize individual applications and use Docker Compose to manage a few related services. But what about truly complex, real-world applications? Almost every application needs to store data, and many benefit from fast data access through caching.

In this chapter, we’re going to level up our Docker Compose skills by integrating two crucial components into our application stack: a database for persistent data storage and a caching service for blazing-fast data retrieval. We’ll use PostgreSQL as our database and Redis as our caching layer, all orchestrated seamlessly with Docker Compose. This is where the magic of creating interconnected, robust applications truly shines!

By the end of this chapter, you’ll have a fully functional multi-service application running in Docker, complete with a web app, a database, and a cache. You’ll understand how these components communicate, how to manage their data, and why each plays a vital role in modern application architecture. Ready to build something awesome? Let’s dive in!

Core Concepts: Databases, Caching, and Docker Compose Harmony

Before we start writing code, let’s briefly touch upon the core ideas behind integrating a database and a caching service within our Dockerized world. Understanding why we do things this way will make the how much clearer.

Databases in Docker: Persistent Data, Isolated Environments

Imagine your application as a bustling restaurant. The database is like the kitchen’s pantry and recipe book – it’s where all the crucial ingredients (data) are stored and organized so that dishes (application features) can be prepared consistently.

When we run a database inside a Docker container, we get several benefits:

  • Isolation: The database runs in its own environment, separate from your application code. This means no messy installations on your host machine and no conflicts between different database versions.
  • Portability: Your database configuration and data are part of your Docker setup, making it easy to move your entire application (including its database) between different environments (development, staging, production).
  • Version Control: You can specify the exact database version you want (e.g., PostgreSQL 16), ensuring consistency across your team and deployments.

The biggest challenge with databases in containers is data persistence. If a container is removed, its data is gone! This is where Docker Volumes come to the rescue. By mounting a volume, we tell Docker to store the database’s data outside the container’s writable layer, ensuring it survives container restarts, updates, or even deletions. We covered volumes in a previous chapter, and now we’ll see them in action with a critical use case.
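If you want to see what Docker is managing on your behalf, the volume subcommands are handy. A caveat worth knowing: Docker Compose prefixes named volumes with the project name, so the volume we'll define later in this chapter will show up as something like docker-db-cache-app_db-data (the exact name depends on your project directory):

```shell
# List all named volumes Docker is managing
docker volume ls

# Show details for one volume, including where its data lives on the host
# (substitute the volume name you see in the listing above)
docker volume inspect docker-db-cache-app_db-data
```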

Caching with Docker: Speeding Up Your Application

If the database is the pantry, then a caching service like Redis is like a small, super-fast fridge right next to the chef. Instead of going all the way to the pantry for frequently used ingredients, the chef can grab them instantly from the fridge.

Caching stores frequently accessed data in a very fast, temporary storage layer (often in-memory). This drastically reduces the number of times your application has to query the (slower) main database, leading to:

  • Improved Performance: Faster response times for users.
  • Reduced Database Load: Less strain on your database, allowing it to handle more complex queries or a higher volume of less frequent requests.
  • Scalability: Caching can help your application handle more users without immediately needing to scale up your database.

Redis is a popular open-source, in-memory data structure store that can be used as a database, cache, and message broker. It’s incredibly fast and versatile, making it a perfect candidate for our caching layer. Running Redis in a Docker container provides the same isolation and portability benefits as our database.
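The core caching idea can be sketched in a few lines of plain Python. Here a dictionary stands in for Redis, and a plain function stands in for a database query; both are illustrative stand-ins, not the real clients we'll wire up later in this chapter:

```python
import time

cache = {}  # stand-in for Redis: key -> (value, expiry_time)

def slow_database_query(key):
    # Pretend this is an expensive round trip to PostgreSQL
    return f"value-for-{key}"

def get_with_cache(key, ttl_seconds=60):
    entry = cache.get(key)
    if entry is not None and entry[1] > time.monotonic():
        return entry[0]  # cache hit: skip the database entirely
    value = slow_database_query(key)  # cache miss: ask the database
    cache[key] = (value, time.monotonic() + ttl_seconds)  # remember it
    return value

print(get_with_cache("greeting"))  # first call: goes to the "database"
print(get_with_cache("greeting"))  # second call: served from the cache
```

Real Redis adds network access, atomic operations, and automatic key expiration on top of this basic pattern, but the lookup-then-populate flow is the same.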

Docker Compose: The Conductor of Our Orchestra

We’ve already seen how docker compose acts as the conductor for our multi-container symphony. In this chapter, it will orchestrate our web application, the PostgreSQL database, and the Redis caching service.

Remember, Docker Compose creates a default network for all services defined in a docker-compose.yml file. This means our web application container can reach the database container simply by using the database service’s name (e.g., db) as its hostname, and similarly for the Redis service (e.g., redis). No complex IP addresses needed! It’s all handled for us.
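Once the stack from this chapter is up and running, you can verify this name resolution yourself. For example, the web container (built from a Python image) can resolve the db service name from inside the Compose network:

```shell
# From inside the web container, resolve the "db" service name to an IP
docker compose exec web python -c "import socket; print(socket.gethostbyname('db'))"
```

You should see a private IP address from the Compose network printed back, proof that the hostname db is resolvable without any manual configuration.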

Step-by-Step Implementation: Building Our Multi-Service App

Alright, let’s get our hands dirty and build this application! We’ll start with a simple Python Flask application and then incrementally add our database and caching services.

Step 1: The Basic Flask Application

First, let’s set up our Flask application. We’ll create a new directory for our project.

  1. Create Project Directory: Open your terminal and create a new directory for our project:

    mkdir docker-db-cache-app
    cd docker-db-cache-app
    
  2. Create app.py: This will be our simple Flask application. Create a file named app.py inside your docker-db-cache-app directory and add the following code:

    # app.py
    from flask import Flask, render_template_string
    import os
    
    app = Flask(__name__)
    
    # A very basic Flask app for now
    @app.route('/')
    def hello():
        return render_template_string("<h1>Hello from our Dockerized App!</h1><p>We're about to add a database and a cache!</p>")
    
    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)
    
    • Explanation: This is a minimal Flask application. It defines one route (/) that returns a simple HTML message. app.run(host='0.0.0.0', port=5000) makes sure the application is accessible from outside the container.
  3. Create requirements.txt: We need to tell Docker what Python packages our Flask app depends on. Create requirements.txt in the same directory:

    Flask==3.0.3
    
    • Explanation: We’re explicitly pinning Flask to a specific version rather than taking whatever happens to be newest. This ensures consistent, reproducible builds across machines and over time.
  4. Create Dockerfile for the Flask App: Now, let’s create a Dockerfile to containerize our Flask application.

    # Dockerfile
    # Use the official Python 3.12 slim image as a base
    FROM python:3.12-slim-bookworm
    
    # Set the working directory in the container
    WORKDIR /app
    
    # Copy the requirements file into the container
    COPY requirements.txt .
    
    # Install the Python dependencies
    RUN pip install --no-cache-dir -r requirements.txt
    
    # Copy the rest of the application code into the container
    COPY . .
    
    # Expose port 5000, as our Flask app runs on it
    EXPOSE 5000
    
    # Define the command to run the Flask application
    CMD ["python", "app.py"]
    
    • Explanation:
      • FROM python:3.12-slim-bookworm: We start with a lightweight Python 3.12 image based on Debian Bookworm. Using slim images is a best practice for smaller, more secure containers.
      • WORKDIR /app: Sets the current directory inside the container to /app. All subsequent commands will run from here.
      • COPY requirements.txt .: Copies our requirements.txt into the /app directory.
      • RUN pip install --no-cache-dir -r requirements.txt: Installs our Flask dependency. --no-cache-dir is a best practice to keep image size down.
      • COPY . .: Copies all other files from our current directory (including app.py) into the container’s /app directory.
      • EXPOSE 5000: Informs Docker that the container listens on port 5000. This is just documentation; it doesn’t actually publish the port.
      • CMD ["python", "app.py"]: Specifies the command to run when the container starts.
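Before wiring in Compose, you can sanity-check the Dockerfile on its own. The image tag flask-demo below is just an arbitrary local name, and this assumes you have curl available on your host:

```shell
# Build the image from the Dockerfile in the current directory
docker build -t flask-demo .

# Run it, publishing container port 5000 to the host
docker run --rm -p 5000:5000 flask-demo

# In another terminal, check the response
curl http://localhost:5000
```

If you see the "Hello from our Dockerized App!" HTML, the containerized Flask app works. Stop it with Ctrl+C before moving on.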

Step 2: Adding PostgreSQL to docker-compose.yml

Now, let’s define our multi-service application using docker-compose.yml. We’ll start by adding our Flask app and the PostgreSQL database.

  1. Create docker-compose.yml: Create a file named docker-compose.yml in the docker-db-cache-app directory and add the following:

    # docker-compose.yml
    
    services:
      web:
        build: .
        ports:
          - "5000:5000"
        depends_on:
          - db
        environment:
          DATABASE_URL: postgresql://user:password@db:5432/mydatabase
    
      db:
        image: postgres:16
        environment:
          POSTGRES_DB: mydatabase
          POSTGRES_USER: user
          POSTGRES_PASSWORD: password
        volumes:
          - db-data:/var/lib/postgresql/data
    
    volumes:
      db-data:
    
    • Explanation of web service:

      • build: .: Tells Docker Compose to build an image for this service using the Dockerfile in the current directory.
      • ports: - "5000:5000": Maps port 5000 on your host machine to port 5000 in the web container, allowing you to access the Flask app from your browser.
      • depends_on: - db: This is a soft dependency. It ensures the db service is started before the web service. However, it doesn’t wait for the database inside the container to be ready to accept connections. We’ll handle robust readiness checks in later chapters.
      • environment: DATABASE_URL: ...: We’re setting an environment variable that our Flask app will use to connect to the database. Notice db as the hostname – Docker Compose’s internal networking makes the service name resolvable to the container’s IP.
    • Explanation of db service:

      • image: postgres:16: We’re using the official PostgreSQL 16 image from Docker Hub. Pinning the major version like this keeps every environment, and every teammate, on the same release.
      • environment: These are crucial for configuring the PostgreSQL container.
        • POSTGRES_DB: Sets the name of the default database to create.
        • POSTGRES_USER: Sets the default user for the database.
        • POSTGRES_PASSWORD: Sets the password for the default user. Important: In a real-world scenario, you would use more secure passwords and manage them with environment variables or Docker secrets, not hardcode them!
      • volumes: - db-data:/var/lib/postgresql/data: This is where persistence comes in!
        • db-data: This refers to a named volume we’re defining at the bottom of the file. Docker will manage this volume on your host system.
        • /var/lib/postgresql/data: This is the default directory inside the PostgreSQL container where it stores its data files. By mounting db-data here, our database’s information will persist even if the db container is removed and recreated.
    • Explanation of volumes section:

      • db-data:: This simply declares a named volume called db-data. Docker will create and manage this volume for us.
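Once you've started the stack (we'll do that in Step 5), you can confirm the database came up with the expected name and user by running psql inside the db container. The postgres image bundles the psql client, so nothing needs to be installed on your host:

```shell
# Open an interactive psql session as the "user" account
docker compose exec db psql -U user -d mydatabase

# Or run a one-off query without an interactive session
docker compose exec db psql -U user -d mydatabase -c "SELECT version();"
```

If authentication fails here, re-check the POSTGRES_* values in docker-compose.yml; they must match the credentials you connect with.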

Step 3: Adding Redis to docker-compose.yml

Now let’s add our caching service, Redis, to the docker-compose.yml.

  1. Update docker-compose.yml: Add a new redis service to your docker-compose.yml file:

    # docker-compose.yml (updated)
    
    services:
      web:
        build: .
        ports:
          - "5000:5000"
        depends_on:
          - db
          - redis # Add redis dependency
        environment:
          DATABASE_URL: postgresql://user:password@db:5432/mydatabase
          REDIS_HOST: redis # Add Redis host environment variable
    
      db:
        image: postgres:16
        environment:
          POSTGRES_DB: mydatabase
          POSTGRES_USER: user
          POSTGRES_PASSWORD: password
        volumes:
          - db-data:/var/lib/postgresql/data
    
      redis: # New service for Redis
        image: redis:7-alpine # Official Redis 7 image; the alpine variant keeps it small
        # No volumes needed for simple caching, as data is transient or can be rebuilt
        # For persistent Redis, you'd add a volume like - redis-data:/data
    
    volumes:
      db-data:
      # redis-data: # Uncomment if you need Redis persistence
    
    • Explanation of redis service:
      • image: redis:7-alpine: We’re using the official Redis 7 image, specifically the alpine variant, which is very small and efficient.
      • We don’t typically need a volume for Redis if we’re just using it as a transient cache, as the data can be rebuilt from the primary database. If you needed Redis data to persist across container restarts, you would add a volume similar to the db service.
    • Updates to web service:
      • depends_on: - redis: We’ve added redis to the depends_on list.
      • environment: REDIS_HOST: redis: We’ve added another environment variable for our Flask app to know how to connect to Redis, again using the service name redis as the hostname.
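Once the stack is running, a quick way to confirm Redis is reachable is the redis-cli tool bundled in the image:

```shell
# PING should answer PONG if Redis is up and healthy
docker compose exec redis redis-cli ping

# Later, peek at the keys our app creates (e.g. all_messages, page_views_counter)
docker compose exec redis redis-cli keys '*'
```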

Step 4: Connecting the Flask App to DB & Redis

Now that our docker-compose.yml defines all three services, let’s update our Flask application to actually connect to and use PostgreSQL and Redis.

  1. Update requirements.txt: Our Flask app now needs libraries to talk to PostgreSQL and Redis. Update your requirements.txt file:

    Flask==3.0.3
    psycopg2-binary==2.9.9
    redis==5.0.1
    
    • Explanation:
      • psycopg2-binary==2.9.9: A popular PostgreSQL adapter for Python. The -binary variant ships precompiled, so nothing needs to be built inside our image.
      • redis==5.0.1: The official Python client for Redis.
  2. Update app.py: Now, let’s modify app.py to connect to both services and provide some basic functionality.

    # app.py (updated)
    from flask import Flask, render_template_string, request, redirect, url_for
    import os
    import json
    import psycopg2
    import redis
    
    app = Flask(__name__)
    
    # --- Database & Cache Configuration ---
    DB_URL = os.getenv('DATABASE_URL', 'postgresql://user:password@localhost:5432/mydatabase')
    REDIS_HOST = os.getenv('REDIS_HOST', 'localhost')
    REDIS_PORT = int(os.getenv('REDIS_PORT', 6379))
    
    # Function to connect to the database
    def get_db_connection():
        conn = psycopg2.connect(DB_URL)
        return conn
    
    # Function to connect to Redis
    def get_redis_client():
        r = redis.Redis(host=REDIS_HOST, port=REDIS_PORT, decode_responses=True)
        return r
    
    # Initialize database table if it doesn't exist
    def init_db():
        conn = get_db_connection()
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE IF NOT EXISTS messages (
                id SERIAL PRIMARY KEY,
                content TEXT NOT NULL,
                timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
            );
        """)
        conn.commit()
        cur.close()
        conn.close()
    
    # Call init_db when the app starts
    with app.app_context():
        init_db()
    
    @app.route('/', methods=['GET', 'POST'])
    def home():
        r = get_redis_client()
        db_conn = get_db_connection()
        cur = db_conn.cursor()
    
        # Handle message submission
        if request.method == 'POST':
            message_content = request.form['content']
            if message_content:
                cur.execute("INSERT INTO messages (content) VALUES (%s)", (message_content,))
                db_conn.commit()
                # Invalidate cache for messages
                r.delete('all_messages')
            cur.close()
            db_conn.close()
            return redirect(url_for('home'))
    
        # Get messages, trying the cache first
        cached = r.get('all_messages')
        if cached:
            messages = json.loads(cached)
            print("Messages loaded from Redis cache!")
        else:
            cur.execute("SELECT content, timestamp FROM messages ORDER BY timestamp DESC")
            # Format timestamps as strings so the rows are JSON-serializable
            messages = [
                (content, ts.strftime('%Y-%m-%d %H:%M:%S'))
                for content, ts in cur.fetchall()
            ]
            r.setex('all_messages', 60, json.dumps(messages))  # Cache for 60 seconds
            print("Messages loaded from PostgreSQL database and cached!")
    
        # Redis counter
        page_views = r.incr('page_views_counter')
    
        cur.close()
        db_conn.close()
    
        html_template = """
        <!doctype html>
        <title>Docker DB & Cache App</title>
        <h1>Welcome to our Docker Multi-Service App!</h1>
        <p>This page has been viewed {{ page_views }} times.</p>
    
        <h2>Submit a Message</h2>
        <form method="post">
            <input type="text" name="content" placeholder="Your message" required>
            <button type="submit">Add Message</button>
        </form>
    
        <h2>Messages from PostgreSQL</h2>
        <ul>
            {% for msg_content, msg_timestamp in messages %}
            <li><strong>{{ msg_content }}</strong> <small>({{ msg_timestamp }})</small></li>
            {% else %}
            <li>No messages yet!</li>
            {% endfor %}
        </ul>
        """
        return render_template_string(html_template, page_views=page_views, messages=messages)
    
    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)
    
    • Explanation of app.py changes:
      • Imports: Added psycopg2, redis, json, request, redirect, and url_for.
      • Configuration: DB_URL, REDIS_HOST, and REDIS_PORT are now read from environment variables (which we set in docker-compose.yml). If not found, they default to localhost for local testing. Note the int() around REDIS_PORT: environment variables are always strings.
      • get_db_connection(): Establishes a connection to PostgreSQL using the DB_URL.
      • get_redis_client(): Establishes a connection to Redis. decode_responses=True makes the client return Python strings instead of raw bytes.
      • init_db(): This function connects to the database and creates a messages table if it doesn’t already exist. We call this using with app.app_context(): init_db() to ensure it runs when the Flask app starts up.
      • home() route:
        • Gets Redis and DB connections.
        • Message Submission (POST): If a message is submitted, it’s inserted into the messages table in PostgreSQL. Crucially, after inserting, we r.delete('all_messages') to invalidate the cache, ensuring the next GET request fetches fresh data from the DB. We close the cursor and connection before redirecting.
        • Message Retrieval (GET): It first tries to fetch all_messages from Redis. If found, it deserializes the cached JSON. If not, it queries PostgreSQL, formats each row’s timestamp as a string so the result is JSON-serializable, and stores it in Redis with a 60-second expiration (r.setex) before rendering. This is the classic cache-aside pattern. We serialize with json.dumps/json.loads rather than eval, which would be both unsafe and broken here (eval cannot reconstruct the datetime objects in the raw rows).
        • Redis Counter: r.incr('page_views_counter') atomically increments a counter in Redis, perfect for simple, fast metrics.
        • Closes DB connections and cursors (good practice!).
        • The html_template now displays the page view count and lists messages from the database.

Step 5: Building and Running Our Multi-Service Application

We’ve got all the pieces! Now, let’s bring them to life with Docker Compose.

  1. Build the Images: Open your terminal in the docker-db-cache-app directory (where docker-compose.yml is located) and run:

    docker compose build
    
    • Explanation: This command tells Docker Compose to build the image for our web service. Since we updated requirements.txt, it will fetch and install psycopg2-binary and redis. The postgres and redis images don’t need building; Compose pulls them automatically the first time you start the stack.
  2. Start the Services: Once the build is complete, start all services:

    docker compose up -d
    
    • Explanation:
      • docker compose up: Starts all services defined in docker-compose.yml.
      • -d: Runs the containers in “detached” mode (in the background), so your terminal is free.
  3. Verify Services are Running: You can check the status of your containers:

    docker compose ps
    

    You should see web, db, and redis containers listed with Up status.

  4. Access the Application: Open your web browser and navigate to http://localhost:5000.

    • You should see your Flask application.
    • Try submitting a message. It will be stored in PostgreSQL.
    • Refresh the page. Notice the page view counter incrementing (thanks to Redis!).
    • Submit a few messages. Refresh. The messages should appear.
    • Check the web container’s logs with docker compose logs web. You’ll see lines like “Messages loaded from PostgreSQL database and cached!” or “Messages loaded from Redis cache!”, telling you whether each request was served from the cache or from the database. (If you had started the stack without -d, these lines would stream straight to your terminal.)

    Go ahead, play around with it! Add messages, refresh, and see how the caching works.

  5. Stop and Clean Up: When you’re done, stop the services:

    docker compose down
    
    • Explanation: This stops and removes the containers and the default network created by Docker Compose.

    If you want to remove the database data volume as well (e.g., to start fresh), you can use:

    docker compose down --volumes
    
    • Explanation: --volumes (or -v) will also remove the named volumes defined in your docker-compose.yml, including db-data. Be careful with this in production!

Mini-Challenge: Enhance the Caching

You’ve successfully built a multi-service application! Now, let’s put your understanding of caching to the test.

Challenge: Add a new endpoint to our Flask application, /status, that returns the current timestamp. However, instead of generating a new timestamp every time, cache this timestamp in Redis for 10 seconds. If someone accesses /status within 10 seconds of the last request, they should get the cached timestamp. After 10 seconds, a new timestamp should be generated and cached.

Hint:

  • You’ll need to add a new route @app.route('/status') in app.py.
  • Use r.get() to check if the timestamp is in Redis.
  • If not, use datetime.datetime.now().strftime(...) to get the current timestamp and r.setex('status_timestamp', 10, new_timestamp_string) to cache it for 10 seconds.
  • Remember to import datetime.

What to Observe/Learn:

  • How to implement a simple time-based cache for an endpoint.
  • The difference in response when the cache is hit versus when it’s missed.
  • The power of Redis for quick, temporary data storage.

Give it a shot! If you get stuck, that’s perfectly normal. The process of debugging and problem-solving is a huge part of learning.

Common Pitfalls & Troubleshooting

Working with multi-service applications can introduce new challenges. Here are a few common issues and how to tackle them:

  1. “Can’t connect to database” / “Connection refused” errors:

    • Check depends_on: Ensure your web service has depends_on: - db (and - redis). While this doesn’t guarantee readiness, it ensures the db container starts first.
    • Database Readiness: Databases take time to initialize. Your application might try to connect before PostgreSQL is fully ready to accept connections. In a real-world app, you’d use a health check or a retry mechanism in your application code (e.g., a loop that tries to connect several times with a delay). For now, a simple restart of the web service (docker compose restart web) might resolve it if the db service was just slow to start.
    • Environment Variables: Double-check DATABASE_URL in your web service and POSTGRES_DB, POSTGRES_USER, POSTGRES_PASSWORD in your db service in docker-compose.yml. Even a typo can prevent connection. Ensure the hostname in DATABASE_URL is db (the service name), not localhost.
    • Ports: Verify that PostgreSQL is listening on its default port (5432) and your DATABASE_URL specifies it correctly.
  2. “Volume permission denied” or “error creating volume”:

    • This can happen if Docker doesn’t have the necessary permissions to create or write to the named volume.
    • Ensure your Docker Desktop (or Docker daemon) is running with appropriate permissions.
    • On Linux, sometimes sudo is required for docker compose commands if your user isn’t in the docker group.
  3. Changes not reflecting after docker compose up:

    • If you change your Dockerfile or requirements.txt, you must rebuild the image: docker compose build. Then, docker compose up -d to restart with the new image.
    • If you change app.py, remember that COPY . . bakes your code into the image at build time, so the running container won’t pick up edits to files on your host. Rebuild and restart in one step with docker compose up -d --build. (Bind mounts, which map your source directory into the container for live editing, are a handy development-time alternative.)
  4. “Service ‘db’ (or ‘redis’) depends on service ‘web’ which is undefined”:

    • This indicates a typo in your docker-compose.yml or an incorrect indentation. YAML is very sensitive to whitespace! Double-check your service names and ensure they are correctly nested under services:.

Remember, the docker compose logs [service_name] command is your best friend for debugging. For example, docker compose logs web will show you what’s happening inside your Flask application container.
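The retry mechanism mentioned above can be sketched as a small helper. This version is deliberately generic and illustrative, not code from our app: it accepts any zero-argument connect function, for example a lambda wrapping psycopg2.connect, and keeps calling it until it succeeds or runs out of attempts:

```python
import time

def connect_with_retries(connect, attempts=5, delay_seconds=2):
    """Call connect() until it succeeds or we run out of attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception as exc:
            if attempt == attempts:
                raise  # out of attempts: surface the real error
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay_seconds}s...")
            time.sleep(delay_seconds)

# In our app, usage would look something like:
#   conn = connect_with_retries(lambda: psycopg2.connect(DB_URL))
```

Dropping something like this into init_db() makes the web service tolerant of a database that takes a few seconds to come up, without needing any changes to docker-compose.yml.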

Summary: Orchestrating a Complete Application Stack

Fantastic work! You’ve just built a robust, multi-service application using Docker Compose, integrating a database and a caching layer. This is a significant step towards understanding real-world application deployments.

Here are the key takeaways from this chapter:

  • Multi-Service Power: Docker Compose is invaluable for defining and running interconnected services like web apps, databases, and caches in a single, coherent environment.
  • Data Persistence with Volumes: Named volumes are crucial for ensuring that your database data (and other critical data) persists across container lifecycles.
  • Performance with Caching: Integrating a caching service like Redis significantly boosts application performance by reducing database load and speeding up data retrieval.
  • Seamless Networking: Docker Compose automatically sets up a network, allowing services to communicate with each other using their service names as hostnames.
  • Environment Variables for Configuration: Using environment variables (e.g., DATABASE_URL, REDIS_HOST) is a clean and flexible way to configure your application for different environments.
  • Incremental Development: We built our application step-by-step, adding complexity gradually, which is a great pattern for any development project.

You’re now well-equipped to build more sophisticated applications that handle data persistence and performance. In the next chapter, we’ll dive deeper into more advanced Docker Compose features, including custom networks, health checks, and perhaps even some scaling! Keep up the great work!