Introduction: Your Docker Image Recipe Book

Welcome back, future Docker master! In our previous chapters, you learned the basics of running Docker containers from existing images. You pulled images, ran them, and even explored their insides a bit. That’s a fantastic start! But what if you need to run your own custom application? What if no existing image perfectly fits your needs?

That’s where this chapter comes in! Today, we’re diving into the heart of Docker customization: Dockerfiles. Think of a Dockerfile as a detailed recipe for baking your very own Docker image. It’s a text file that contains all the instructions Docker needs to assemble an image, layer by layer. By the end of this chapter, you’ll not only understand what Dockerfiles are but also how to write one to package your own applications into pristine, reproducible Docker images.

This is a crucial step towards mastering Docker for production environments. Being able to craft efficient and secure custom images is a fundamental skill. So, get ready to roll up your sleeves – we’re about to get hands-on!

Core Concepts: Understanding the Dockerfile Blueprint

Before we start writing code, let’s understand the core ideas behind a Dockerfile.

What is a Dockerfile? (The Recipe Analogy)

Imagine you want to bake a cake. You wouldn’t just throw ingredients into a bowl randomly, right? You’d follow a recipe: “First, preheat oven. Then, mix flour and sugar. Add eggs…”

A Dockerfile is exactly like that recipe, but for a Docker image. It’s a plain text file (usually named Dockerfile, without any file extension) that contains a series of instructions. Each instruction creates a new “layer” in your Docker image. When you “build” an image from a Dockerfile, Docker executes these instructions sequentially, creating an immutable snapshot of your application and its environment.

Why Use Dockerfiles?

  1. Reproducibility: A Dockerfile ensures that anyone who builds your image will get the exact same environment and application, every single time. No more “it works on my machine!”
  2. Automation: You can automate the entire build process, integrating it into your Continuous Integration/Continuous Deployment (CI/CD) pipelines.
  3. Version Control: Since it’s a plain text file, you can store your Dockerfile in version control systems like Git, tracking changes and collaborating with ease.
  4. Transparency: The Dockerfile clearly shows what’s inside your image and how it was built, making it easier to understand, audit, and debug.

Key Dockerfile Instructions (The Recipe Steps)

Dockerfiles use specific keywords (instructions) to tell Docker what to do. Let’s look at the most common ones you’ll encounter and use today:

  • FROM: This is the first instruction in nearly every Dockerfile (only ARG and comments may precede it). It specifies the base image your image will be built upon. Think of it as starting your cake recipe with “Take one pre-made vanilla cake base.” Using a minimal, official base image is a best practice for security and efficiency.

    • Example: FROM python:3.11-slim-bookworm (We’ll use this today!)
  • WORKDIR: Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, or ADD instructions that follow it. It’s like saying “All subsequent kitchen operations will happen on this counter space.”

    • Example: WORKDIR /app
  • COPY: Copies files or directories from your local machine (the “build context” – more on this in a bit) into the Docker image.

    • Example: COPY requirements.txt . (Copies requirements.txt into the current WORKDIR inside the image).
  • RUN: Executes any command during the image build process. This is where you’d install software, create directories, or compile code. Each RUN instruction creates a new layer.

    • Example: RUN pip install -r requirements.txt
  • EXPOSE: Informs Docker that the container will listen on the specified network ports at runtime. This is purely documentation; it doesn’t actually publish the port. It’s like a note on the recipe card saying “this cake will be handed out through the kitchen window” – you still need to actually open the window (map the port with -p) when you run the container.

    • Example: EXPOSE 5000
  • CMD: Provides the default command and arguments for an executing container. Only one CMD takes effect per Dockerfile – if you write several, only the last one counts. If you specify a command when running docker run, it overrides the CMD instruction.

    • Example: CMD ["python", "app.py"]
  • ENTRYPOINT: Similar to CMD, but it sets the primary command that will always be executed when the container starts. CMD then provides default arguments to this ENTRYPOINT. It’s often used when you want your image to behave like an executable. We’ll stick to CMD for now for simplicity, but it’s good to know ENTRYPOINT exists for more advanced scenarios.

  • ENV: Sets environment variables. These variables are available to subsequent instructions in the Dockerfile and also to the running container.

    • Example: ENV MY_VARIABLE=hello
  • ARG: Defines build-time variables that users can pass to the builder with the docker build --build-arg <varname>=<value> command. These variables are not available in the running container.

  • LABEL: Adds metadata to an image, such as author information, version, or license.

  • USER: Sets the user name or UID to use when running the image and for any RUN, CMD, and ENTRYPOINT instructions that follow it. Best practice: Always try to run your containers as a non-root user for security reasons. We’ll touch on this more in later chapters, but it’s good to be aware of now.
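To see how these less common instructions fit together, here is a hypothetical Dockerfile fragment (the variable names, label, and user are illustrative, not part of the app we build below):

```dockerfile
ARG APP_VERSION=dev                 # build-time only: docker build --build-arg APP_VERSION=1.0
LABEL org.opencontainers.image.version=${APP_VERSION}
ENV GREETING="hello"                # available both at build time and in the running container

RUN useradd --create-home appuser   # create a non-root user...
USER appuser                        # ...and run everything after this point as that user

ENTRYPOINT ["python"]               # always run python...
CMD ["app.py"]                      # ...with app.py as the default (overridable) argument
```

With this ENTRYPOINT/CMD pairing, `docker run myimage other.py` would still run python – it only swaps the argument.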

That’s a lot of new terms! Don’t worry, we’ll build our understanding step-by-step with practical examples.

Step-by-Step Implementation: Building Our First Python Flask App Image

Let’s put these concepts into practice by creating a simple Python Flask web application and packaging it into a Docker image. Our app will simply say “Hello from Docker!” when accessed via a web browser.

Step 1: Set Up Our Project Directory

First, let’s create a new directory for our project. Open your terminal or command prompt.

mkdir my-flask-app
cd my-flask-app

Great! Now you’re inside your new project folder.

Step 2: Create Our Python Flask Application (app.py)

Inside my-flask-app, create a file named app.py and add the following Python code. This is a very basic Flask web server.

# my-flask-app/app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return "Hello from Docker! I'm running a Flask app!"

if __name__ == '__main__':
    # Listen on all available network interfaces (0.0.0.0) and port 5000
    app.run(host='0.0.0.0', port=5000)

Explanation:

  • We import the Flask framework.
  • We create an instance of the Flask application.
  • The @app.route('/') decorator tells Flask to execute the hello() function when someone visits the root URL (/).
  • The hello() function simply returns a friendly string.
  • app.run(host='0.0.0.0', port=5000) makes our Flask app listen for incoming connections on port 5000 on every network interface in the container. Binding to 0.0.0.0 is crucial: Flask’s default bind address is 127.0.0.1 (loopback only), which inside a container would make the app unreachable from your host, even with a port mapping.
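The difference between 0.0.0.0 and the default 127.0.0.1 bind address can be demonstrated with nothing but Python’s standard library (a small sketch, independent of Flask itself):

```python
import socket

# Binding to 0.0.0.0 accepts connections on every network interface --
# this is what lets traffic from outside the container reach the app.
all_ifaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_ifaces.bind(("0.0.0.0", 0))        # port 0 = let the OS pick a free port
all_addr = all_ifaces.getsockname()[0]
print(all_addr)                        # 0.0.0.0

# Binding to 127.0.0.1 accepts connections from loopback only --
# inside a container, that means only processes in that same container.
loopback = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback.bind(("127.0.0.1", 0))
loop_addr = loopback.getsockname()[0]
print(loop_addr)                       # 127.0.0.1

all_ifaces.close()
loopback.close()
```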

Step 3: Define Our Python Dependencies (requirements.txt)

Our Flask app needs the flask library to run. We’ll list this dependency in a requirements.txt file, which is a standard practice in Python projects. Create a file named requirements.txt in the same directory (my-flask-app/).

# my-flask-app/requirements.txt
Flask==3.0.3

Explanation:

  • We specify Flask==3.0.3. While Flask alone would work, explicitly pinning versions (==3.0.3) is a best practice for reproducibility. This ensures that your application always uses the exact same version of the library, preventing unexpected breakages from new releases. (Flask 3.0.3 is a stable release at the time of writing; any current stable version works here.)
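Pinning scales to every dependency. A hypothetical requirements.txt for a larger service might look like this (the extra packages and versions are purely illustrative):

```text
Flask==3.0.3
requests==2.31.0
gunicorn==21.2.0
```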

Step 4: Crafting Our Dockerfile – Incrementally!

Now for the main event: creating our Dockerfile. Make sure you’re still in the my-flask-app directory. Create a file named Dockerfile (no extension!) and open it.

Part A: Starting with a Base Image (FROM)

Every Dockerfile begins with a FROM instruction. This specifies the base image our custom image will be built upon.

Add this line to your Dockerfile:

# my-flask-app/Dockerfile
FROM python:3.11-slim-bookworm

Explanation:

  • FROM python:3.11-slim-bookworm tells Docker to start with the official Python image, pinned to version 3.11, a stable and widely supported release.
  • We’re using slim-bookworm instead of just 3.11.
    • slim images are smaller and contain only the minimal packages needed for Python, reducing the attack surface and image size – a key production best practice!
    • bookworm refers to the Debian release (Debian 12) that the Python slim image is based on. Using specific Debian releases helps ensure greater stability and reproducibility.
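The same official Python image is published under several tags that trade size for convenience (sizes are rough figures and vary by release):

```dockerfile
FROM python:3.11                  # full Debian toolchain: easiest to use, but largest (~1 GB)
FROM python:3.11-slim-bookworm    # minimal Debian 12: our choice (~150 MB)
FROM python:3.11-alpine           # smallest (~50 MB), but musl libc can break some Python wheels
```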

Part B: Setting the Working Directory (WORKDIR)

Next, let’s tell Docker where our application files will live inside the container.

Add this line below FROM:

# my-flask-app/Dockerfile
FROM python:3.11-slim-bookworm
WORKDIR /app

Explanation:

  • WORKDIR /app sets /app as the default directory for all subsequent instructions in this Dockerfile. This means when we COPY files, they’ll go into /app, and when we RUN commands, they’ll execute from /app. It keeps our image organized.

Part C: Copying Dependencies and Installing Them (COPY, RUN)

It’s a best practice to copy and install dependencies before copying the rest of your application code. Why? Because Docker uses caching! If your dependencies (requirements.txt) don’t change, Docker can reuse the layer that installed them, speeding up subsequent builds.

Add these lines below WORKDIR:

# my-flask-app/Dockerfile
FROM python:3.11-slim-bookworm
WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

Explanation:

  • COPY requirements.txt .: This copies our requirements.txt file from your local my-flask-app directory (the “build context”) into the /app directory inside the image. The . refers to the current WORKDIR (/app).
  • RUN pip install --no-cache-dir -r requirements.txt: This executes the pip install command during the image build.
    • pip install -r requirements.txt reads the dependencies from requirements.txt and installs them.
    • --no-cache-dir is a crucial optimization! It tells pip not to store its downloaded packages in a cache directory. This significantly reduces the final size of your Docker image, which is vital for efficient deployments.
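The cache benefit described above comes entirely from instruction ordering; contrast these two hypothetical layouts:

```dockerfile
# Cache-friendly (what we're doing): code edits don't touch requirements.txt,
# so Docker reuses the cached pip layer on rebuilds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Cache-hostile: any code edit changes the COPY layer,
# invalidating everything after it -- pip reinstalls on every build
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
```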

Part D: Copying the Application Code (COPY)

Now that our dependencies are installed, let’s copy our actual Flask application into the image.

Add this line below the RUN instruction:

# my-flask-app/Dockerfile
FROM python:3.11-slim-bookworm
WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

Explanation:

  • COPY . .: This copies everything from your local my-flask-app directory (the build context) into the /app directory inside the image. This includes our app.py file.

Part E: Exposing the Port (EXPOSE)

Remember our Flask app listens on port 5000? Let’s document that in our Dockerfile.

Add this line:

# my-flask-app/Dockerfile
FROM python:3.11-slim-bookworm
WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000

Explanation:

  • EXPOSE 5000: This declares that the container will listen on port 5000 at runtime. Again, this is documentation for anyone using your image. It doesn’t actually open the port on your host machine or automatically map it. We’ll do that when we run the container.

Part F: Defining the Default Command (CMD)

Finally, we need to tell Docker what command to run when a container starts from our image.

Add this last line:

# my-flask-app/Dockerfile
FROM python:3.11-slim-bookworm
WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000
CMD ["python", "app.py"]

Explanation:

  • CMD ["python", "app.py"]: This specifies the default command that will be executed when a container starts from this image. In our case, it runs our app.py script using the python interpreter. The list format ["executable", "param1", "param2"] is the preferred “exec form” for CMD: Docker runs the command directly, without a /bin/sh -c wrapper, so your process becomes the container’s main process (PID 1) and receives signals – such as the stop signal from docker stop – directly.
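The two CMD forms can be contrasted directly (a sketch; this chapter uses the exec form):

```dockerfile
# Exec form: python runs as the container's main process (PID 1)
CMD ["python", "app.py"]

# Shell form: Docker wraps the command as /bin/sh -c "python app.py",
# so the shell is PID 1 and signals may not reach python cleanly
CMD python app.py
```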

Your Dockerfile should now look like this:

# my-flask-app/Dockerfile
FROM python:3.11-slim-bookworm
WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 5000
CMD ["python", "app.py"]

Save your Dockerfile!

Step 5: Building Our Docker Image

With our Dockerfile complete, it’s time to build the image. Make sure you are in the my-flask-app directory in your terminal.

docker build -t my-flask-app:1.0 .

Let’s break down this command:

  • docker build: The command to build a Docker image from a Dockerfile.
  • -t my-flask-app:1.0: This is the tag for our image.
    • my-flask-app is the image name.
    • :1.0 is the version tag. It’s a good practice to tag your images with meaningful versions. If you omit the tag, Docker defaults to :latest.
  • .: This is the build context. It tells Docker where to find the Dockerfile and any files referenced by COPY instructions. The . means “the current directory”. Docker will send all files and folders in this directory to the Docker daemon to be used during the build. This is why it’s important to keep your build context clean!

When you run this, you’ll see Docker executing each instruction in your Dockerfile, creating a new layer for each step. If a layer hasn’t changed, Docker will use its cache, which makes subsequent builds much faster!

[+] Building 14.2s (10/10) FINISHED                                                                                                     docker:default
 => [internal] load build definition from Dockerfile                                                                                                0.0s
 => => transferring dockerfile: 205B                                                                                                                0.0s
 => [internal] load .dockerignore                                                                                                                   0.0s
 => => transferring context: 2B                                                                                                                     0.0s
 => [internal] load metadata for docker.io/library/python:3.11-slim-bookworm                                                                        1.1s
 => [1/5] FROM docker.io/library/python:3.11-slim-bookworm                                                                                          3.4s
 => [internal] load build context                                                                                                                   0.0s
 => => transferring context: 612B                                                                                                                   0.0s
 => [2/5] WORKDIR /app                                                                                                                              0.1s
 => [3/5] COPY requirements.txt .                                                                                                                   0.0s
 => [4/5] RUN pip install --no-cache-dir -r requirements.txt                                                                                        8.9s
 => [5/5] COPY . .                                                                                                                                  0.0s
 => exporting to image                                                                                                                              0.3s
 => => exporting layers                                                                                                                             0.2s
 => => writing image sha256:c2e032123456...                                                                                                         0.0s
 => => naming to docker.io/library/my-flask-app:1.0                                                                                                 0.0s

(Your output will vary – timings differ, and on rebuilds you’ll see CACHED next to unchanged layers. Notice that EXPOSE and CMD don’t appear as numbered steps: they only add metadata to the image, so BuildKit doesn’t create filesystem layers for them.)

Once the build is complete, you can verify your image is available:

docker images

You should see my-flask-app with tag 1.0 in the list!

Step 6: Running Our Custom Docker Container

Now that we have our custom image, let’s run a container from it and see our Flask app in action!

docker run -p 5000:5000 my-flask-app:1.0

Let’s break this down:

  • docker run: The command to run a container.
  • -p 5000:5000: This is the port mapping (or port forwarding).
    • The first 5000 is the port on your host machine (your laptop/desktop).
    • The second 5000 is the port inside the container (where our Flask app is listening, as documented by EXPOSE 5000).
    • This command tells Docker: “When traffic comes to port 5000 on my host, forward it to port 5000 inside the my-flask-app:1.0 container.”
  • my-flask-app:1.0: The name and tag of the image we want to run.

You should see output from your Flask application in the terminal, indicating it’s running:

 * Serving Flask app 'app'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on http://0.0.0.0:5000
Press CTRL+C to quit

Now, open your web browser and navigate to http://localhost:5000.

Voila! You should see “Hello from Docker! I’m running a Flask app!” displayed in your browser.

You’ve successfully built your first custom Docker image and run a container from it! Give yourself a pat on the back – this is a major milestone!

To stop the container, go back to your terminal where it’s running and press Ctrl+C.

Mini-Challenge: Evolve Your Flask App!

You’ve got the basics down. Now, let’s make a small change and see the Dockerfile build process in action again.

Your Challenge:

  1. Change the message: Modify app.py to return a different greeting, like “Hello from Docker! This is version 2!”
  2. Add a new dependency: Let’s say our app now needs to make HTTP requests. Add the requests library to requirements.txt (pin a version, following the same practice we used for Flask).
  3. Rebuild the image: Remember, Docker images are immutable. If you change your code or dependencies, you must rebuild the image. Use a new tag, like my-flask-app:2.0.
  4. Run the new container: Start a new container from your 2.0 image, mapping the port.
  5. Verify: Check your browser to ensure the new message appears.

Hint:

  • Don’t forget to save your changes to app.py and requirements.txt before rebuilding!
  • The docker build command will be very similar, just change the tag.
  • The docker run command will also be similar, using the new tag.

What to Observe/Learn:

  • When you rebuild, notice how Docker intelligently reuses cached layers for instructions that haven’t changed (like FROM and WORKDIR). Only the layers where changes occurred (like COPY or RUN for new dependencies) will be rebuilt. This is Docker’s layer caching in action, making builds super efficient!
  • You’ll see a new image ID for your 2.0 image, confirming it’s a distinct, updated version.

Take your time, try it out, and have fun!

Common Pitfalls & Troubleshooting

Building Docker images can sometimes lead to head-scratching moments. Here are a few common issues and how to tackle them:

  1. “No such file or directory” during COPY:

    • Problem: You’re trying to COPY a file (e.g., COPY app.py .) but Docker says it can’t find app.py.
    • Reason: This almost always means the file isn’t in your build context. Remember the . at the end of docker build -t my-app .? That tells Docker to look for files in the current directory. If your Dockerfile is in my-app/ but app.py is in my-app/src/, then COPY app.py . will fail because app.py isn’t directly in the build context.
    • Solution: Ensure all paths in COPY instructions are relative to the build context (the directory you pass to docker build), or adjust your COPY instruction (e.g., COPY src/app.py .). Simplest of all: keep your Dockerfile at the root of your project and build from there.
  2. Changes not appearing in the running container:

    • Problem: You modified app.py, rebuilt the image, ran the container, but the old code is still running!
    • Reason: You likely forgot to use a new tag for your image (e.g., still using my-flask-app:1.0 instead of my-flask-app:2.0) or didn’t explicitly tell docker run to use the new tag. Docker containers are instantiated from images, so if you don’t use the updated image, you won’t see the changes.
    • Solution: Always rebuild your image after code changes, and make sure docker run refers to the new image tag. Prefer explicit version tags (like :2.0) over :latest – with :latest it’s easy to run a stale local image without noticing; docker rmi the old image if you’re unsure which one you have.
  3. Large Image Sizes:

    • Problem: Your Docker image is unexpectedly huge (hundreds of MBs or even GBs).
    • Reason:
      • Using a bloated base image: FROM ubuntu or FROM python:latest (which often includes development tools) can be large.
      • Copying unnecessary files: Your build context (.) might include temporary files, Git repositories (.git/), or development logs that don’t need to be in the final image.
      • Not cleaning up: Installing packages and not cleaning up package caches (like apt clean or pip --no-cache-dir).
    • Solution:
      • Choose slim base images: As we did with python:3.11-slim-bookworm.
      • Use .dockerignore: Create a file named .dockerignore in the same directory as your Dockerfile. It works just like .gitignore and tells Docker which files/folders not to send to the build context. For our Flask app, you might add:
        # .dockerignore
        .git
        .vscode
        __pycache__
        *.pyc
        venv/
        
        This significantly reduces the size of the build context and thus the final image.
      • Combine RUN commands: Each RUN instruction creates a new layer. Combining multiple commands into a single RUN instruction (using && to chain them) can reduce the number of layers and sometimes the overall size, especially if intermediate files are created and then deleted within the same RUN command.
  4. Port Confusion (EXPOSE vs. -p):

    • Problem: Your app is running in the container, but you can’t access it from your browser.
    • Reason: You might have EXPOSE 5000 in your Dockerfile (which is good documentation), but forgotten to use -p 5000:5000 when running docker run. Remember, EXPOSE doesn’t publish the port; -p does.
    • Solution: Always double-check your docker run -p command to ensure the host port is correctly mapped to the container port.
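As an example of the “combine RUN commands” tip above, here is a hypothetical layer that installs a system package and cleans up the apt cache in the same instruction, so the cache files never end up in any layer:

```dockerfile
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
```

If the cleanup were in a separate RUN, the cache would already be baked into the previous layer and deleting it later wouldn’t shrink the image.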

Summary: Your Image, Your Rules!

Congratulations! You’ve successfully navigated the world of Dockerfiles and built your very own custom Docker image. This is a monumental step in your Docker journey.

Here’s a quick recap of what we covered:

  • Dockerfiles as Recipes: They are plain text files that provide step-by-step instructions for building a Docker image.
  • Key Instructions: You learned about essential instructions like FROM, WORKDIR, COPY, RUN, EXPOSE, and CMD.
  • Layered Builds: Each instruction in a Dockerfile creates a new, cached layer, optimizing build times.
  • Building Images: You used docker build -t <name>:<tag> . to create your image.
  • Running Custom Containers: You launched a container from your custom image using docker run -p <host_port>:<container_port> <image_name>:<tag>.
  • Best Practices: We touched on using slim base images, pinning dependencies, pip install --no-cache-dir, and the importance of .dockerignore for smaller, more secure images.

You now have the power to package virtually any application into a Docker image, making it portable and reproducible.

What’s Next?

In the next chapter, we’ll take things up a notch. Building single-container applications is great, but real-world applications often consist of multiple services (e.g., a web app, a database, a cache). We’ll learn how to orchestrate these multi-container applications with Docker Compose, making development and deployment even smoother! Get ready to compose your first multi-service masterpiece!