Introduction: Automating Your Workflow with Docker and CI/CD
Welcome back, future Docker master! In our journey so far, you’ve learned to containerize applications, manage multiple services with Compose, and understand the power of isolated environments. Now, it’s time to put those skills to work on a concept that truly revolutionizes software development: Continuous Integration/Continuous Delivery (CI/CD).
CI/CD is all about automating the process of building, testing, and deploying your code. It helps catch bugs earlier, ensures consistent quality, and speeds up your development cycle. While full-fledged CI/CD systems like GitHub Actions or GitLab CI can be complex, this chapter will introduce you to the core principles by building a simplified CI pipeline right on your local machine, powered entirely by Docker. You’ll see how Docker’s consistent environments are a perfect fit for ensuring your code builds and tests the same way, every time.
By the end of this chapter, you’ll understand the basics of CI/CD, appreciate why Docker is indispensable for it, and have built a small, functional pipeline to build and test a simple application. To get the most out of this chapter, make sure you’re comfortable with Dockerfiles, building images, and running containers, as covered in previous chapters. Let’s make your development workflow smarter, not harder!
Core Concepts: What is CI/CD, and Why Docker?
Before we jump into the code, let’s unpack what CI/CD means in simple terms and why Docker is such a game-changer here.
What is CI/CD (Simplified)?
Imagine you’re working on a team project. Everyone writes code and eventually merges it into a main branch. Without CI/CD, this can quickly become a chaotic mess:
- “It works on my machine!” becomes a common cry.
- Bugs might only be found days later, making them harder to fix.
- Deployments are manual, slow, and error-prone.
CI/CD solves this by automating key steps:
- Continuous Integration (CI): Every time a developer pushes new code, an automated system (our “CI pipeline”) kicks in. It fetches the latest code, builds the application, and runs automated tests. If anything fails, the developers are immediately notified. This ensures that the main codebase is always in a working, tested state.
- Continuous Delivery (CD): Once the code passes all CI checks, Continuous Delivery automates the process of preparing it for release. This often means deploying the application to a staging environment where further manual or automated quality checks can happen.
- Continuous Deployment: This takes CD a step further by automatically deploying every successful change to production, without human intervention.
For this chapter, we’ll focus primarily on the Continuous Integration aspect – automating the build and test phases using Docker.
Why Docker for CI/CD? The Consistency Advantage
“It works on my machine!” is a classic developer lament. Docker fundamentally eliminates this problem in a CI/CD context.
Think about it:
- Environment Consistency: Your local development environment often differs slightly from your CI server, which might differ from your staging server, and so on. Docker provides a consistent, isolated environment. The exact same `Dockerfile` used to build your application locally can be used by your CI server to build and test it. This guarantees that if it works in one Docker container, it will work in another identical one.
- Isolation: Each build and test run can happen in a fresh, clean Docker container, isolated from previous runs and other processes on the CI server. This prevents “pollution” from old dependencies or conflicting configurations.
- Reproducibility: You can always reproduce the exact environment where your code was built and tested simply by running the same Docker image. This is invaluable for debugging and auditing.
- Dependency Management: Your `Dockerfile` explicitly lists all dependencies. The CI pipeline doesn’t need to worry about installing specific versions of Python, Node.js, or database clients on the host machine; Docker handles it all within the container.
- Speed (with caching): Docker’s layered filesystem and build cache can significantly speed up CI builds. If a layer (like installing dependencies) hasn’t changed, Docker can reuse the cached layer, saving valuable time.
In essence, Docker acts as a portable, self-contained “mini-machine” for your code, ensuring that your CI/CD pipeline always operates in a predictable and reliable environment.
Step-by-Step Implementation: Building Our Simplified CI Pipeline
Let’s get our hands dirty and build a basic CI workflow. We’ll create a simple Python Flask application, write a tiny test for it, and then set up Dockerfiles and a script to automate building and testing.
Step 1: Project Setup - Our Simple Application
First, let’s create a new directory for our project.
1. Create Project Directory: Open your terminal and create a new folder:

   ```bash
   mkdir docker-ci-project
   cd docker-ci-project
   ```

2. Create a Flask Application (`app.py`): We’ll create a very basic Flask web server. In your `docker-ci-project` directory, create a file named `app.py` and add the following code:

   ```python
   # app.py
   from flask import Flask

   app = Flask(__name__)

   @app.route('/')
   def hello_world():
       return 'Hello, Docker CI! This is version 1.0!'

   if __name__ == '__main__':
       app.run(host='0.0.0.0', port=5000)
   ```

   - Explanation: This is a standard Flask application. It initializes a Flask app, defines a single route `/` that returns “Hello, Docker CI! This is version 1.0!”, and runs the app on `0.0.0.0:5000` when executed directly.

3. Create `requirements.txt`: Flask has dependencies, so we need a `requirements.txt` file. Create this file in the same directory:

   ```text
   # requirements.txt
   Flask==3.0.3
   ```

   - Explanation: This file lists the Python packages our application needs. We’re pinning to `Flask==3.0.3`, a recent stable release from the 3.0.x series. Pinning versions is a best practice for reproducibility!

4. Create a Simple Test File (`test_app.py`): We need a way to test our application. For simplicity, we’ll use Python’s built-in `unittest` module, but in a real project, you’d use a more robust framework like `pytest`. Create `test_app.py`:

   ```python
   # test_app.py
   import unittest
   from app import app

   class TestApp(unittest.TestCase):
       def setUp(self):
           # Set up a test client for the Flask app
           self.app = app.test_client()
           self.app.testing = True

       def test_hello_world_route(self):
           # Send a GET request to the root URL
           response = self.app.get('/')
           # Check if the response status code is 200 OK
           self.assertEqual(response.status_code, 200)
           # Check if the response data contains our expected message
           self.assertIn(b'Hello, Docker CI!', response.data)
           self.assertIn(b'version 1.0', response.data)

       def test_another_check(self):
           # A placeholder for another test
           self.assertTrue(True)

   if __name__ == '__main__':
       unittest.main()
   ```

   - Explanation: This file contains a basic test class. `setUp` creates a test client for our Flask app. `test_hello_world_route` sends a request to `/` and asserts that we get a 200 status code and the expected “Hello, Docker CI!” message in the response. `test_another_check` is just a placeholder to show multiple tests.
Your project directory should now look like this:
docker-ci-project/
├── app.py
├── requirements.txt
└── test_app.py
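Before containerizing anything, it helps to see how `unittest` signals success, because the CI pipeline we build will rely on it: `python -m unittest` exits with code 0 when every test passes and non-zero otherwise. A minimal, stdlib-only sketch (no Flask required) that runs a suite programmatically and shows where that exit code comes from:

```python
import unittest

class TestDemo(unittest.TestCase):
    def test_passes(self):
        self.assertEqual(1 + 1, 2)

# Build and run the suite programmatically instead of via 'python -m unittest'
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDemo)
result = unittest.TextTestRunner(verbosity=0).run(suite)

# wasSuccessful() is what determines the exit code of 'python -m unittest'
print("exit code would be:", 0 if result.wasSuccessful() else 1)
```

Running the same suite from the command line with `python -m unittest` produces that exit code directly, which is what shell checks like `$?` inspect later in this chapter.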
Step 2: Dockerfile for the Application Image
Now, let’s create a Dockerfile to containerize our Flask application. This will be the image our CI pipeline builds.
Create a file named Dockerfile (no extension) in your docker-ci-project directory:
```dockerfile
# Dockerfile

# Use a lightweight official Python image as our base.
# Python 3.12 is a recent, stable release series.
# 'slim-bookworm' provides a minimal Debian environment.
FROM python:3.12-slim-bookworm

# Set the working directory inside the container
WORKDIR /app

# Copy the requirements file first to leverage Docker's build cache.
# If requirements.txt doesn't change, this layer won't be rebuilt.
COPY requirements.txt .

# Install dependencies.
# Using --no-cache-dir to keep the image size down.
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of our application code into the container
COPY . .

# Expose the port our Flask app will run on
EXPOSE 5000

# Define the command to run our application when the container starts
CMD ["python", "app.py"]
```
- Explanation:
  - `FROM python:3.12-slim-bookworm`: We’re using a specific, stable version of Python (3.12) as our base image. `slim-bookworm` is a minimal Debian-based image, which is great for keeping image sizes small. Always specify versions!
  - `WORKDIR /app`: All subsequent commands will execute inside the `/app` directory within the container.
  - `COPY requirements.txt .`: We copy only `requirements.txt` first, so the dependency-install layer can be cached.
  - `RUN pip install --no-cache-dir -r requirements.txt`: We install the Python dependencies. `--no-cache-dir` ensures `pip` doesn’t store downloaded packages, further reducing image size.
  - `COPY . .`: Now we copy all other files from our current directory (where the Dockerfile is) into the `/app` directory in the container.
  - `EXPOSE 5000`: This informs Docker that the container listens on port 5000 at runtime. It’s documentation, not a firewall rule.
  - `CMD ["python", "app.py"]`: This is the default command that will run when a container is started from this image. It launches our Flask application.
Step 3: Building the Application Image (Our First CI Step)
Now, let’s build this application image. This is the first step a CI pipeline would typically perform.
In your terminal, from the docker-ci-project directory:
```bash
docker build -t my-flask-app:latest .
```
- Explanation:
  - `docker build`: The command to build a Docker image.
  - `-t my-flask-app:latest`: We’re tagging our image with the name `my-flask-app` and the tag `latest`. You could use version numbers like `my-flask-app:1.0.0` for better version control.
  - `.`: This specifies the build context – meaning Docker should look for the `Dockerfile` and other necessary files in the current directory.
You should see Docker building the image layer by layer. If successful, you’ll have `my-flask-app:latest` in your local image store. You can verify with `docker images`.
Step 4: Running Tests in a Docker Container (Our Second CI Step)
This is a crucial part of our CI. We want to run our tests inside a Docker container, using the same environment as our application, without actually running the Flask server itself.
We have a few options here. For maximum clarity in a CI context, it’s often beneficial to have a separate image specifically for running tests, especially if tests require different dependencies or setup.
1. Create `Dockerfile.test`: Create a new file named `Dockerfile.test` in your `docker-ci-project` directory:

   ```dockerfile
   # Dockerfile.test

   # Use the same base image as our application for consistency
   FROM python:3.12-slim-bookworm

   # Set the working directory
   WORKDIR /app

   # Copy the requirements and install them, just like the app image
   COPY requirements.txt .
   RUN pip install --no-cache-dir -r requirements.txt

   # Copy our application code and the test file
   COPY app.py .
   COPY test_app.py .

   # Define the command to run our tests
   # We use 'python -m unittest' to discover and run tests
   CMD ["python", "-m", "unittest", "test_app.py"]
   ```

   - Explanation:
     - `FROM python:3.12-slim-bookworm`: Again, consistency is key!
     - `WORKDIR /app`, `COPY requirements.txt .`, `RUN pip install ...`: Same setup as the app image.
     - `COPY app.py .` and `COPY test_app.py .`: We need both the application code (because `test_app.py` imports `app`) and the test file itself.
     - `CMD ["python", "-m", "unittest", "test_app.py"]`: This is the command that runs our test script. `python -m unittest` is the standard way to run `unittest` modules.

2. Build the Test Image: Now, build this test-specific image:

   ```bash
   docker build -f Dockerfile.test -t my-flask-app-test:latest .
   ```

   - Explanation:
     - `-f Dockerfile.test`: This tells Docker to use `Dockerfile.test` instead of the default `Dockerfile`.
     - `-t my-flask-app-test:latest`: We give this image a distinct tag.

3. Run the Tests: Finally, run a container from our test image. This will execute the `CMD` we defined, which runs our tests.

   ```bash
   docker run my-flask-app-test:latest
   ```

   You should see output similar to this, indicating your tests passed:

   ```text
   ..
   ----------------------------------------------------------------------
   Ran 2 tests in X.XXXs

   OK
   ```

   - Explanation: The `docker run` command starts a container from `my-flask-app-test:latest`. Since we defined `CMD ["python", "-m", "unittest", "test_app.py"]` in `Dockerfile.test`, that command is executed. The `OK` at the end means our tests passed!
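The reason `docker run` can gate a pipeline at all is that the container’s exit status is the exit status of its `CMD`, and `python -m unittest` exits non-zero when any test fails. You can observe that exit-code behavior without Docker; a small sketch (the temp file and module name are illustrative) that writes a throwaway test module and runs it as a subprocess, the same way the container runs ours:

```python
import os
import subprocess
import sys
import tempfile

# A throwaway test module with one passing test, so unittest exits with 0
test_source = """
import unittest

class TestOk(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)
"""

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "test_tmp.py")
    with open(path, "w") as f:
        f.write(test_source)

    # Same style of invocation our Dockerfile.test uses as its CMD
    proc = subprocess.run(
        [sys.executable, "-m", "unittest", "test_tmp"],
        cwd=tmp,
        capture_output=True,
        text=True,
    )

print("unittest exit code:", proc.returncode)  # 0 means all tests passed
```

Change `assertTrue(True)` to `assertTrue(False)` in the snippet and the reported exit code becomes non-zero — exactly the signal a CI script checks.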
Step 5: Automating with a Simple CI Script
Running two docker build and one docker run command manually isn’t very “automated.” In a real CI system, these commands would be part of a script or configuration file. Let’s create a simple shell script to tie it all together.
Create a file named ci.sh in your docker-ci-project directory:
```bash
#!/bin/bash

echo "--- Starting Simplified Docker CI Pipeline ---"

# --- Step 1: Build the Application Image ---
echo "Building application image: my-flask-app:latest"
docker build -t my-flask-app:latest .

# Check if the build command was successful (exit code 0)
if [ $? -ne 0 ]; then
    echo "ERROR: Application image build failed!"
    exit 1 # Exit the script with an error code
fi
echo "Application image built successfully."

# --- Step 2: Build the Test Image ---
echo "Building test image: my-flask-app-test:latest"
docker build -f Dockerfile.test -t my-flask-app-test:latest .
if [ $? -ne 0 ]; then
    echo "ERROR: Test image build failed!"
    exit 1
fi
echo "Test image built successfully."

# --- Step 3: Run Tests ---
echo "Running tests in container..."
docker run --rm my-flask-app-test:latest
if [ $? -ne 0 ]; then
    echo "ERROR: Tests failed!"
    exit 1
fi
echo "Tests passed! CI pipeline succeeded."

echo "--- Simplified Docker CI Pipeline Finished ---"
```
- Explanation:
  - `#!/bin/bash`: This is a shebang line, telling the system to execute the script with `bash`.
  - `echo "..."`: We’re adding informative messages so we know what’s happening.
  - `docker build ...`: These are the same build commands we ran manually.
  - `if [ $? -ne 0 ]; then ... fi`: This is crucial for CI! `$?` holds the exit code of the last executed command. If it’s not `0` (which indicates success), we print an error and `exit 1` to stop the script, signaling a failure. This mimics how a real CI system would mark a build as failed.
  - `docker run --rm my-flask-app-test:latest`: We added `--rm` here. This automatically removes the container after it exits, keeping your system clean.
Now, make the script executable and run it:
```bash
chmod +x ci.sh
./ci.sh
```
You should see the entire process unfold in your terminal, culminating in “Tests passed! CI pipeline succeeded.” This is your very own, simplified Docker-powered CI pipeline!
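If you’d rather drive the pipeline from Python than shell, the same stop-on-first-failure logic can be sketched with the standard library’s `subprocess` module. The `docker` commands below mirror `ci.sh`; this is an illustrative alternative, not something the chapter requires, and the `run_step`/`run_pipeline` helpers are names invented here:

```python
import subprocess
import sys

def run_step(name, cmd):
    """Run one pipeline step; return True if it exited with code 0."""
    print(f"--- {name} ---")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"ERROR: step '{name}' failed (exit code {result.returncode})")
    return result.returncode == 0

def run_pipeline(steps):
    """Run steps in order, stopping at the first failure (the 'exit 1' of ci.sh)."""
    for name, cmd in steps:
        if not run_step(name, cmd):
            return 1
    print("All steps succeeded.")
    return 0

# The real pipeline would be driven like this (requires Docker, so left commented):
# docker_steps = [
#     ("Build app image", ["docker", "build", "-t", "my-flask-app:latest", "."]),
#     ("Build test image", ["docker", "build", "-f", "Dockerfile.test",
#                           "-t", "my-flask-app-test:latest", "."]),
#     ("Run tests", ["docker", "run", "--rm", "my-flask-app-test:latest"]),
# ]
# sys.exit(run_pipeline(docker_steps))

# Quick self-check with a harmless command in place of docker:
exit_code = run_pipeline([("sanity check", [sys.executable, "-c", "print('ok')"])])
print("pipeline exit code:", exit_code)
```

The design point is the same as in the shell script: every step’s exit code is checked, and the first non-zero code aborts the run.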
Step 6 (Optional): Pushing to a Registry (Simplified CD Preparation)
While this chapter focuses on CI, a common next step after successful tests is to push your application image to a Docker Registry (like Docker Hub, Amazon ECR, GitLab Registry, etc.). This makes the image available for deployment to other environments (staging, production).
First, ensure you are logged into Docker Hub (or your chosen registry):
```bash
docker login
```
Then, you’d typically tag your application image with your Docker Hub username and push it:
```bash
# Replace 'your-dockerhub-username' with your actual username
docker tag my-flask-app:latest your-dockerhub-username/my-flask-app:1.0
docker push your-dockerhub-username/my-flask-app:1.0
```
You could add these commands to your ci.sh script (after successful tests) to automate the “delivery” part.
Mini-Challenge: Break the Build!
Now that you have a working CI pipeline, let’s experience the joy (and pain!) of a failing build. This is how CI helps you catch issues quickly.
Challenge:
Modify either app.py or test_app.py to intentionally introduce a bug or a failing test. Then, run your ci.sh script and observe how the pipeline fails. Finally, fix the issue and run the script again to confirm it passes.
Hint:
- To break `app.py`: Introduce a syntax error, e.g., change `app = Flask(__name__)` to `app = Flask__name__)`. The image will still build (the code isn’t executed during the build), but the tests will fail as soon as `test_app.py` tries to import `app`.
- To break `test_app.py`: Change an assertion to expect the wrong value, e.g., `self.assertIn(b'Wrong Message!', response.data)` or `self.assertEqual(response.status_code, 500)`. This will cause the tests to fail.
What to Observe/Learn:
Pay close attention to the output of ci.sh. Notice how the script stops immediately upon encountering an error (either a build error or a test failure) and explicitly tells you what went wrong. This immediate feedback loop is the core benefit of CI!
Common Pitfalls & Troubleshooting
Even with a simplified setup, things can sometimes go wrong. Here are a few common issues and how to approach them:
- “No such file or directory” during `docker build`:
  - Cause: You’re trying to `COPY` a file that doesn’t exist in the build context, or your `WORKDIR` is incorrect.
  - Fix: Double-check the `COPY` commands in your `Dockerfile` and ensure the files exist in the directory where you’re running `docker build`. Also, verify your `WORKDIR` is set correctly before the `COPY` commands.
- `pip install` errors:
  - Cause: Typo in `requirements.txt`, incorrect package name, or network issues preventing `pip` from reaching PyPI.
  - Fix: Carefully check `requirements.txt`. Try running `pip install -r requirements.txt` locally to confirm dependencies can be installed. Ensure your Docker host has internet access.
- Tests not running or failing unexpectedly:
  - Cause: Incorrect `CMD` in `Dockerfile.test`, test file not copied, or the application itself has a bug.
  - Fix:
    - Verify the `CMD` in `Dockerfile.test` is correct (`CMD ["python", "-m", "unittest", "test_app.py"]`).
    - Confirm `app.py` and `test_app.py` are copied into the test image (`COPY . .` or explicit `COPY` commands).
    - Run `docker run -it my-flask-app-test:latest bash` to override the image’s `CMD` and get an interactive shell inside the container. From there, you can navigate to `/app` and run `python -m unittest test_app.py` manually to debug the test execution directly.
- `ci.sh` script not executable:
  - Cause: You forgot to make the script executable.
  - Fix: Run `chmod +x ci.sh` before executing it with `./ci.sh`.
- Docker Desktop not running:
  - Cause: Docker Desktop might not be running or might be in a bad state.
  - Fix: Ensure the Docker Desktop application is open and running in your system tray/menu bar. If issues persist, try restarting Docker Desktop or checking its diagnostics. Refer to the official Docker Desktop documentation for your OS: https://docs.docker.com/desktop/
Summary: Your First Steps into Automated Workflows
Congratulations! You’ve just built a simplified but functional CI pipeline using Docker. This chapter covered some significant ground:
- Understanding CI/CD: You learned that Continuous Integration is about automating builds and tests to ensure your codebase is always healthy, and how it leads to faster, more reliable development.
- Docker’s Role in CI/CD: You saw firsthand how Docker provides consistent, isolated, and reproducible environments, which are absolutely critical for reliable CI/CD pipelines. “Works on my machine” becomes “Works in my Docker container, therefore it works everywhere.”
- Practical Implementation: You set up a simple Flask application, created a test suite, crafted dedicated Dockerfiles for building the application and running tests, and finally, automated the entire process with a shell script.
- Troubleshooting: You’re now equipped to diagnose common issues that might arise in such a pipeline.
This local pipeline is a fantastic stepping stone. In the real world, you’d integrate these Docker commands into dedicated CI/CD platforms like GitHub Actions, GitLab CI, Jenkins, or CircleCI. These platforms provide more advanced features like parallel test execution, deployment to various cloud providers, and sophisticated reporting.
But the fundamental concepts you’ve mastered here – containerizing your build and test environments – remain exactly the same. You’ve laid a strong foundation for building robust, automated software delivery pipelines.
What’s Next? In the next chapter, we’ll explore even more advanced Docker networking scenarios, diving deeper into how containers communicate in complex multi-service architectures. Get ready to connect everything!