Introduction
Welcome to Chapter 13! In our journey to master Apple’s native Linux container tools on macOS, we’ve explored everything from setting up your environment to building custom images and understanding networking. Now, it’s time to put all that knowledge into action!
This chapter is all about building a practical, full-stack web application. We’ll create a simple “Todo List” application, but the real star of the show will be how we containerize each piece: a PostgreSQL database, a Node.js Express backend API, and a React frontend. You’ll learn how these different services communicate when running in separate containers, how to manage persistent data for your database, and how to orchestrate their startup using the container CLI.
By the end of this project, you’ll have a solid understanding of how to use Apple’s container tools to set up a complete development environment for multi-service applications, boosting your confidence and practical skills. Get ready to build something awesome!
Prerequisites: Before we dive in, make sure you’re comfortable with:
- Using the container CLI for basic image pulling and running.
- Creating Dockerfiles to build custom images.
- Understanding container networking and volumes (Chapters 9-11).
- Basic Node.js, Express, and React concepts.
Let’s get started!
Core Concepts for Multi-Service Applications
Building a full-stack application with containers involves a few key ideas that bring all our previous learnings together.
1. The Multi-Container Mindset
When you build a full-stack application, you’re usually dealing with several distinct services: a database, a backend API, and a frontend. Instead of trying to cram them all into one giant container (which is generally a bad idea and goes against containerization principles), we’ll treat each service as its own independent container.
Why separate containers?
- Isolation: Each service runs in its own isolated environment, with its own dependencies and configurations.
- Scalability: You can scale individual services independently. If your backend is getting hammered, you can spin up more backend containers without affecting the database or frontend.
- Maintainability: Updates or changes to one service don’t necessarily require rebuilding or redeploying the others.
- Resource Management: You can allocate specific resources (CPU, memory) to each service.
2. Inter-Container Networking
If each service is in its own container, how do they talk to each other? They can’t just use localhost anymore! This is where container networks come in.
The container CLI, much like other container runtimes, allows you to create custom networks. When you connect multiple containers to the same custom network, they can communicate with each other using their container names as hostnames. This is super powerful because it abstracts away the underlying IP addresses.
Imagine a private chat room for your containers. They all join the “todo-app-network” and can then refer to each other by name, like “database” or “backend-api,” rather than needing to know specific IP addresses.
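In code, this means service addresses are built from names rather than IP addresses. Here is a tiny illustrative sketch (the `serviceUrl` helper is our own invention; the container names mirror the ones we create later in this chapter):

```javascript
// Hypothetical helper: on a shared container network, a container's name
// doubles as its DNS hostname, so a service URL is just name + port.
function serviceUrl(name, port) {
  return `http://${name}:${port}`;
}

// The backend would reach the database, and containers could reach the
// backend API, without knowing a single IP address:
console.log(serviceUrl('todo-db', 5432));      // http://todo-db:5432
console.log(serviceUrl('todo-backend', 3001)); // http://todo-backend:3001
```

If a container is recreated and gets a new IP, nothing changes for its peers: the name stays stable, which is exactly why we will use names like `todo-db` in our application config.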
3. Persistent Data with Volumes
For services like databases, simply running them in a container isn’t enough. What happens if the container stops or is deleted? All your data would be gone! This is where volumes become crucial.
A volume provides a way to persist data generated by and used by containers. It’s a special kind of directory that lives outside the container’s filesystem and can be mounted into the container. This means your database’s data files will be stored safely on your Mac’s filesystem, even if the PostgreSQL container itself is recreated.
4. Project Structure: A Monorepo Approach
For this project, we’ll adopt a monorepo structure. This means all our code (frontend, backend, database configuration) will live in a single Git repository. While not strictly necessary for containerization, it simplifies development and dependency management for small to medium-sized projects.
Here’s what our project structure will look like:
todo-fullstack-app/
├── backend/
│ ├── src/
│ ├── Dockerfile
│ └── package.json
├── frontend/
│ ├── src/
│ ├── public/
│ ├── Dockerfile
│ └── package.json
└── data/ (for PostgreSQL persistent storage)
This setup keeps everything neatly organized and easy to manage with container CLI commands.
Step-by-Step Implementation: Building Our Todo App
Let’s start building our full-stack Todo application! We’ll go service by service.
Step 1: Initialize the Project Structure
First, create the main project directory and its subdirectories:
mkdir todo-fullstack-app
cd todo-fullstack-app
mkdir backend frontend data
Now you should have:
todo-fullstack-app/
├── backend/
├── frontend/
└── data/
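As a shorthand, the same tree can be created in a single command using brace expansion, which both zsh (the macOS default shell) and bash support:

```shell
# Creates the project root plus its three subdirectories in one step
mkdir -p todo-fullstack-app/{backend,frontend,data}
```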
Step 2: Set up the PostgreSQL Database Container
Our database will be the first service. We’ll use the official PostgreSQL image.
2.1 Create a Custom Container Network
Let’s create a network that all our services will join. This allows them to communicate securely.
container network create todo-app-network
Explanation:
- container network create: Instructs the container CLI to create a new network.
- todo-app-network: The name we're giving our network. You can choose any descriptive name.
You can verify its creation with container network ls.
2.2 Run the PostgreSQL Container
Now, let’s run a PostgreSQL container, connecting it to our new network and ensuring data persistence. We’ll use PostgreSQL version 16, a recent stable release.
container run \
--name todo-db \
--network todo-app-network \
-e POSTGRES_DB=todos \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-v "$(pwd)/data:/var/lib/postgresql/data" \
-d postgres:16
Explanation:
- container run: The command to run a container.
- --name todo-db: Assigns the human-readable name todo-db to this container. This name will be crucial for other containers to connect to it over the network.
- --network todo-app-network: Connects our todo-db container to the network we just created.
- -e POSTGRES_DB=todos: Sets the POSTGRES_DB environment variable inside the container. This tells PostgreSQL to create a database named todos on startup.
- -e POSTGRES_USER=admin: Sets the database user to admin.
- -e POSTGRES_PASSWORD=password: Sets the password for the admin user. Warning: Use strong, secret passwords in production! For local development, this is fine.
- -v "$(pwd)/data:/var/lib/postgresql/data": The volume mount.
  - $(pwd)/data: The data directory we created in our todo-fullstack-app project root on your Mac. $(pwd) ensures the full, absolute path is used.
  - /var/lib/postgresql/data: The standard path where PostgreSQL stores its data inside the container.
  - The colon : separates the host path from the container path. Any changes to /var/lib/postgresql/data inside the container are reflected in your data directory on your Mac, and vice versa, ensuring persistence.
- -d: Runs the container in "detached" mode, meaning it runs in the background.
- postgres:16: Specifies the image to use, the official PostgreSQL image with tag 16.
After running this, you can check if the container is running with container ps. Give it a few moments to start up.
Step 3: Set up the Backend API (Node.js/Express)
Next, we’ll build a simple Node.js Express API that connects to our PostgreSQL database.
3.1 Create the Backend Application
Navigate into the backend directory:
cd backend
Initialize a Node.js project:
npm init -y
Install necessary packages: express for the web server and pg for PostgreSQL client.
npm install express pg
Now, create a file named src/index.js (you might need to create the src folder first) with the following content:
// backend/src/index.js
const express = require('express');
const { Pool } = require('pg');
const app = express();
const port = process.env.PORT || 3001; // Default to 3001
// Database connection pool
const pool = new Pool({
user: process.env.POSTGRES_USER || 'admin',
host: process.env.POSTGRES_HOST || 'todo-db', // Use the container name as host
database: process.env.POSTGRES_DB || 'todos',
password: process.env.POSTGRES_PASSWORD || 'password',
port: process.env.POSTGRES_PORT || 5432,
});
app.use(express.json());
// Test database connection
app.get('/api/health', async (req, res) => {
try {
const client = await pool.connect();
await client.query('SELECT 1'); // Simple query to check connection
client.release();
res.status(200).json({ status: 'ok', database: 'connected' });
} catch (err) {
console.error('Database connection error:', err.message);
res.status(500).json({ status: 'error', database: 'disconnected', message: err.message });
}
});
// Get all todos
app.get('/api/todos', async (req, res) => {
try {
const result = await pool.query('SELECT id, description, completed FROM todos ORDER BY id ASC');
res.json(result.rows);
} catch (err) {
console.error('Error fetching todos:', err.message);
res.status(500).json({ error: 'Failed to fetch todos' });
}
});
// Add a new todo
app.post('/api/todos', async (req, res) => {
const { description } = req.body;
if (!description) {
return res.status(400).json({ error: 'Description is required' });
}
try {
const result = await pool.query(
'INSERT INTO todos (description) VALUES ($1) RETURNING id, description, completed',
[description]
);
res.status(201).json(result.rows[0]);
} catch (err) {
console.error('Error adding todo:', err.message);
res.status(500).json({ error: 'Failed to add todo' });
}
});
// Update todo status
app.put('/api/todos/:id', async (req, res) => {
const { id } = req.params;
const { completed } = req.body;
if (typeof completed !== 'boolean') {
return res.status(400).json({ error: 'Completed status (boolean) is required' });
}
try {
const result = await pool.query(
'UPDATE todos SET completed = $1 WHERE id = $2 RETURNING id, description, completed',
[completed, id]
);
if (result.rows.length === 0) {
return res.status(404).json({ error: 'Todo not found' });
}
res.json(result.rows[0]);
} catch (err) {
console.error('Error updating todo:', err.message);
res.status(500).json({ error: 'Failed to update todo' });
}
});
// Initial table creation (run once)
async function createTable() {
try {
const client = await pool.connect();
await client.query(`
CREATE TABLE IF NOT EXISTS todos (
id SERIAL PRIMARY KEY,
description VARCHAR(255) NOT NULL,
completed BOOLEAN DEFAULT FALSE
);
`);
client.release();
console.log('Todos table ensured.');
} catch (err) {
console.error('Error creating todos table:', err.message);
}
}
// Start the server
app.listen(port, () => {
console.log(`Backend API running on port ${port}`);
createTable(); // Ensure table exists on startup
});
Explanation of src/index.js:
- We're creating a simple Express app with routes for health, GET /api/todos, POST /api/todos, and PUT /api/todos/:id.
- The pg client connects to PostgreSQL. Crucially, host: process.env.POSTGRES_HOST || 'todo-db' uses todo-db as the hostname. Remember, this is the name we gave our PostgreSQL container, and it works because both containers are on todo-app-network.
- Environment variables are used for database credentials, making our container flexible.
- A createTable function ensures our todos table exists in the database when the backend starts.
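One caveat: the backend container may start before PostgreSQL has finished initializing, in which case the first createTable call fails with a connection error. A simple remedy is to retry the connection a few times. The sketch below is illustrative (the withRetry name is our own, and `connect` stands in for something like pool.connect()):

```javascript
// Illustrative sketch: retry an async connection attempt a few times
// before giving up, waiting delayMs between attempts.
async function withRetry(connect, attempts = 5, delayMs = 500) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await connect(); // success: return the result
    } catch (err) {
      if (i === attempts) throw err; // out of attempts: give up
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Demo with a fake connection that fails twice before succeeding:
let calls = 0;
const flakyConnect = async () => {
  calls += 1;
  if (calls < 3) throw new Error('database not ready');
  return 'connected';
};

withRetry(flakyConnect, 5, 10).then((result) => console.log(result)); // prints "connected"
```

In the real backend you could wrap the pool.connect() call inside createTable this way, tuning the attempt count and delay to taste.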
3.2 Create the Backend Dockerfile
Now, create a Dockerfile in the backend directory:
# backend/Dockerfile
# Use a recent official Node.js LTS image as the base
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package.json and package-lock.json (if exists)
# This allows npm install to leverage Docker layer caching
COPY package*.json ./
# Install Node.js dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the port the backend server listens on
EXPOSE 3001
# Command to run the application
CMD ["node", "src/index.js"]
Explanation of Dockerfile:
- FROM node:20-alpine: We're using Node.js 20, a long-term support (LTS) release, with the lightweight Alpine Linux distribution.
- WORKDIR /app: Sets /app as the default directory for subsequent commands.
- COPY package*.json ./: Copies package.json and package-lock.json (if present) to the working directory. We do this first so the npm install layer can be cached when only application code changes.
- RUN npm install: Installs all project dependencies.
- COPY . .: Copies the rest of our backend application code into the container.
- EXPOSE 3001: Documents that the container listens on port 3001. This is informational only; it doesn't actually publish the port.
- CMD ["node", "src/index.js"]: Specifies the command to run when the container starts.
3.3 Build the Backend Image
From inside the backend directory:
container build -t todo-backend:1.0 .
Explanation:
- container build: Command to build an image.
- -t todo-backend:1.0: Tags the image with the name todo-backend and version 1.0.
- .: Uses the current directory as the build context (where the Dockerfile lives).
3.4 Run the Backend Container
Now, let’s run our backend, connecting it to our network and exposing its port.
First, navigate back to the root todo-fullstack-app directory:
cd ..
Then, run the backend container:
container run \
--name todo-backend \
--network todo-app-network \
-e POSTGRES_HOST=todo-db \
-e POSTGRES_USER=admin \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=todos \
-p 3001:3001 \
-d todo-backend:1.0
Explanation:
- --name todo-backend: Names our backend container todo-backend.
- --network todo-app-network: Connects it to the same network as the database.
- -e POSTGRES_HOST=todo-db: Crucially, tells our backend where to find the database, using its container name.
- -e ...: Passes the database credentials as environment variables to the backend container.
- -p 3001:3001: Maps port 3001 inside the container to port 3001 on your Mac host, so you can reach the backend API from your Mac's browser or curl.
- -d todo-backend:1.0: Runs the todo-backend:1.0 image in detached mode.
You can test the backend by opening your browser to http://localhost:3001/api/health. You should see {"status":"ok","database":"connected"}. Congratulations, your backend is talking to your database!
Step 4: Set up the Frontend (React/Vite)
Finally, let’s create a simple React frontend that consumes our backend API.
4.1 Create the Frontend Application
Navigate into the frontend directory:
cd frontend
Create a new React project using Vite (a fast build tool):
npm create vite@latest . -- --template react
When prompted, confirm to install create-vite and choose JavaScript or TypeScript (we’ll assume JavaScript for simplicity here, but TypeScript is also excellent!).
Now, install the dependencies:
npm install
Modify src/App.jsx to fetch and display todos. Replace the content of frontend/src/App.jsx with the following:
// frontend/src/App.jsx
import { useState, useEffect } from 'react';
import './App.css';
function App() {
const [todos, setTodos] = useState([]);
const [newTodo, setNewTodo] = useState('');
const [loading, setLoading] = useState(true);
const [error, setError] = useState(null);
// The API URL is baked in at build time via VITE_API_URL (see the frontend Dockerfile)
const API_BASE_URL = import.meta.env.VITE_API_URL || 'http://localhost:3001';
useEffect(() => {
fetchTodos();
}, []);
const fetchTodos = async () => {
setLoading(true);
setError(null);
try {
const response = await fetch(`${API_BASE_URL}/api/todos`);
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const data = await response.json();
setTodos(data);
} catch (err) {
console.error('Error fetching todos:', err);
setError('Failed to fetch todos. Is the backend running?');
} finally {
setLoading(false);
}
};
const addTodo = async (e) => {
e.preventDefault();
if (!newTodo.trim()) return;
try {
const response = await fetch(`${API_BASE_URL}/api/todos`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ description: newTodo }),
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const data = await response.json();
setTodos([...todos, data]);
setNewTodo('');
} catch (err) {
console.error('Error adding todo:', err);
setError('Failed to add todo.');
}
};
const toggleTodo = async (id, completed) => {
try {
const response = await fetch(`${API_BASE_URL}/api/todos/${id}`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ completed: !completed }),
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const data = await response.json();
setTodos(todos.map((todo) => (todo.id === id ? data : todo)));
} catch (err) {
console.error('Error toggling todo:', err);
setError('Failed to update todo.');
}
};
return (
<div className="App">
<h1>My Containerized Todo List</h1>
<form onSubmit={addTodo}>
<input
type="text"
value={newTodo}
onChange={(e) => setNewTodo(e.target.value)}
placeholder="Add a new todo"
/>
<button type="submit">Add Todo</button>
</form>
{loading && <p>Loading todos...</p>}
{error && <p style={{ color: 'red' }}>{error}</p>}
<ul>
{todos.map((todo) => (
<li key={todo.id}>
<input
type="checkbox"
checked={todo.completed}
onChange={() => toggleTodo(todo.id, todo.completed)}
/>
<span style={{ textDecoration: todo.completed ? 'line-through' : 'none' }}>
{todo.description}
</span>
</li>
))}
</ul>
</div>
);
}
export default App;
Explanation of src/App.jsx:
- This is a standard React component that manages a list of todos.
- It fetches todos from API_BASE_URL/api/todos and allows adding todos and toggling their completion status.
- Crucially, API_BASE_URL is read from import.meta.env.VITE_API_URL. Vite inlines VITE_-prefixed variables into the static bundle at build time, so the URL we pass when building the image is what the browser will use; the localhost fallback covers local development outside containers.
4.2 Create the Frontend Dockerfile
We’ll create a Dockerfile that builds our React app and then serves it using a simple static web server (like serve).
Note that we don't need to install serve locally or add it to package.json; the Dockerfile below installs it globally inside the final image.
Now, create a Dockerfile in the frontend directory:
# frontend/Dockerfile
# Stage 1: Build the React application
FROM node:20-alpine AS build
WORKDIR /app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Build the React app for production
# VITE_API_URL environment variable is crucial here!
# It will be provided during the container build process or via container run.
ARG VITE_API_URL_ARG
ENV VITE_API_URL=$VITE_API_URL_ARG
RUN npm run build
# Stage 2: Serve the built application with a lightweight web server
FROM alpine:latest
WORKDIR /app
# Install 'serve' globally for static file serving
# This needs to be installed in the final image, not just the build stage
RUN apk add --no-cache nodejs npm
RUN npm install -g serve
# Copy the built React app from the build stage
COPY --from=build /app/dist ./dist
# Expose the port the static server will listen on
# (5173 is Vite's dev-server default; we reuse it here for consistency.
# Dockerfile comments must start at the beginning of a line, so we can't
# put this note after the EXPOSE instruction itself.)
EXPOSE 5173
# Command to run the static web server
CMD ["serve", "-s", "dist", "-l", "5173"]
Explanation of Dockerfile:
- Multi-stage build: We use two FROM statements to create a smaller final image.
  - Stage 1 (build): Uses node:20-alpine to install dependencies and build the React app.
  - ARG VITE_API_URL_ARG and ENV VITE_API_URL=$VITE_API_URL_ARG: This is how we pass the backend API URL into the Vite build. ARG defines a build-time variable, and ENV exposes it to the build as an environment variable. Vite picks up VITE_-prefixed environment variables at build time.
  - RUN npm run build: Compiles the React application into static files (in the dist folder).
  - Stage 2: Uses a minimal alpine:latest image.
  - RUN apk add --no-cache nodejs npm and RUN npm install -g serve: Installs Node.js and the serve package, a simple static file server.
  - COPY --from=build /app/dist ./dist: Copies only the compiled static files from the build stage into the final, lean image.
  - EXPOSE 5173: The port serve will listen on.
  - CMD ["serve", "-s", "dist", "-l", "5173"]: Starts serve hosting the dist directory on port 5173.
4.3 Build the Frontend Image
From inside the frontend directory:
container build -t todo-frontend:1.0 \
  --build-arg VITE_API_URL_ARG=http://localhost:3001 .
Explanation:
- --build-arg VITE_API_URL_ARG=http://localhost:3001: This is crucial, and a common trip-up. The React code doesn't execute inside the frontend container; it executes in your browser on your Mac. The browser can't resolve container names like todo-backend, so the URL baked into the bundle must be the backend's host-published address, http://localhost:3001. Container-name hostnames only work for traffic that originates inside a container on todo-app-network (such as the backend talking to todo-db).
4.4 Run the Frontend Container
Navigate back to the root todo-fullstack-app directory:
cd ..
Then, run the frontend container:
container run \
--name todo-frontend \
--network todo-app-network \
-p 5173:5173 \
-d todo-frontend:1.0
Explanation:
- --name todo-frontend: Names our frontend container todo-frontend.
- --network todo-app-network: Connects it to the same network.
- -p 5173:5173: Maps port 5173 inside the container to port 5173 on your Mac host.
- -d todo-frontend:1.0: Runs the todo-frontend:1.0 image in detached mode.
Step 5: Verify the Full-Stack Application
Now, all three services should be running!
1. Check container status: Run container ps. You should see todo-db, todo-backend, and todo-frontend all listed as running.
2. Access the frontend: Open your web browser and navigate to http://localhost:5173.
You should see your “My Containerized Todo List” application! Try adding new todos, marking them complete, and refreshing the page. The data should persist because it’s stored in your PostgreSQL database, which uses a volume.
Congratulations! You’ve successfully built and deployed a full-stack application using Apple’s native Linux container tools.
Visualizing the Architecture
Let’s look at a diagram to understand how our containers interact:
Explanation of the Diagram:
- Your browser on your Mac loads the app from the todo-frontend container via localhost:5173.
- The React app running in your browser then calls the backend API via localhost:3001, the port published by the todo-backend container.
- The todo-backend container connects to todo-db:5432, using the database's internal network name.
- All three containers (Frontend, Backend, Database) are connected to todo-app-network, which is what makes name-based communication between containers possible.
- The Database container uses a volume mounted from your Mac's ./data directory to persist its data.
This clearly illustrates the power of container networking and volumes for multi-service applications.
Mini-Challenge: Add a “Delete Todo” Feature
You’ve built the core functionality. Now, let’s enhance it!
Challenge: Add a “Delete” button next to each todo item in the frontend. When clicked, it should:
- Send a DELETE request to a new backend endpoint (e.g., /api/todos/:id).
- Remove the todo from the database.
- Update the frontend to reflect the change.
Hint:
- You'll need to add a new route in backend/src/index.js to handle DELETE /api/todos/:id.
- You'll also need to modify frontend/src/App.jsx to render a delete button and call the new backend endpoint.
- Remember to rebuild and rerun your todo-backend and todo-frontend containers after making changes!
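To get you started on the backend half, here is one possible shape for the handler, written in the same style as the existing routes. Treat it as a sketch: makeDeleteHandler is our own name, and passing pool in explicitly just makes the logic easy to exercise without a live database.

```javascript
// Sketch of the DELETE endpoint for the mini-challenge. In backend/src/index.js
// you could instead inline this as app.delete('/api/todos/:id', ...) and use
// the shared pool directly.
function makeDeleteHandler(pool) {
  return async (req, res) => {
    const { id } = req.params;
    try {
      const result = await pool.query(
        'DELETE FROM todos WHERE id = $1 RETURNING id',
        [id]
      );
      if (result.rows.length === 0) {
        return res.status(404).json({ error: 'Todo not found' });
      }
      res.status(204).end(); // deleted: no body to return
    } catch (err) {
      console.error('Error deleting todo:', err.message);
      res.status(500).json({ error: 'Failed to delete todo' });
    }
  };
}
```

Wire it up with app.delete('/api/todos/:id', makeDeleteHandler(pool)), then rebuild and rerun the todo-backend container.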
What to Observe/Learn: This challenge reinforces the full-stack development cycle with containers: modify code, rebuild images, and rerun containers. It also helps solidify your understanding of API design and frontend-backend interaction.
Common Pitfalls & Troubleshooting
Working with multi-container applications can sometimes lead to tricky issues. Here are a few common pitfalls and how to troubleshoot them:
Container Connectivity Issues (e.g., “Connection refused”):
- Problem: Your backend can’t connect to the database, or your frontend can’t connect to the backend.
- Check:
  - Are all containers on the same todo-app-network? Use container inspect <container_name> to verify network settings.
  - Are you using the correct container names as hostnames (e.g., todo-db in the backend's database config)?
  - Are the ports correct (e.g., PostgreSQL listens on 5432, Express on 3001)?
  - Check container logs for connection errors: container logs <container_name>.
Persistent Data Not Working / Data Lost (e.g., database empty after restart):
- Problem: Your database data disappears when you restart the todo-db container.
- Check:
  - Is the volume correctly mounted? Verify the -v "$(pwd)/data:/var/lib/postgresql/data" syntax and ensure $(pwd) resolves to the correct absolute path on your host.
  - Check permissions on your host data directory. The user inside the container might not have write access. You might need to adjust permissions on your Mac using chmod (e.g., chmod -R 777 data for testing, but be cautious in production).
Frontend Not Displaying Data (but backend health endpoint works):
- Problem: Your frontend loads, but no todos appear, and there might be network errors in your browser's developer console.
- Check:
  - Did you rebuild the todo-frontend image with the correct VITE_API_URL_ARG? Vite bakes the backend URL into the bundle at build time, so a stale image keeps pointing at the old URL.
  - Is the backend container actually serving data on the /api/todos endpoint? Test it directly with curl http://localhost:3001/api/todos from your Mac.
  - Check for CORS (Cross-Origin Resource Sharing) errors in your browser's console. Browsers treat different ports as different origins, so a page served from localhost:5173 calling localhost:3001 is a cross-origin request. If requests are blocked, enable CORS in your Express backend; the cors npm package is the common approach.
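To make the CORS mechanics concrete, here is a minimal hand-rolled middleware sketch. In practice you would use the cors npm package (app.use(require('cors')())); this is only meant to show what such middleware does, and the allowed origin below is an assumption matching the frontend port used in this chapter:

```javascript
// Minimal CORS middleware sketch: adds the response headers that tell the
// browser that cross-origin requests from `origin` are allowed.
function allowOrigin(origin) {
  return (req, res, next) => {
    res.setHeader('Access-Control-Allow-Origin', origin);
    res.setHeader('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    if (req.method === 'OPTIONS') return res.end(); // answer preflight directly
    next(); // normal requests continue to the route handlers
  };
}

// In the Express backend you would register it before the routes:
// app.use(allowOrigin('http://localhost:5173'));
```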
Port Conflicts:
- Problem: container run fails with an error like "port already in use."
- Check: Another process on your Mac (or another container) is already using the host port you're trying to map (3001 or 5173).
- Solution: Stop the conflicting process/container, or choose a different host port mapping (e.g., -p 3002:3001).
Remember, the container logs <container_name> command is your best friend for debugging what’s happening inside your containers!
Summary
Phew! You’ve just completed a significant project, building a full-stack web application entirely powered by Apple’s native Linux container tools. Here are the key takeaways from this chapter:
- Multi-Service Architecture: You learned how to break down a complex application into independent, containerized services (database, backend, frontend).
- Container Networking: You mastered creating and using custom container networks to enable seamless communication between your services, using container names as hostnames.
- Persistent Storage: You used container volumes to ensure your database's data persists across container restarts, a critical aspect of stateful applications.
- Full-Stack Workflow: You experienced the complete development cycle for a containerized application, from writing code and Dockerfiles to building images and running containers.
- container CLI in Action: You gained hands-on experience orchestrating multiple containers and managing their configurations using various container CLI commands.
This project demonstrates the immense power and flexibility that Apple’s container tools bring to macOS developers. You’re now equipped to tackle more complex containerized projects and streamline your development workflows!
What’s Next?
In the next chapter, we’ll explore more advanced topics, perhaps diving into multi-container orchestration with container compose (if available in future releases) or integrating these tools into CI/CD pipelines.
References
- Apple Container GitHub Repository
- PostgreSQL Official Docker Hub Page
- Node.js Official Docker Hub Page
- Vite Official Documentation
- Express.js Official Documentation
- Node-Postgres (pg) Documentation
This page is AI-assisted and reviewed. It references official documentation and recognized resources where relevant.