Chapter Introduction
Welcome to Chapter 2 of our Node.js backend journey! In this chapter, we’ll take a fundamental leap towards building production-ready applications by containerizing our Node.js service using Docker and orchestrating its local development environment with Docker Compose. This step is crucial for ensuring consistency across development, testing, and production environments, eliminating the dreaded “it works on my machine” syndrome.
We will start by creating a simple Fastify application, then define a Dockerfile to package it into a lightweight, isolated container image. Following this, we’ll introduce docker-compose.yml to define and run multi-container Docker applications, setting the stage for integrating databases and other services in future chapters. By the end of this chapter, you’ll have your Node.js application running reliably inside Docker containers, ready for scalable deployment.
Prerequisites:
- Basic Node.js project structure from Chapter 1 (or a simple `package.json` and `app.js`).
- Docker Desktop (or Docker Engine) installed and running on your system.
- Familiarity with basic terminal commands.
Expected Outcome:
Your Node.js application will be successfully containerized and runnable via Docker. You will be able to start and stop your application and its dependencies (even if only a placeholder database for now) using a single docker compose up command, providing a consistent and isolated development environment.
Planning & Design
Before diving into the code, let’s visualize our local development environment architecture and establish a clear file structure.
Component Architecture
Our local setup will involve two primary services orchestrated by Docker Compose: our Node.js API and a placeholder for a database service. This modular approach allows for independent scaling and management of each component.
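An informal sketch of that setup (an illustration added here, not a reproduction of any original figure):

```
Host machine
└── docker compose
    ├── app  (Node.js / Fastify)    host port 3000 → container port 3000
    │     └── reaches the database as "db" over the Compose network
    └── db   (PostgreSQL)           host port 5432 → container port 5432
          └── persists data in the named volume db_data
```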
File Structure
We’ll maintain a clean, organized project structure. The key additions in this chapter will be the Dockerfile and docker-compose.yml at the project root, alongside a basic src/app.js and package.json.
/your-project-root
├── src/
│ └── app.js # Our Fastify application entry point
├── package.json # Node.js project dependencies and scripts
├── .env # Environment variables for local development
├── .dockerignore # Files/folders to exclude from Docker image
├── Dockerfile # Instructions to build our Node.js Docker image
└── docker-compose.yml # Orchestration for multi-container applications
Step-by-Step Implementation
Let’s begin by setting up our basic Node.js application and then containerizing it.
1. Setup: Initialize Node.js Project & Basic Fastify App
First, ensure you have a basic Node.js project. If you’re starting fresh or want to re-verify, follow these steps.
a) Create Project Directory and package.json
Navigate to your desired project directory and initialize a new Node.js project.
mkdir my-nodejs-api
cd my-nodejs-api
npm init -y
b) Install Fastify
We’ll use Fastify for its performance and developer experience.
npm install fastify pino
- `fastify`: The web framework itself.
- `pino`: A very fast Node.js logger, often used with Fastify, which promotes logging best practices by logging to `stdout`/`stderr`.
c) Create src/app.js
This will be our simple Fastify “Hello World” application.
Create a src directory and then app.js inside it.
mkdir src
touch src/app.js
Now, add the following code to src/app.js:
// src/app.js
import Fastify from 'fastify';
import pino from 'pino';
// Initialize Fastify with a production-ready logger
// For production, pino logs JSON to stdout/stderr which is easily consumed by log aggregators.
// For development, we can use a prettyfier to make logs readable.
const fastify = Fastify({
logger: pino({
level: process.env.LOG_LEVEL || 'info', // Default log level
transport: process.env.NODE_ENV !== 'production' ? {
target: 'pino-pretty',
options: {
colorize: true,
translateTime: 'SYS:HH:MM:ss Z',
ignore: 'pid,hostname',
},
} : undefined, // In production, log raw JSON
}),
});
// Register a simple health check route
fastify.get('/health', async (request, reply) => {
reply.send({ status: 'ok', timestamp: new Date().toISOString() });
});
// Declare a route
fastify.get('/', async (request, reply) => {
request.log.info('Root endpoint accessed'); // Use the request logger
reply.send({ message: 'Hello from your Fastify API!' });
});
// Centralized error handling
fastify.setErrorHandler((error, request, reply) => {
request.log.error({ error }, 'An error occurred');
reply.status(error.statusCode || 500).send({
success: false,
message: error.message || 'Internal Server Error',
code: error.statusCode || 500,
});
});
// Start the server
const start = async () => {
try {
const port = process.env.PORT || 3000;
const host = process.env.HOST || '0.0.0.0'; // Listen on all interfaces for Docker
await fastify.listen({ port: parseInt(port, 10), host });
fastify.log.info(`Server listening on ${host}:${port}`);
} catch (err) {
fastify.log.error(err, 'Server failed to start');
process.exit(1);
}
};
start();
// Handle graceful shutdown
process.on('SIGINT', async () => {
fastify.log.info('SIGINT received, shutting down gracefully...');
await fastify.close();
fastify.log.info('Server closed');
process.exit(0);
});
process.on('SIGTERM', async () => {
fastify.log.info('SIGTERM received, shutting down gracefully...');
await fastify.close();
fastify.log.info('Server closed');
process.exit(0);
});
Explanation:
- We use `import` syntax, so we’ll add `"type": "module"` to `package.json` shortly.
- Fastify is initialized with `pino` for robust logging. In development, `pino-pretty` makes logs human-readable; in production, raw JSON logs are ideal for log aggregation services.
- A `/health` endpoint is added, which is standard for containerized applications and useful for load balancers and orchestrators.
- A basic `/` route returns a “Hello” message.
- Centralized error handling ensures all unhandled errors are caught and returned consistently.
- The server listens on `0.0.0.0` to be accessible from outside the container.
- Graceful shutdown handlers (`SIGINT`, `SIGTERM`) ensure the server cleans up resources before exiting, crucial for production environments.
d) Update package.json
Add "type": "module" to enable ES module syntax, plus a start script. (The `// Add this line` comments below are annotations for this book; remove them in your actual `package.json`, since JSON does not allow comments.)
// package.json
{
"name": "my-nodejs-api",
"version": "1.0.0",
"description": "A production-ready Node.js backend API",
"main": "src/app.js",
"type": "module", // Add this line
"scripts": {
"start": "node src/app.js", // Add this line
"dev": "node --watch src/app.js", // Optional: for development with auto-restart
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"fastify": "^4.x.x", // Use latest stable Fastify 4.x
"pino": "^8.x.x" // Use latest stable pino 8.x
},
"devDependencies": {
"pino-pretty": "^10.x.x" // Use latest stable pino-pretty 10.x
}
}
e) Create .env file
This file will hold environment variables for local development.
touch .env
Add the following to .env:
# .env
PORT=3000
HOST=0.0.0.0
LOG_LEVEL=debug
NODE_ENV=development
f) Test the Node.js Application Locally (Pre-Docker)
Before containerizing, ensure the app runs directly on your machine.
npm run dev # or npm start
Open your browser to http://localhost:3000 or use curl:
curl http://localhost:3000
curl http://localhost:3000/health
You should see {"message":"Hello from your Fastify API!"} and {"status":"ok", ...} respectively, and logs in your terminal. Press Ctrl+C to stop the server.
2. Core Implementation: Dockerfile
Now, let’s create our Dockerfile to build a Docker image for our Node.js application. We’ll use a multi-stage build for a smaller, more secure production image.
a) Create .dockerignore
Similar to .gitignore, this file tells Docker which files and directories to exclude when building the image. This prevents unnecessary files (like node_modules from your host, editor configs, etc.) from being copied into the image, resulting in smaller and faster builds.
touch .dockerignore
Add the following to .dockerignore:
# .dockerignore
node_modules
npm-debug.log
.env
.git
.gitignore
.vscode
.DS_Store
Dockerfile
docker-compose.yml
README.md
b) Create Dockerfile
touch Dockerfile
Add the following content to Dockerfile:
# Dockerfile
# --- Stage 1: Build Stage ---
# This stage installs dependencies and builds the application.
# We use a Node.js image with a specific version (LTS recommended)
FROM node:20-alpine AS builder
# Set the working directory inside the container
WORKDIR /app
# Copy package.json and package-lock.json first to leverage Docker layer caching.
# This step is only re-run if package.json or package-lock.json changes.
COPY package*.json ./
# Install dependencies. Use `npm ci` for clean installs in CI/CD environments,
# ensuring exact versions from package-lock.json.
# For production builds, we typically skip dev dependencies.
# (--only=production is the older spelling; npm 8+ prefers --omit=dev.)
RUN npm ci --only=production
# Copy the rest of the application source code
COPY src src/
# --- Stage 2: Production Stage ---
# This stage creates a minimal image with only the necessary runtime components.
FROM node:20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy only the production dependencies and built application from the builder stage
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/src src/
# Copy package.json so npm scripts can run if needed (not for dependencies).
# Note: Docker does not support inline comments after an instruction,
# so comments must be on their own line.
COPY package.json ./
# Expose the port on which the Fastify application will listen
EXPOSE 3000
# Set environment variables for the production environment
ENV NODE_ENV=production
ENV PORT=3000
ENV HOST=0.0.0.0
ENV LOG_LEVEL=info
# Command to run the application when the container starts
# Use `node --enable-source-maps` for better stack traces in production if transpiled code is used.
CMD ["node", "src/app.js"]
Explanation of Dockerfile Best Practices:
- Multi-Stage Build: The `builder` stage installs dependencies, creating a layer with all build tools and `node_modules`. The final stage copies only the production-only `node_modules` and the source code from the `builder` stage. This significantly reduces the final image size by discarding development dependencies and build artifacts.
- Base Image: `node:20-alpine` is chosen for its small size and Node.js 20 LTS. Alpine Linux is a lightweight distribution, ideal for containers.
- `WORKDIR /app`: Sets the current working directory for subsequent instructions.
- Layer Caching: Copying `package*.json` separately before `npm ci` allows Docker to cache the dependency installation step. If only source code changes, Docker rebuilds from the `COPY src src/` layer onwards, speeding up builds.
- `npm ci --only=production`: Ensures a clean installation of the exact dependency versions specified in `package-lock.json`, and installs only production dependencies.
- Non-Root User (Future Improvement): For enhanced security, in production we’d typically create a non-root user and run the application as that user (`USER node`). We’ll introduce this in a later chapter when discussing production hardening.
- `EXPOSE 3000`: Documents that the container listens on port 3000. It doesn’t publish the port; that’s done with `docker run -p` or `docker-compose.yml`.
- `ENV` variables: Set default environment variables inside the container. These can be overridden at runtime.
- `CMD ["node", "src/app.js"]`: The default command executed when the container starts. Using the array (exec) form is preferred, as it allows Docker to manage the Node process directly, ensuring proper signal handling.
3. Testing This Component: Build and Run Docker Image
Now, let’s build our Docker image and run a container from it.
a) Build the Docker Image
From your project root, run:
docker build -t my-nodejs-api:1.0.0 .
# -t : tags the image with a name and optional version (e.g., my-nodejs-api:1.0.0)
# . : specifies the build context (current directory, where Dockerfile is located)
You should see output indicating Docker building layers, installing dependencies, and finally tagging the image.
b) Run a Container from the Image
docker run -p 3000:3000 --name my-api-container my-nodejs-api:1.0.0
# -p 3000:3000 : maps host port 3000 to container port 3000
# --name my-api-container : gives a friendly name to the running container
c) Verify the Running Application
Open your browser to http://localhost:3000 or use curl:
curl http://localhost:3000
curl http://localhost:3000/health
You should see the “Hello” message and “ok” status. In your terminal, you’ll see the logs from the Fastify application running inside the container.
Press Ctrl+C to stop the container (if running in foreground). To remove the container: docker rm -f my-api-container.
4. Core Implementation: Docker Compose
Running individual Docker commands can become cumbersome for applications with multiple services (like a backend API, a database, a cache, etc.). Docker Compose simplifies this by allowing you to define and run multi-container Docker applications using a single YAML file.
a) Create docker-compose.yml
touch docker-compose.yml
Add the following content to docker-compose.yml:
# docker-compose.yml
version: '3.8' # Compose file format version (optional; the modern docker compose CLI ignores it)
services:
# Node.js API Service
app:
build:
context: . # Build the image from the Dockerfile in the current directory
dockerfile: Dockerfile
container_name: my-nodejs-api-app
# Map host port 3000 to container port 3000
# The first port is the host port, the second is the container port.
ports:
- "3000:3000"
# Mount the local source code into the container for live reloading during development.
# This is often used for development, but remove or adjust for production builds.
# volumes:
# - ./src:/app/src
# - ./package.json:/app/package.json
# - ./node_modules:/app/node_modules # For dev, ensure node_modules are consistent
# Environment variables specific to the app service.
# We load them from a .env file at the project root.
env_file:
- ./.env
# Explicitly define networks if needed for more complex setups.
# networks:
# - backend-network
# Restart policy: always restart unless stopped or Docker daemon is stopped/restarted.
restart: unless-stopped
  # Dependencies: this service depends on the db service being healthy.
  # A plain depends_on only waits for the db *container* to start;
  # the service_healthy condition below also waits for its healthcheck to pass.
depends_on:
db:
condition: service_healthy # Wait for healthcheck on the db service (defined below)
# Database Service (PostgreSQL placeholder)
db:
image: postgres:16-alpine # Using a lightweight Alpine-based PostgreSQL image
container_name: my-nodejs-api-db
# Environment variables for PostgreSQL configuration
environment:
POSTGRES_DB: ${DB_NAME:-api_db}
POSTGRES_USER: ${DB_USER:-user}
POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
# Persist database data to a named volume to prevent data loss on container removal
volumes:
- db_data:/var/lib/postgresql/data
ports:
- "5432:5432" # Expose PostgreSQL port
# Health check for the database service
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-user} -d ${DB_NAME:-api_db}"]
interval: 5s
timeout: 5s
retries: 5
start_period: 10s # Give the DB some time to start before checking
restart: unless-stopped
# networks:
# - backend-network
# Define named volumes for data persistence
volumes:
db_data:
# Define networks if needed
# networks:
# backend-network:
# driver: bridge
Explanation of docker-compose.yml Best Practices:
- `version: '3.8'`: Specifies the Compose file format version. The modern `docker compose` CLI treats it as optional and ignores it, but keeping it documents the intended format.
- `services`: Defines the individual containers that make up your application.
- `app` service:
  - `build`: Instructs Docker Compose to build an image from a `Dockerfile` in the specified `context`.
  - `container_name`: Assigns a static name for easier identification.
  - `ports`: Maps host ports to container ports.
  - `env_file`: Loads environment variables from a `.env` file, keeping sensitive data out of the `docker-compose.yml` itself.
  - `restart: unless-stopped`: Ensures containers automatically restart unless explicitly stopped.
  - `depends_on`: Specifies service dependencies. The `service_healthy` condition is crucial to wait for the database to be truly ready, not just started.
- `db` service (PostgreSQL):
  - `image`: Uses an official `postgres` Docker image.
  - `environment`: Sets PostgreSQL-specific environment variables for the database name, user, and password. The `${VAR:-default}` syntax allows overriding the defaults from the `.env` file.
  - `volumes`: Persists database data to a named Docker volume (`db_data`). This ensures your data isn’t lost when the container is removed or recreated.
  - `healthcheck`: Defines how Docker should check whether the database service is healthy. This is vital for `depends_on: service_healthy`.
- `volumes`: Defines named volumes for persistent data storage.
- `networks` (commented out): For more complex setups, you can define custom bridge networks to isolate services or connect them securely.
b) Update .env for Database Variables
Add database-related environment variables to your .env file.
# .env
PORT=3000
HOST=0.0.0.0
LOG_LEVEL=debug
NODE_ENV=development
# Database Configuration
DB_HOST=db # This is the service name in docker-compose.yml
DB_PORT=5432
DB_NAME=api_db
DB_USER=user
DB_PASSWORD=password
- `DB_HOST=db`: Crucially, when containers communicate within a Docker Compose network, they use the service name (`db` in this case) as the hostname.
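To make the service-name point concrete, here is a hypothetical helper (not part of the chapter’s code yet) that assembles a PostgreSQL connection URL from these variables. Note that the host is `db`, the Compose service name, rather than `localhost`:

```javascript
// Hypothetical helper: build a PostgreSQL connection URL from the same
// environment variables we just put in .env. Inside the Compose network
// the hostname is the service name ("db"), not localhost.
function buildDatabaseUrl(env) {
  const host = env.DB_HOST || 'db';
  const port = env.DB_PORT || '5432';
  const name = env.DB_NAME || 'api_db';
  const user = env.DB_USER || 'user';
  const password = env.DB_PASSWORD || 'password';
  return `postgres://${user}:${password}@${host}:${port}/${name}`;
}

// With the .env values above this yields:
// postgres://user:password@db:5432/api_db
console.log(buildDatabaseUrl(process.env));
```

We will wire a real database client to such a URL in the next chapter; for now this only illustrates why `DB_HOST` must be the service name.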
5. Testing This Component: Run with Docker Compose
Now, let’s bring up our entire application stack using Docker Compose.
a) Start the Services
From your project root, run:
docker compose up --build -d
# --build : rebuilds images if Dockerfile or context has changed
# -d : runs containers in detached mode (in the background)
Docker Compose will first build your app image (if --build is used and necessary), then start the db service, wait for its health check to pass, and finally start your app service.
b) Verify Running Services
Check the status of your running containers:
docker ps
You should see both my-nodejs-api-app and my-nodejs-api-db containers listed as Up (healthy) or Up.
c) Verify the Application
Again, open your browser to http://localhost:3000 or use curl:
curl http://localhost:3000
curl http://localhost:3000/health
The application should respond correctly. You can view logs for a specific service:
docker compose logs app
docker compose logs db
d) Stop and Clean Up
When you’re done, stop and remove the containers:
docker compose down
# This stops containers, removes them, and removes default networks.
# To remove volumes (e.g., db_data), add -v:
# docker compose down -v
Production Considerations
While our current Dockerfile is a good start, here are some considerations for production:
- Non-Root User: Running containers as a non-root user (`USER node` in the `Dockerfile`) significantly reduces the attack surface. We will implement this in a later security chapter.
- Resource Limits: In production, you’d set CPU and memory limits for containers in your orchestration platform (e.g., Kubernetes, AWS ECS) or directly in `docker-compose.yml` for local testing.
- Health Checks: Our `docker-compose.yml` already includes health checks for the database. For the Node.js app, the `/health` endpoint is ready, but we’d integrate it into a `healthcheck` block in `docker-compose.yml` or your orchestrator.
- Secrets Management: Environment variables in `.env` are suitable for local development but are not secure for production. We’ll explore dedicated secrets management solutions (e.g., AWS Secrets Manager, HashiCorp Vault) in later chapters.
- Logging: Our Fastify app is configured to log JSON to `stdout`/`stderr` in production, which is a Docker best practice. Log aggregators (like the ELK stack, Datadog, CloudWatch Logs) can easily consume these structured logs.
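As a sketch of what such an app-level health check might look like in `docker-compose.yml` (an assumption, not yet part of our file; it relies on the BusyBox `wget` that ships in `node:20-alpine`):

```yaml
services:
  app:
    # ...existing build/ports/env_file settings from above...
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 5s
```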
Code Review Checkpoint
At this point, you should have the following files and changes:
- `src/app.js`: Our Fastify application with logging, error handling, and graceful shutdown.
- `package.json`: Updated with `"type": "module"` and `start`/`dev` scripts.
- `.env`: Contains `PORT`, `HOST`, `LOG_LEVEL`, `NODE_ENV`, and placeholder `DB_` variables.
- `.dockerignore`: Ensures only necessary files are included in the Docker image.
- `Dockerfile`: A multi-stage Dockerfile for building a lean, production-ready Node.js image.
- `docker-compose.yml`: Defines and orchestrates our Node.js `app` service and a `db` (PostgreSQL) service, including volumes and health checks.
Your project structure should look like this:
my-nodejs-api/
├── src/
│ └── app.js
├── .env
├── .dockerignore
├── Dockerfile
├── docker-compose.yml
└── package.json
These files together provide a robust, containerized foundation for our Node.js application, making local development consistent and setting us up for seamless deployment.
Common Issues & Solutions
“Error: listen EADDRINUSE: address already in use :::3000”
- Issue: Another process on your host machine (or a previous Docker container) is already using port 3000.
- Debugging:
  - On Linux/macOS: `lsof -i :3000` to find the process ID.
  - On Windows: `netstat -ano | findstr :3000`, then `tasklist | findstr <PID>`.
- Solution: Stop the conflicting process, or change the host port mapping in `docker-compose.yml` (e.g., `"8080:3000"`).
- Prevention: Always stop Docker Compose services with `docker compose down` to free up ports.
“Cannot find module ‘pino-pretty’” or similar during `docker compose up`
- Issue: `pino-pretty` is a development dependency, and our `Dockerfile` uses `npm ci --only=production`. If `NODE_ENV` is set to `development` inside the container, the app will try to load `pino-pretty` and fail because it isn’t installed.
- Debugging: Check `docker compose logs app` for details. Verify `NODE_ENV` in `.env` and in the `Dockerfile`.
- Solution:
  - Option 1 (Recommended): Ensure `NODE_ENV` is `production` inside the container so `pino-pretty` is never loaded. Note that variables loaded via `env_file` override the `Dockerfile`’s `ENV`, so if your `.env` sets `NODE_ENV=development`, either override it with an `environment:` entry (`NODE_ENV: production`) on the `app` service or remove it from `.env`. For local development, `pino-pretty` is installed on your host and used when running via `npm run dev`.
  - Option 2 (For containerized development with pretty logs): Modify the `Dockerfile` to install dev dependencies in the builder stage (`RUN npm ci` instead of `npm ci --only=production`), or add `pino-pretty` to production dependencies (not recommended). Alternatively, map your local `node_modules` into the container using volumes, but this can lead to inconsistencies. For now, stick to `--only=production` for the Docker image and rely on `pino-pretty` on the host for `npm run dev`.
“db_1 | FATAL: database “api_db” does not exist” in database logs
- Issue: A plain `depends_on` in `docker-compose.yml` only waits for the container to start, not for the database inside it to be fully initialized, ready to accept connections, and to have created the specified database.
- Debugging: Check `docker compose logs db`. Ensure the `healthcheck` for the `db` service is properly configured and passing.
- Solution: We’ve already implemented a `healthcheck` for the `db` service and `condition: service_healthy` for `app`’s `depends_on`, which is the correct approach. If issues persist, increase `start_period` for the `db` health check, or add a small retry loop to your application’s startup logic (though `depends_on: service_healthy` is generally sufficient).
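If you do want that application-side safety net, a small retry wrapper is one option. This is a sketch: `connect` stands in for whatever database client call we introduce in the next chapter.

```javascript
// Hypothetical sketch: retry an async connection attempt a few times
// with a fixed delay before giving up, in case the app process starts
// before Postgres is fully ready to accept connections.
async function connectWithRetry(connect, { retries = 5, delayMs = 1000 } = {}) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connect(); // success: return whatever the client gives us
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts: surface the error
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

You would call this from `start()` before `fastify.listen`, so the server only begins accepting traffic once the database is reachable.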
Testing & Verification
To verify everything is working as expected from this chapter:
1. Start all services:

   docker compose up --build -d

2. Ensure both `app` and `db` services start without errors and are reported as `Up` (and `healthy` for `db` after a short period) by `docker ps`.
3. Access the API: open your browser to http://localhost:3000 and http://localhost:3000/health. Confirm you receive the expected “Hello from your Fastify API!” and “ok” status messages.
4. Check logs:

   docker compose logs app
   docker compose logs db

   Verify that the application logs are visible and make sense (e.g., `Server listening on 0.0.0.0:3000`, `Root endpoint accessed`). For the database, you should see startup logs indicating it’s ready.
5. Stop and clean up:

   docker compose down -v

   This stops and removes the containers and also deletes the `db_data` volume, ensuring a clean slate for the next run.
Summary & Next Steps
Congratulations! In this chapter, you’ve successfully containerized your Node.js Fastify application using Docker and orchestrated its local development environment with Docker Compose. You now have:
- A basic Fastify API with robust logging and error handling.
- A multi-stage `Dockerfile` to build efficient and secure Docker images.
- A `.dockerignore` file to optimize image size.
- A `docker-compose.yml` file to define and run your multi-service application (Node.js API + PostgreSQL database placeholder) with health checks and persistent storage.
- A consistent, isolated development environment that mirrors production more closely.
This foundation is critical for building scalable, maintainable, and production-ready applications. With your API now running in a container, we can move on to building out its features without worrying about environment inconsistencies.
In Chapter 3: Setting Up API Routing & Database Connection, we will expand our Fastify application by defining more API routes and establish a connection to our PostgreSQL database, beginning the process of interacting with persistent data. We’ll also introduce a configuration management strategy to handle different environments.