Introduction

Welcome back, intrepid developer! In our journey through Testcontainers, we’ve explored its core concepts, set up basic tests with various services, and understood the magic it performs to give us clean, isolated environments. Now, it’s time to put all that knowledge into practice with a realistic, multi-service project.

In this chapter, we’ll build a simplified API Gateway and a backend service, both written in Node.js with TypeScript. The backend service will interact with a PostgreSQL database for persistence and a Redis cache for speed. Our mission? To craft robust integration tests for this entire stack using Testcontainers. This setup closely mimics common microservices architectures, giving you invaluable experience in tackling real-world testing challenges. We’ll ensure our tests are fast, reliable, and truly reflective of how these services will behave in production.

By the end of this chapter, you’ll be able to:

  • Understand and set up a multi-service testing environment with Testcontainers in Node.js.
  • Containerize custom applications (our backend and gateway) for testing.
  • Handle inter-container communication effectively.
  • Write comprehensive integration tests for an API Gateway and its downstream services.

Ready to build and test a mini-microservices ecosystem? Let’s dive in!

Core Concepts: Project Architecture and Testing Strategy

Before we write any code, let’s understand the architecture we’re building and how Testcontainers will fit into our testing strategy.

Project Architecture Overview

Imagine a simple application with two primary custom services:

  1. API Gateway: This is our entry point. It receives requests from clients and routes them to the appropriate backend service. It might also perform authentication, rate limiting, or even caching. For this project, it will act as a simple proxy.
  2. Backend Service: This service handles business logic. It will communicate with a PostgreSQL database to store and retrieve data, and a Redis cache to speed up common data access patterns.

Here’s a visual representation of how these components interact:

flowchart TD
    User[User/Test Runner] --> GatewayService[API Gateway]
    GatewayService --> BackendService(Backend Service)
    BackendService --> PostgreSQL[PostgreSQL Database]
    BackendService --> Redis[Redis Cache]

Our Node.js applications (API Gateway and Backend Service) will be running within their own Docker containers during testing, just like PostgreSQL and Redis. This provides a completely isolated and consistent environment for every test run.

Why Testcontainers is Crucial for This Setup

In a multi-service architecture like this, setting up testing environments can be a nightmare:

  • Database state: You need a fresh, empty database for each test to avoid test pollution.
  • Cache state: Similar to the database, the cache needs to be clean.
  • Service dependencies: How do your custom services find and connect to the database and cache? What if multiple developers run tests concurrently?
  • Version consistency: Ensuring everyone uses the same PostgreSQL or Redis version locally is hard.

Testcontainers elegantly solves these problems:

  • Disposable environments: Each test run spins up fresh, lightweight instances of PostgreSQL, Redis, and even our custom Node.js services, all within Docker containers.
  • Isolation: Tests run in complete isolation from each other and from your local machine’s services. No more “it works on my machine” excuses!
  • Realistic environment: You’re testing against actual PostgreSQL and Redis instances, not in-memory fakes or mocks that might behave differently. This drastically reduces the chances of subtle bugs appearing only in production.
  • Simplified setup: No need for complex local Docker Compose files for testing. Testcontainers handles the container lifecycle programmatically.

Our Testing Strategy

Our goal is to write integration tests. This means we won’t be unit testing individual functions within the API Gateway or Backend Service in this chapter. Instead, we’ll focus on:

  • Sending HTTP requests to the API Gateway.
  • Verifying that the requests are correctly routed to the Backend Service.
  • Ensuring the Backend Service correctly interacts with PostgreSQL (e.g., data is stored and retrieved) and Redis (e.g., data is cached and invalidated).
  • Validating the final HTTP response from the API Gateway.

Essentially, we’re testing the entire “slice” of our application stack that handles a particular request, from the user’s perspective down to the persistence layer.

Step-by-Step Implementation

Let’s start by setting up our project, then incrementally build our services and finally, our tests.

1. Project Setup and Dependencies

First, create a new directory for our project and initialize a Node.js project with TypeScript.

mkdir api-gateway-backend-project
cd api-gateway-backend-project
npm init -y
npm install typescript@5.x ts-node@10.x @types/node@20.x --save-dev
npm install express@4.x @types/express@4.x pg@8.x @types/pg@8.x redis@4.x axios@1.x body-parser@1.x @types/body-parser@1.x --save
npm install jest@29.x @types/jest@29.x ts-jest@29.x --save-dev
npm install testcontainers@10.x @testcontainers/postgresql@10.x @testcontainers/redis@10.x --save-dev

Note on Versions:

  • Node.js: We’ll assume a development environment running Node.js v20.x (LTS) or later.
  • TypeScript: v5.x is the current major version.
  • Testcontainers: v10.x splits the module-specific containers into scoped packages, which is why @testcontainers/postgresql and @testcontainers/redis are installed alongside the core testcontainers package. Pin the most recent stable releases available to you.
  • axios: Installed explicitly because the gateway uses it to call the backend.
  • Other packages: Standard stable versions.

Next, create a tsconfig.json file for TypeScript configuration.

// tsconfig.json
{
  "compilerOptions": {
    "target": "es2020",
    "module": "commonjs",
    "rootDir": "./",
    "outDir": "./dist",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*.ts", "tests/**/*.ts"],
  "exclude": ["node_modules"]
}

Configure Jest by creating jest.config.ts in the root.

// jest.config.ts
import type { Config } from '@jest/types';

const config: Config.InitialOptions = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['<rootDir>/tests/**/*.test.ts'],
  setupFilesAfterEnv: [], // No global setup needed for Testcontainers
};

export default config;

Finally, add some scripts to your package.json:

// package.json (relevant section)
  "scripts": {
    "build": "tsc",
    "start:backend": "ts-node src/backend/server.ts",
    "start:gateway": "ts-node src/gateway/server.ts",
    "test": "jest --runInBand --forceExit"
  },

The --runInBand flag makes Jest run test files serially. Testcontainers is designed to handle concurrent containers, but serial execution avoids port conflicts and resource contention when many containers start at once, and it simplifies debugging. --forceExit stops Jest from hanging on open handles (such as lingering DB connections) once the tests finish.
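If you prefer keeping these settings in configuration rather than CLI flags, a rough equivalent can live in jest.config.ts. This is a sketch: maxWorkers: 1 approximates but is not identical to --runInBand (which runs tests in the main process rather than a single worker).

```typescript
// jest.config.ts (sketch): serial execution plus a generous per-test timeout,
// since container pulls and image builds can dominate the first run.
import type { Config } from '@jest/types';

const config: Config.InitialOptions = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['<rootDir>/tests/**/*.test.ts'],
  maxWorkers: 1,        // run test files one at a time (close to --runInBand)
  testTimeout: 120_000, // Jest's default of 5s is far too short for container startup
};

export default config;
```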

2. Custom Dockerfiles for Our Services

To run our Node.js services within Testcontainers, we need Docker images for them. This means creating a Dockerfile for each.

Create a src directory, and within it, backend and gateway directories.

src/backend/Dockerfile:

# src/backend/Dockerfile
# Use a specific Node.js LTS version for consistency
FROM node:20-alpine AS build

# Set the working directory inside the container
WORKDIR /app

# Copy package.json and package-lock.json to install dependencies
COPY package*.json ./

# Install application dependencies
RUN npm install

# Copy the rest of the application source code
COPY . .

# Build the TypeScript application
RUN npm run build

# Start a new stage for the production image for smaller size
FROM node:20-alpine

# Set the working directory
WORKDIR /app

# Copy only necessary files from the build stage
COPY --from=build /app/package*.json ./
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist

# Expose the port the backend service will listen on
EXPOSE 3001

# Command to run the application
CMD ["node", "dist/backend/server.js"]

Explanation:

  • We use a multi-stage build to keep the final image small. The build stage installs all dependencies and compiles the TypeScript.
  • The final stage copies the compiled JavaScript and the node_modules from the build stage, leaving the TypeScript sources behind. (Since npm install also pulls dev dependencies, you could shrink the image further by running npm ci --omit=dev in the final stage instead of copying node_modules.)
  • EXPOSE 3001 tells Docker that this container will listen on port 3001.
  • CMD ["node", "dist/backend/server.js"] is the command to start our compiled backend service.
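Because the image is built with the project root as its context, everything in that directory gets sent to the Docker daemon and swept up by COPY . .. A .dockerignore at the project root (a suggested sketch, not something the chapter's code strictly requires) keeps host artifacts out of the image and speeds up builds:

```
# .dockerignore (project root) -- suggested exclusions
node_modules
dist
.git
```

Excluding node_modules matters most: the container runs its own npm install, and copying a host node_modules (possibly built for a different OS) both bloats the context and risks native-module mismatches.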

src/gateway/Dockerfile:

# src/gateway/Dockerfile
# Use a specific Node.js LTS version for consistency
FROM node:20-alpine AS build

# Set the working directory inside the container
WORKDIR /app

# Copy package.json and package-lock.json to install dependencies
COPY package*.json ./

# Install application dependencies
RUN npm install

# Copy the rest of the application source code
COPY . .

# Build the TypeScript application
RUN npm run build

# Start a new stage for the production image for smaller size
FROM node:20-alpine

# Set the working directory
WORKDIR /app

# Copy only necessary files from the build stage
COPY --from=build /app/package*.json ./
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist

# Expose the port the gateway will listen on
EXPOSE 3000

# Command to run the application
CMD ["node", "dist/gateway/server.js"]

Explanation: This Dockerfile is very similar to the backend’s, but it exposes port 3000 for the gateway.

3. Backend Service Implementation (src/backend/server.ts)

Our backend service will have one endpoint to create items and another to fetch them. It uses PostgreSQL for storage and Redis for caching.

// src/backend/server.ts
import express from 'express';
import bodyParser from 'body-parser';
import { Pool } from 'pg';
import { createClient, RedisClientType } from 'redis';

const app = express();
const port = parseInt(process.env.PORT || '3001', 10);

// --- PostgreSQL Setup ---
const pgPool = new Pool({
  user: process.env.PG_USER || 'user',
  host: process.env.PG_HOST || 'localhost',
  database: process.env.PG_DATABASE || 'testdb',
  password: process.env.PG_PASSWORD || 'password',
  port: parseInt(process.env.PG_PORT || '5432'),
});

// Basic DB setup: create table if it doesn't exist
async function initDb() {
  try {
    await pgPool.query(`
      CREATE TABLE IF NOT EXISTS items (
        id SERIAL PRIMARY KEY,
        name VARCHAR(255) NOT NULL,
        description TEXT
      );
    `);
    console.log('PostgreSQL table "items" ensured.');
  } catch (error) {
    console.error('Error ensuring PostgreSQL table:', error);
    process.exit(1); // Exit if DB setup fails
  }
}

// --- Redis Setup ---
let redisClient: RedisClientType;

async function initRedis() {
  try {
    redisClient = createClient({
      url: process.env.REDIS_URL || 'redis://localhost:6379'
    });
    redisClient.on('error', (err) => console.error('Redis Client Error', err));
    await redisClient.connect();
    console.log('Redis client connected.');
  } catch (error) {
    console.error('Error connecting to Redis:', error);
    process.exit(1);
  }
}

// --- Express Middleware ---
app.use(bodyParser.json());

// --- API Endpoints ---

// Create an item
app.post('/items', async (req, res) => {
  const { name, description } = req.body;
  if (!name) {
    return res.status(400).send('Name is required.');
  }

  try {
    const result = await pgPool.query(
      'INSERT INTO items (name, description) VALUES ($1, $2) RETURNING *',
      [name, description]
    );
    // Invalidate cache for all items
    await redisClient.del('all_items_cache');
    res.status(201).json(result.rows[0]);
  } catch (error) {
    console.error('Error creating item:', error);
    res.status(500).send('Internal server error.');
  }
});

// Get all items
app.get('/items', async (req, res) => {
  const cacheKey = 'all_items_cache';
  try {
    // Try to fetch from cache
    const cachedItems = await redisClient.get(cacheKey);
    if (cachedItems) {
      console.log('Fetching items from Redis cache.');
      return res.status(200).json(JSON.parse(cachedItems));
    }

    // If not in cache, fetch from DB
    console.log('Fetching items from PostgreSQL.');
    const result = await pgPool.query('SELECT * FROM items');
    const items = result.rows;

    // Store in cache
    await redisClient.set(cacheKey, JSON.stringify(items), { EX: 60 }); // Cache for 60 seconds
    res.status(200).json(items);
  } catch (error) {
    console.error('Error fetching items:', error);
    res.status(500).send('Internal server error.');
  }
});

// Health check endpoint
app.get('/health', (req, res) => {
  res.status(200).send('Backend Service is healthy');
});


// Start the server
async function startServer() {
  await initDb();
  await initRedis();
  app.listen(port, () => {
    console.log(`Backend Service listening on port ${port}`);
  });
}

if (process.env.NODE_ENV !== 'test') { // Only start server if not in test env
  startServer();
}

// Export app and relevant clients for testing
export { app, pgPool, redisClient };

Explanation:

  • We use environment variables (PG_HOST, REDIS_URL, etc.) to configure database and Redis connections. This is crucial for Testcontainers, as it will provide these dynamic values.
  • initDb() ensures our items table exists.
  • initRedis() connects to Redis. We export app, pgPool, and redisClient to potentially allow internal testing, though our focus here is integration testing via HTTP.
  • The /items endpoints handle creation (POST) and retrieval (GET). The GET endpoint implements a simple read-through cache using Redis.
  • A /health endpoint is added for service readiness checks.
  • The startServer() call is guarded by process.env.NODE_ENV !== 'test' to prevent the server from starting automatically when Jest imports it during testing.

4. API Gateway Implementation (src/gateway/server.ts)

Our gateway will proxy requests to the backend service.

// src/gateway/server.ts
import express from 'express';
import bodyParser from 'body-parser';
import axios from 'axios';

const app = express();
const port = parseInt(process.env.PORT || '3000', 10);

const BACKEND_SERVICE_URL = process.env.BACKEND_SERVICE_URL || 'http://localhost:3001';

// --- Express Middleware ---
app.use(bodyParser.json());

// --- Proxy Endpoints ---

// Proxy POST /items to backend
app.post('/items', async (req, res) => {
  try {
    const response = await axios.post(`${BACKEND_SERVICE_URL}/items`, req.body);
    res.status(response.status).json(response.data);
  } catch (error: any) {
    console.error('Error proxying POST /items:', error.message);
    res.status(error.response?.status || 500).send(error.message);
  }
});

// Proxy GET /items to backend
app.get('/items', async (req, res) => {
  try {
    const response = await axios.get(`${BACKEND_SERVICE_URL}/items`);
    res.status(response.status).json(response.data);
  } catch (error: any) {
    console.error('Error proxying GET /items:', error.message);
    res.status(error.response?.status || 500).send(error.message);
  }
});

// Health check endpoint
app.get('/health', (req, res) => {
  res.status(200).send('Gateway Service is healthy');
});

// Start the server
app.listen(port, () => {
  console.log(`API Gateway listening on port ${port}, proxying to ${BACKEND_SERVICE_URL}`);
});

// Export app for testing (if needed for internal unit tests)
export { app };

Explanation:

  • The gateway also uses an environment variable (BACKEND_SERVICE_URL) to find the backend. Testcontainers will provide this.
  • It uses axios to make HTTP requests to the backend.
  • It exposes a /health endpoint for readiness checks.
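The catch blocks above follow one rule: prefer the upstream status, fall back to 500. Pulled out as a helper (illustrative only, not part of the project code), the logic looks like this:

```typescript
// Translate an axios-style error into the response the gateway should send.
// If the backend answered (e.g. a 400 validation failure), propagate its
// status; if the backend was unreachable, report a 500.
interface UpstreamError {
  message: string;
  response?: { status: number };
}

function proxyErrorToResponse(err: UpstreamError): { status: number; body: string } {
  return {
    status: err.response?.status ?? 500,
    body: err.message,
  };
}

console.log(proxyErrorToResponse({ message: 'Bad Request', response: { status: 400 } }).status); // 400
console.log(proxyErrorToResponse({ message: 'connect ECONNREFUSED' }).status);                   // 500
```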

5. Writing Integration Tests (tests/integration.test.ts)

Now for the main event! We’ll use Testcontainers to spin up all our services and then run tests against them.

Create a new directory tests and inside it, integration.test.ts.

// tests/integration.test.ts
import { GenericContainer, StartedTestContainer, Network, StartedNetwork, Wait } from 'testcontainers';
import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';
import { RedisContainer, StartedRedisContainer } from '@testcontainers/redis';
import axios, { AxiosInstance } from 'axios';
import path from 'path';

// Define our container instances
let postgres: StartedPostgreSqlContainer;
let redis: StartedRedisContainer;
let backendContainer: StartedTestContainer;
let gatewayContainer: StartedTestContainer;

// HTTP client for our tests
let httpClient: AxiosInstance;

// IMPORTANT: Define a custom network for inter-container communication
// This allows containers to resolve each other by their configured network aliases.
let testNetwork: StartedNetwork;

// The build context is the project root (the Dockerfiles COPY package*.json from it);
// the Dockerfile paths are relative to that context.
const buildContext = path.resolve(__dirname, '..');
const backendDockerfile = 'src/backend/Dockerfile';
const gatewayDockerfile = 'src/gateway/Dockerfile';

// Define network aliases for our services
const POSTGRES_ALIAS = 'postgres_db';
const REDIS_ALIAS = 'redis_cache';
const BACKEND_ALIAS = 'backend_service';

// --- Test Setup and Teardown ---
beforeAll(async () => {
  console.log('--- Starting Testcontainers setup ---');

  // 1. Create a custom Docker network
  testNetwork = await new Network().start();
  console.log(`Docker network '${testNetwork.getName()}' created.`);

  // 2. Start PostgreSQL container
  postgres = await new PostgreSqlContainer('postgres:16-alpine')
    .withNetwork(testNetwork)
    .withNetworkAliases(POSTGRES_ALIAS)
    .withDatabase('testdb')
    .withUsername('user')
    .withPassword('password')
    .start(); // PostgreSqlContainer exposes 5432 by default
  console.log(`PostgreSQL container started on port ${postgres.getFirstMappedPort()}`);

  // 3. Start Redis container
  redis = await new RedisContainer('redis:7-alpine')
    .withNetwork(testNetwork)
    .withNetworkAliases(REDIS_ALIAS)
    .start(); // RedisContainer exposes 6379 by default
  console.log(`Redis container started on port ${redis.getFirstMappedPort()}`);

  // 4. Start Backend Service container
  // build() resolves to a GenericContainer we can configure and start
  const backendImage = await GenericContainer
    .fromDockerfile(buildContext, backendDockerfile)
    .build();
  backendContainer = await backendImage
    .withNetwork(testNetwork)
    .withNetworkAliases(BACKEND_ALIAS)
    .withExposedPorts(3001) // Expose the backend's internal port to the host
    .withEnvironment({
      NODE_ENV: 'production', // Ensure our app starts
      PORT: '3001',
      PG_HOST: POSTGRES_ALIAS, // Use network alias for DB connection
      PG_PORT: '5432',
      PG_DATABASE: 'testdb',
      PG_USER: 'user',
      PG_PASSWORD: 'password',
      REDIS_URL: `redis://${REDIS_ALIAS}:6379`, // Use network alias for Redis connection
    })
    .withWaitStrategy(Wait.forHttp('/health', 3001)) // Wait until health check passes
    .start();
  console.log(`Backend service container started on port ${backendContainer.getFirstMappedPort()}`);

  // 5. Start API Gateway container
  const gatewayImage = await GenericContainer
    .fromDockerfile(buildContext, gatewayDockerfile)
    .build();
  gatewayContainer = await gatewayImage
    .withNetwork(testNetwork)
    .withExposedPorts(3000) // Expose the gateway's internal port to the host
    .withEnvironment({
      NODE_ENV: 'production', // Ensure our app starts
      PORT: '3000',
      BACKEND_SERVICE_URL: `http://${BACKEND_ALIAS}:3001`, // Gateway connects to backend via network alias
    })
    .withWaitStrategy(Wait.forHttp('/health', 3000)) // Wait until health check passes
    .start();
  console.log(`API Gateway container started on port ${gatewayContainer.getFirstMappedPort()}`);

  // Configure axios to talk to our gateway
  httpClient = axios.create({
    baseURL: `http://localhost:${gatewayContainer.getFirstMappedPort()}`,
    validateStatus: () => true, // Don't throw errors for non-2xx responses
  });

  console.log('--- Testcontainers setup complete ---');
}, 120000); // Generous timeout: the first run pulls base images and builds our services

afterAll(async () => {
  console.log('--- Tearing down Testcontainers ---');
  await gatewayContainer.stop();
  await backendContainer.stop();
  await redis.stop();
  await postgres.stop();
  await testNetwork.stop();
  console.log('--- Testcontainers torn down ---');
});

// --- Integration Tests ---
describe('API Gateway and Backend Integration', () => {

  // Test 1: Creating an item through the gateway and fetching it
  test('should create and retrieve an item via the API Gateway', async () => {
    const itemName = 'Test Item 1';
    const itemDescription = 'Description for test item 1.';

    // 1. Create an item via API Gateway
    const createResponse = await httpClient.post('/items', { name: itemName, description: itemDescription });
    expect(createResponse.status).toBe(201);
    expect(createResponse.data).toMatchObject({ name: itemName, description: itemDescription });

    // 2. Retrieve all items via API Gateway
    const getResponse = await httpClient.get('/items');
    expect(getResponse.status).toBe(200);
    expect(getResponse.data).toBeInstanceOf(Array);
    expect(getResponse.data).toHaveLength(1);
    expect(getResponse.data[0]).toMatchObject({ name: itemName, description: itemDescription });
  });

  // Test 2: Verify caching behavior
  test('should utilize Redis cache for subsequent reads', async () => {
    // Add another item to ensure cache invalidation on write
    await httpClient.post('/items', { name: 'Test Item 2' });

    // Proving a cache hit from outside is indirect: a rigorous test would
    // inspect the backend container's logs or query Redis directly. Here we
    // rely on the backend's read-through logic.

    // First read (should hit DB, then cache)
    await httpClient.get('/items');

    // Second read (should hit cache)
    const getResponseFromCache = await httpClient.get('/items');
    expect(getResponseFromCache.status).toBe(200);
    expect(getResponseFromCache.data).toBeInstanceOf(Array);
    expect(getResponseFromCache.data).toHaveLength(2); // Both items should be present

    // Note: To truly "prove" cache usage, you would either inspect backend logs
    //       or directly query the Redis container. For this high-level integration,
    //       we trust our backend logic, which clears cache on write and reads from it.
  });

  // Test 3: Error handling for invalid requests
  test('should return 400 for invalid item creation request', async () => {
    const invalidCreateResponse = await httpClient.post('/items', { description: 'No name provided' });
    expect(invalidCreateResponse.status).toBe(400);
    expect(invalidCreateResponse.data).toBe('Name is required.');
  });
});

Explanation (this is a big one, so let’s break it down):

  1. Imports: We bring in GenericContainer for our custom Node.js services, PostgreSqlContainer and RedisContainer for our databases, Network to manage inter-container communication, and Wait strategies. axios is our HTTP client.
  2. Container Variables: We declare variables to hold our StartedTestContainer instances.
  3. testNetwork: This is critical. By creating a custom Network (new Network().start()), all containers attached to this network can communicate with each other using their networkAliases as hostnames. This means backend_service can connect to postgres_db at postgres_db:5432 without knowing its dynamically assigned host port.
  4. beforeAll Hook:
    • This is where all our Testcontainers magic happens once before all tests run.
    • Network Creation: testNetwork = await new Network().start(); creates the shared network.
    • PostgreSQL: new PostgreSqlContainer() starts a PostgreSQL instance.
      • .withNetwork(testNetwork): Attaches it to our shared network.
      • .withNetworkAliases(POSTGRES_ALIAS): Gives it the hostname postgres_db within that network.
      • withDatabase, withUsername, withPassword: Standard database configuration.
    • Redis: new RedisContainer() starts a Redis instance, similarly configured with the network and an alias redis_cache.
    • Backend Service: This is where GenericContainer shines.
      • await GenericContainer.fromDockerfile(...).build(): This points Testcontainers at a build context (here, the project root, since the Dockerfiles copy package*.json from it) and a Dockerfile within that context, builds the image, and resolves to a container we can configure and start. This is a powerful feature for testing custom applications!
      • .withExposedPorts(3001): Exposes the backend’s internal port 3001 to a random port on the host machine.
      • .withEnvironment({...}): This is how we configure our backend application to connect to the other containers. Notice PG_HOST: POSTGRES_ALIAS and REDIS_URL: 'redis://${REDIS_ALIAS}:6379'. Our backend service, running inside its container, uses these network aliases to connect to PostgreSQL and Redis within the same Docker network.
      • .withWaitStrategy(Wait.forHttp('/health', 3001)): We wait for the backend’s /health endpoint to return a 200 OK before considering the container ready. This is crucial for reliable tests.
    • API Gateway: Again, GenericContainer builds an image from our gateway Dockerfile.
      • .withEnvironment({ BACKEND_SERVICE_URL: `http://${BACKEND_ALIAS}:3001` }): The gateway connects to the backend using its network alias.
      • .withWaitStrategy(Wait.forHttp('/health', 3000)): Waits for the gateway’s health check.
    • httpClient Setup: Once the gateway is up, we create an axios instance configured to send requests to its dynamically assigned host port.
  5. afterAll Hook: This ensures all containers and the network are properly stopped and cleaned up after all tests have finished, preventing resource leaks.
  6. Integration Tests:
    • Test 1 (Create and Retrieve): We send a POST request to the gateway, then a GET request, asserting on the HTTP status codes and the data returned. This verifies the full path: Gateway -> Backend -> PostgreSQL -> Backend -> Gateway.
    • Test 2 (Caching): We add another item (which invalidates the cache), then perform two GET requests. The goal is to demonstrate that the backend is configured to use Redis. While we don’t directly inspect Redis here, the setup allows for it.
    • Test 3 (Error Handling): We test an invalid request to ensure proper error responses are returned, verifying that the validation in the backend is working and propagated through the gateway.

This entire setup provides a robust, isolated, and realistic testing environment for your multi-service application!

Running the Tests

To run these tests, simply execute:

npm test

You’ll see Docker pulling images (if not cached), building your custom service images, starting containers, and then executing your tests. If everything is set up correctly, all tests should pass!

Mini-Challenge: Add a Specific Item Endpoint

Now it’s your turn to extend our project!

Challenge:

  1. Backend Enhancement: Add a new GET endpoint to the backend service: /items/:id. This endpoint should fetch a single item by its ID from the PostgreSQL database. It should also attempt to fetch from Redis first (using a specific item cache key, e.g., item_<id>), and then cache the result after fetching from the DB.
  2. API Gateway Enhancement: Proxy this new /items/:id endpoint from the API Gateway to the backend.
  3. Test Implementation: Write a new integration test (test('should retrieve a specific item by ID via the API Gateway', ...)) that:
    • Creates an item (or two) first.
    • Retrieves one of them using the new /items/:id endpoint via the API Gateway.
    • Asserts that the correct item is returned.
    • (Bonus) If you’re feeling adventurous, try to write a test that hints at the caching mechanism for a single item.

Hint:

  • Remember to restart or rebuild your backend and gateway containers if you modify their code or Dockerfiles. Testcontainers will rebuild the image if the Dockerfile changes.
  • For the Redis cache key, consider item_${id}.

What to observe/learn: This challenge reinforces how to extend services and how to ensure your Testcontainers setup can handle new endpoints and database/cache interactions without additional complex setup. You’ll gain confidence in modifying your application stack and rapidly testing changes.
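As a starting point for the challenge (names here are suggestions, not a reference solution), the cache-key scheme and the set of keys a write must clear can be sketched as:

```typescript
// Per-item cache key, matching the hint's item_<id> convention.
function itemCacheKey(id: number): string {
  return `item_${id}`;
}

// A write to item N should invalidate both the list cache and that item's own
// entry, or GET /items and GET /items/:id can serve inconsistent data.
function keysToInvalidate(id: number): string[] {
  return ['all_items_cache', itemCacheKey(id)];
}

console.log(keysToInvalidate(42)); // [ 'all_items_cache', 'item_42' ]
```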

Common Pitfalls & Troubleshooting

Even with Testcontainers’ help, multi-service testing can have its quirks. Here are a few common issues and how to tackle them:

  1. Container Networking Issues (connection refused):

    • Symptom: Your custom service container cannot connect to PostgreSQL, Redis, or the backend service. You’ll see connection refused or EHOSTUNREACH errors in your container logs.
    • Cause: Incorrect PG_HOST, REDIS_URL, or BACKEND_SERVICE_URL environment variables, or missing withNetworkAliases.
    • Fix:
      • Verify withNetwork(testNetwork) is used for ALL containers meant to communicate.
      • Ensure withNetworkAliases() is set correctly for all services (e.g., POSTGRES_ALIAS, REDIS_ALIAS, BACKEND_ALIAS).
      • Double-check that the environment variables passed to your custom services use these network aliases (e.g., PG_HOST: 'postgres_db'). Remember these aliases are hostnames within the Docker network.
      • Ensure correct ports are specified (e.g., 5432 for PostgreSQL, 6379 for Redis).
  2. Container Startup Order / Readiness Issues:

    • Symptom: Tests fail because a service isn’t fully ready even though its container has started. For example, the backend tries to connect to PostgreSQL before PostgreSQL is ready to accept connections.
    • Cause: Insufficient or incorrect withWaitStrategy.
    • Fix:
      • Always use appropriate withWaitStrategy for your containers. For databases, Wait.forLogMessage() or the default DB-specific waits are usually good. For custom HTTP services, Wait.forHttp('/health', <port>) is excellent.
      • Ensure the health check endpoint (/health in our example) in your custom service actually indicates readiness (e.g., checks DB connection, Redis connection).
      • If services have deep dependencies (e.g., Gateway waits for Backend, Backend waits for DB), ensure the beforeAll block starts containers in the correct order or that WaitStrategy is robust enough.
  3. Slow Test Runs:

    • Symptom: Your test suite takes a very long time to complete, especially the beforeAll phase.
    • Cause: Image pulling, redundant image builds, or excessive container restarts.
    • Fix:
      • Image Caching: Testcontainers caches downloaded Docker images. Ensure you’re using specific image tags (e.g., postgres:16-alpine) instead of latest which can lead to frequent re-pulls.
      • Custom Image Builds: GenericContainer.fromDockerfile(...).build() rebuilds the image whenever the Dockerfile or its build context changes, and changing src/backend/server.ts changes the context. Docker's layer cache keeps this fast as long as the Dockerfile installs dependencies before copying the source (as ours does), and a .dockerignore keeps the context small. For our simple case, rebuilds are usually quick.
      • Test Scope: beforeAll runs once for all tests in a file. If you move start() and stop() calls to beforeEach and afterEach, containers will restart for every test, significantly slowing things down. Use beforeAll/afterAll for shared setup.
      • Resource Limits: Ensure your Docker daemon has enough CPU and memory allocated.
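For the readiness problems above, stricter wait strategies are usually the fix. A hedged sketch follows; verify the exact names against the testcontainers version you use:

```typescript
// Readiness sketches for the troubleshooting cases above (verify against
// your testcontainers version's docs).
import { Wait } from 'testcontainers';

// Databases: waiting for the "ready" log line is more reliable than assuming
// the container is usable the moment it starts.
const pgWait = Wait.forLogMessage(/database system is ready to accept connections/);

// Custom HTTP services: wait on the health endpoint, as in our setup.
const backendWait = Wait.forHttp('/health', 3001);
```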

Summary

Phew! You’ve just built and tested a multi-service application stack using Node.js, TypeScript, PostgreSQL, Redis, an API Gateway, and a backend service, all orchestrated by Testcontainers. This is a significant milestone!

Here are the key takeaways from this chapter:

  • Real-world Integration Testing: Testcontainers excels at providing realistic, isolated environments for complex service interactions, far beyond simple unit tests or mocks.
  • Custom Service Containerization: You learned how to define Dockerfiles for your Node.js applications and use GenericContainer.fromDockerfile().build() to run them in Testcontainers.
  • Inter-Container Communication: Custom Docker Networks and networkAliases are crucial for allowing services to discover and communicate with each other using stable hostnames.
  • Robust Setup and Teardown: The beforeAll hook is your friend for setting up the entire stack once, and afterAll ensures a clean slate.
  • Readiness Probes: withWaitStrategy is essential to ensure your services are fully operational before tests begin, preventing flaky results.
  • Environment Variables: Using environment variables to configure your applications makes them flexible enough to run both locally and within Testcontainers.

By mastering this project, you’ve gained invaluable skills in setting up robust, end-to-end integration tests for modern microservices architectures. This approach ensures high confidence in your application’s behavior before it even reaches production.

What’s next? In the following chapters, we might explore even more advanced scenarios, such as integrating Testcontainers into CI/CD pipelines, optimizing performance for large test suites, or testing event-driven architectures. The foundation you’ve built here will serve you well for any future testing challenge!

