Welcome to Chapter 4 of our Node.js backend project series! So far, we’ve established a robust project structure, set up our Fastify server, and implemented essential middleware for request handling and error management. While our API can process requests, it currently lacks the ability to store and retrieve data persistently. This severely limits its utility, as any information processed is lost once the server restarts.

In this chapter, we will bridge this gap by integrating a powerful relational database: PostgreSQL. PostgreSQL is an open-source object-relational database system known for its reliability, feature robustness, and performance. To interact with PostgreSQL efficiently from our Node.js application, we will use TypeORM, a modern Object-Relational Mapper (ORM). TypeORM provides a clean, object-oriented way to work with databases, offering features like entity management, query builders, and crucially, database migrations. Migrations allow us to evolve our database schema in a controlled, versioned manner, which is essential for collaborative development and seamless deployments.

By the end of this chapter, you will have a fully functional PostgreSQL database running locally via Docker, connected to our Node.js application through TypeORM. We will define our first data model (a User entity), set up a robust migration system, and implement a basic API endpoint to demonstrate data persistence. This foundation is critical for building any real-world application, enabling us to store user data, application state, and more.

Planning & Design

Before diving into the code, let’s outline the architecture and design decisions for integrating PostgreSQL.

Component Architecture

Our application will interact with the PostgreSQL database through TypeORM. The Fastify application will initiate the TypeORM data source, which then manages the connection pool and all database operations.

flowchart TD
  Client[Client Application] -->|HTTP Request| FastifyApp[Fastify Application]
  FastifyApp -->|TypeORM API Calls| TypeORM[TypeORM ORM]
  TypeORM -->|SQL Queries| PostgreSQLDB[(PostgreSQL Database)]

Database Schema (Initial)

We’ll start with a simple User entity to demonstrate data persistence. This entity will represent a user in our system.

User Table Schema:

  • id: Primary key, auto-generated UUID.
  • username: Unique string, not null.
  • email: Unique string, not null.
  • passwordHash: String, not null (we’ll store hashed passwords for security).
  • createdAt: Timestamp, auto-generated on creation.
  • updatedAt: Timestamp, auto-updated on modification.
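For orientation, this schema corresponds to DDL roughly like the following (a sketch only; the authoritative SQL is whatever the migration we generate in section 3.7 emits, and TypeORM quotes the camelCase identifiers as shown):

```sql
CREATE TABLE "users" (
  "id" uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
  "username" varchar(255) UNIQUE NOT NULL,
  "email" varchar(255) UNIQUE NOT NULL,
  "passwordHash" varchar(255) NOT NULL,
  "createdAt" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP,
  "updatedAt" timestamptz NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```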

File Structure

We’ll introduce new directories and files to manage our database integration:

.
├── src/
│   ├── config/
│   │   └── database.ts             # TypeORM data source configuration
│   ├── database/
│   │   ├── entities/
│   │   │   └── User.ts             # TypeORM User entity definition
│   │   └── migrations/             # Directory for database migration files
│   ├── modules/
│   │   └── user/
│   │       ├── user.controller.ts  # User-related API logic
│   │       ├── user.routes.ts      # User-related API routes
│   │       └── user.service.ts     # User-related business logic
│   └── app.ts                      # Main application entry point (updated for DB connection)
├── docker-compose.yml              # Docker Compose configuration for local services (PostgreSQL)
├── ormconfig.ts                    # TypeORM CLI configuration
└── .env                            # Environment variables (including database credentials)

Step-by-Step Implementation

Let’s start building our data persistence layer.

3.1. Setup Local PostgreSQL with Docker Compose

To ensure a consistent development environment and avoid local installation complexities, we’ll run PostgreSQL in a Docker container.

a) Setup/Configuration

Create a docker-compose.yml file at the root of your project:

# docker-compose.yml
version: '3.8'

services:
  db:
    image: postgres:16-alpine
    container_name: myapp-postgres
    restart: always
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  db_data:

Explanation:

  • version: '3.8': Specifies the Docker Compose file format version.
  • services: Defines the containers.
  • db: Our PostgreSQL service.
  • image: postgres:16-alpine: Uses the official PostgreSQL image, version 16, with the lightweight Alpine Linux base.
  • container_name: myapp-postgres: Assigns a readable name to the container.
  • restart: always: Ensures the container restarts automatically if it stops.
  • environment: Sets environment variables inside the container. We’re using placeholders (${...}) which Docker Compose will resolve from your .env file. These are crucial for configuring the PostgreSQL database (name, user, password).
  • ports: - "5432:5432": Maps port 5432 (default PostgreSQL port) from the container to port 5432 on your host machine, allowing your Node.js app to connect.
  • volumes: - db_data:/var/lib/postgresql/data: Persists the database data to a named Docker volume (db_data). This ensures your data isn’t lost if the container is removed or recreated.
  • healthcheck: Defines how Docker should check if the database is ready. It runs pg_isready inside the container, which is a PostgreSQL utility to check connection status.
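The healthcheck also pays off if you later run the Node.js app itself as a Compose service: Compose can delay starting the app until the database reports healthy. A hypothetical fragment (the app service is not part of this chapter):

```yaml
  app:
    build: .
    depends_on:
      db:
        condition: service_healthy
```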

b) Core Implementation

Now, let’s start the PostgreSQL container. Open your terminal in the project root and run:

docker compose up -d

Explanation:

  • docker compose up: Starts the services defined in docker-compose.yml.
  • -d: Runs the containers in detached mode (in the background).

c) Testing This Component

To verify that your PostgreSQL container is running:

docker ps

You should see an entry similar to this:

CONTAINER ID     IMAGE                COMMAND                  CREATED              STATUS                        PORTS                    NAMES
<container-id>   postgres:16-alpine   "docker-entrypoint.s…"   About a minute ago   Up About a minute (healthy)   0.0.0.0:5432->5432/tcp   myapp-postgres

The STATUS should indicate (healthy) after a few seconds, meaning PostgreSQL is ready to accept connections.

3.2. Install Database Dependencies

We need to install the pg driver for PostgreSQL and typeorm for our ORM. We’ll also use dotenv to manage environment variables.

a) Setup/Configuration

Run the following command in your terminal:

npm install pg typeorm dotenv
npm install --save-dev @types/pg

Explanation:

  • pg: The official Node.js client for PostgreSQL. TypeORM uses this under the hood.
  • typeorm: The main TypeORM library.
  • dotenv: A zero-dependency module that loads environment variables from a .env file into process.env.
  • @types/pg: TypeScript type definitions for the pg library, essential for type safety.

3.3. Configure Environment Variables

We’ll store our database credentials and connection details in a .env file. This keeps sensitive information out of our codebase and allows easy configuration changes between environments (development, testing, production).

a) Setup/Configuration

Create or update your .env file at the root of your project:

# .env

# ... other existing environment variables ...

# Database Configuration (PostgreSQL)
DB_HOST=localhost
DB_PORT=5432
DB_USER=myuser
DB_PASSWORD=mypassword
DB_NAME=mydatabase

Explanation:

  • DB_HOST: The hostname of your database server. Because our docker-compose.yml publishes the container’s port 5432 to the host, the application (running on the host machine) connects via localhost.
  • DB_PORT: The port PostgreSQL is listening on.
  • DB_USER: The username for connecting to the database.
  • DB_PASSWORD: The password for the database user.
  • DB_NAME: The name of the database to connect to.

Security Consideration: In production, these values should never be hardcoded or stored in a .env file checked into version control, so make sure .env is listed in your .gitignore. We’ll cover secure secrets management in later chapters (e.g., AWS Secrets Manager, Vault). For local development, .env is appropriate.
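Section 3.4 will inline a fail-fast check for these variables; the same idea can be expressed as a small helper. A minimal sketch (requireEnv and the plain env object are illustrative, not part of the chapter's code):

```typescript
// Hypothetical helper: return a required variable or fail fast with a clear message.
function requireEnv(env: Record<string, string | undefined>, name: string): string {
  const value = env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Illustration with a stand-in for process.env:
const fakeEnv = { DB_HOST: 'localhost', DB_PORT: '5432' };
console.log(parseInt(requireEnv(fakeEnv, 'DB_PORT'), 10)); // 5432
```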

3.4. Create TypeORM Data Source Configuration

This file will define how TypeORM connects to our PostgreSQL database.

a) Setup/Configuration

Create a new file src/config/database.ts:

// src/config/database.ts
import 'dotenv/config'; // Load environment variables from .env
import { DataSource } from 'typeorm';
import { User } from '../database/entities/User'; // Will be created next

// Ensure all required environment variables are present
const { DB_HOST, DB_PORT, DB_USER, DB_PASSWORD, DB_NAME } = process.env;

if (!DB_HOST || !DB_PORT || !DB_USER || !DB_PASSWORD || !DB_NAME) {
  console.error('Missing one or more database environment variables. Please check your .env file.');
  process.exit(1); // Exit the process if critical env vars are missing
}

export const AppDataSource = new DataSource({
  type: 'postgres',
  host: DB_HOST,
  port: parseInt(DB_PORT, 10), // Ensure port is a number
  username: DB_USER,
  password: DB_PASSWORD,
  database: DB_NAME,
  synchronize: false, // NEVER use synchronize: true in production!
  logging: ['query', 'error'], // Log SQL queries and errors
  entities: [User], // List of entities (data models)
  migrations: [__dirname + '/../database/migrations/*.ts'], // Path to migration files
  subscribers: [],
});

/**
 * Initializes the database connection.
 * @returns {Promise<void>} A promise that resolves when the connection is established.
 */
export const initializeDatabase = async (): Promise<void> => {
  try {
    await AppDataSource.initialize();
    console.log('Database connection initialized successfully!');
  } catch (error) {
    console.error('Error during database initialization:', error);
    process.exit(1); // Exit if DB connection fails
  }
};

Explanation:

  • import 'dotenv/config';: This line loads environment variables from your .env file into process.env. It must be at the top.
  • DataSource: The core TypeORM class for managing database connections.
  • Environment Variable Validation: We add a check to ensure all necessary DB environment variables are present. If not, the application logs an error and exits, preventing runtime issues.
  • type: 'postgres': Specifies PostgreSQL as the database type.
  • host, port, username, password, database: These are populated directly from our process.env variables. parseInt(DB_PORT, 10) converts the string port to a number.
  • synchronize: false: CRITICAL FOR PRODUCTION. When true, TypeORM automatically creates database tables and columns based on your entities. This is convenient for rapid development but highly dangerous in production as it can lead to data loss or unexpected schema changes. We will use migrations for schema management.
  • logging: ['query', 'error']: Configures TypeORM to log executed SQL queries and any errors, which is invaluable for debugging.
  • entities: [User]: An array specifying all TypeORM entities (models) for our application. We’ll create User next.
  • migrations: [...]: Defines the path where TypeORM should look for migration files.
  • initializeDatabase: An async function that attempts to establish the database connection. It includes robust error handling, logging success or failure, and exiting the process on failure to prevent the application from running without a database.
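One refinement worth considering: in containerized environments the app can start before PostgreSQL is ready, and exiting immediately forces a manual restart. A generic retry loop is a common alternative; the sketch below is illustrative (withRetry is a hypothetical helper, not part of the chapter's code), and you could wrap AppDataSource.initialize() with it:

```typescript
// Hypothetical retry helper: call an async function up to `attempts` times,
// waiting `delayMs` between failures, and rethrow the last error if all fail.
async function withRetry<T>(fn: () => Promise<T>, attempts: number, delayMs: number): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}

// Demonstration with a function that fails twice, then succeeds:
let calls = 0;
withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error('not ready yet');
  return 'connected';
}, 5, 10).then((result) => console.log(result)); // connected
```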

3.5. Define the User Entity

Now, let’s create our first TypeORM entity, representing the User table.

a) Setup/Configuration

Create a new directory src/database/entities/ and inside it, create User.ts:

// src/database/entities/User.ts
import { Entity, PrimaryGeneratedColumn, Column, CreateDateColumn, UpdateDateColumn } from 'typeorm';

@Entity('users') // Specifies that this class is a TypeORM entity and maps it to the 'users' table
export class User {
  @PrimaryGeneratedColumn('uuid') // Auto-generates a UUID for the primary key
  id!: string; // '!' asserts that id will be initialized by TypeORM

  @Column({ type: 'varchar', length: 255, unique: true, nullable: false })
  username!: string;

  @Column({ type: 'varchar', length: 255, unique: true, nullable: false })
  email!: string;

  @Column({ type: 'varchar', length: 255, nullable: false, select: false }) // select: false prevents it from being returned by default
  passwordHash!: string;

  @CreateDateColumn({ type: 'timestamp with time zone', default: () => 'CURRENT_TIMESTAMP' })
  createdAt!: Date;

  @UpdateDateColumn({ type: 'timestamp with time zone', default: () => 'CURRENT_TIMESTAMP', onUpdate: 'CURRENT_TIMESTAMP' })
  updatedAt!: Date;
}

Explanation:

  • @Entity('users'): Decorator marking User as an entity and specifying its table name as users.
  • @PrimaryGeneratedColumn('uuid'): Defines id as the primary key, automatically generating a UUID value for new records.
  • @Column(...): Decorator for defining regular columns.
    • type: Specifies the database column type (e.g., varchar, timestamp with time zone).
    • length: For varchar, specifies maximum length.
    • unique: true: Enforces uniqueness for username and email.
    • nullable: false: Makes the column required.
    • select: false: For passwordHash, this is a security best practice. It means that by default, when you query for a User, the passwordHash will not be included in the results. You’d have to explicitly select it if needed.
  • @CreateDateColumn: Automatically sets the createdAt timestamp when a new record is created.
  • @UpdateDateColumn: Automatically refreshes the updatedAt timestamp whenever a record is saved through TypeORM. Note that the onUpdate: 'CURRENT_TIMESTAMP' option is a MySQL-level feature; PostgreSQL has no ON UPDATE clause for columns, so for our database this column is maintained by TypeORM itself on each save.
  • !: The non-null assertion operator is used here because TypeORM initializes these properties, even though TypeScript might think they are not initialized in the constructor.

3.6. Configure TypeORM CLI for Migrations

TypeORM provides a powerful CLI tool for generating and running migrations. We need to tell the CLI where to find our DataSource configuration and our entities/migrations.

a) Setup/Configuration

Create a file ormconfig.ts at the root of your project:

// ormconfig.ts
import 'dotenv/config'; // Load environment variables
import { DataSource } from 'typeorm';
import { User } from './src/database/entities/User';

const { DB_HOST, DB_PORT, DB_USER, DB_PASSWORD, DB_NAME } = process.env;

if (!DB_HOST || !DB_PORT || !DB_USER || !DB_PASSWORD || !DB_NAME) {
  console.error('Missing one or more database environment variables. Please check your .env file.');
  process.exit(1);
}

const ormConfig = new DataSource({
  type: 'postgres',
  host: DB_HOST,
  port: parseInt(DB_PORT, 10),
  username: DB_USER,
  password: DB_PASSWORD,
  database: DB_NAME,
  synchronize: false,
  logging: ['query', 'error'],
  entities: [User],
  migrations: [__dirname + '/src/database/migrations/*.ts'],
  subscribers: [],
});

export default ormConfig;

Explanation:

  • This ormconfig.ts is very similar to src/config/database.ts because the TypeORM CLI needs its own DataSource instance to perform operations like generating or running migrations.
  • It explicitly points to our User entity and the migrations directory.
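Because the two files can silently drift apart, one option is to factor the shared settings into a single plain object that both DataSource constructors spread in. A sketch of the idea (the module and variable names are illustrative, not part of the chapter's code):

```typescript
// Hypothetical shared options object (e.g. in src/config/ormOptions.ts)
const sharedDbOptions = {
  type: 'postgres' as const,
  synchronize: false,
  logging: ['query', 'error'],
};

// src/config/database.ts: new DataSource({ ...sharedDbOptions, host, port, ... })
// ormconfig.ts:           export default new DataSource({ ...sharedDbOptions, host, port, ... })
console.log(Object.keys(sharedDbOptions).join(',')); // type,synchronize,logging
```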

Next, add scripts to your package.json to easily run TypeORM CLI commands.

b) Core Implementation

Open your package.json and add the following scripts under the scripts section:

// package.json
{
  "name": "my-fastify-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node dist/app.js",
    "dev": "ts-node-dev --respawn --transpile-only src/app.ts",
    "build": "tsc",
    "lint": "eslint . --ext .ts",
    "lint:fix": "eslint . --ext .ts --fix",
    "prettier": "prettier --check \"src/**/*.ts\"",
    "prettier:fix": "prettier --write \"src/**/*.ts\"",
    "test": "jest",
    "test:watch": "jest --watch",
    "typeorm": "typeorm-ts-node-commonjs -d ./ormconfig.ts",
    "typeorm:migration:generate": "npm run typeorm migration:generate",
    "typeorm:migration:create": "npm run typeorm migration:create",
    "typeorm:migration:run": "npm run typeorm migration:run",
    "typeorm:migration:revert": "npm run typeorm migration:revert"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "bcrypt": "^5.1.1",
    "dotenv": "^16.4.5",
    "fastify": "^4.26.1",
    "fastify-plugin": "^4.5.1",
    "fastify-swagger": "^5.2.0",
    "fastify-swagger-ui": "^4.0.0",
    "helmet": "^7.1.0",
    "http-status-codes": "^2.3.0",
    "pino": "^8.19.0",
    "pg": "^8.11.3",
    "typeorm": "^0.3.20",
    "zod": "^3.22.4"
  },
  "devDependencies": {
    "@types/bcrypt": "^5.0.2",
    "@types/jest": "^29.5.12",
    "@types/node": "^20.11.20",
    "@types/pg": "^8.11.0",
    "@types/supertest": "^6.0.2",
    "@typescript-eslint/eslint-plugin": "^7.0.2",
    "@typescript-eslint/parser": "^7.0.2",
    "eslint": "^8.57.0",
    "jest": "^29.7.0",
    "prettier": "^3.2.5",
    "supertest": "^6.3.4",
    "ts-jest": "^29.1.2",
    "ts-node": "^10.9.2",
    "ts-node-dev": "^2.0.0",
    "typescript": "^5.3.3"
  }
}

Explanation:

  • "typeorm": "typeorm-ts-node-commonjs -d ./ormconfig.ts": This is the base command for running the TypeORM CLI. typeorm-ts-node-commonjs is one of the binaries shipped with the typeorm package itself (no extra install needed); it executes CLI commands directly against TypeScript files via ts-node. The -d flag specifies the path to our DataSource configuration file.
  • "typeorm:migration:generate": Generates a new migration file based on the differences between your entities and the current database schema.
  • "typeorm:migration:create": Creates an empty migration file (useful for manual schema changes).
  • "typeorm:migration:run": Executes all pending migrations.
  • "typeorm:migration:revert": Reverts the last applied migration (use with caution in production!).

3.7. Generate and Run First Migration

Now that TypeORM is configured, let’s create our users table using a migration.

a) Core Implementation

First, generate the migration:

npm run typeorm:migration:generate src/database/migrations/InitialUserTable

Explanation:

  • This command will create a new file in src/database/migrations/ with a timestamp prefix (e.g., 1704672000000-InitialUserTable.ts).
  • TypeORM automatically inspects your User entity and generates the SQL CREATE TABLE statements in the up method and DROP TABLE statements in the down method.

Open the generated migration file (e.g., src/database/migrations/1704672000000-InitialUserTable.ts). It should look similar to this:

// src/database/migrations/1704672000000-InitialUserTable.ts
import { MigrationInterface, QueryRunner, Table } from 'typeorm';

export class InitialUserTable1704672000000 implements MigrationInterface {
  name = 'InitialUserTable1704672000000'; // Unique name for the migration

  public async up(queryRunner: QueryRunner): Promise<void> {
    // uuid_generate_v4() is provided by the uuid-ossp extension, so enable it
    // before creating a table whose default value depends on it
    await queryRunner.query(`CREATE EXTENSION IF NOT EXISTS "uuid-ossp"`);

    await queryRunner.createTable(
      new Table({
        name: 'users',
        columns: [
          {
            name: 'id',
            type: 'uuid',
            isPrimary: true,
            generationStrategy: 'uuid',
            default: 'uuid_generate_v4()', // PostgreSQL-specific UUID generation (from uuid-ossp)
          },
          {
            name: 'username',
            type: 'varchar',
            length: '255',
            isUnique: true,
            isNullable: false,
          },
          {
            name: 'email',
            type: 'varchar',
            length: '255',
            isUnique: true,
            isNullable: false,
          },
          {
            name: 'passwordHash',
            type: 'varchar',
            length: '255',
            isNullable: false,
          },
          {
            name: 'createdAt',
            type: 'timestamp with time zone',
            default: 'CURRENT_TIMESTAMP',
            isNullable: false,
          },
          {
            name: 'updatedAt',
            type: 'timestamp with time zone',
            default: 'CURRENT_TIMESTAMP',
            onUpdate: 'CURRENT_TIMESTAMP',
            isNullable: false,
          },
        ],
      }),
      true, // If set to `true`, the table will be created only if it does not exist.
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.dropTable('users');
    await queryRunner.query(`DROP EXTENSION IF EXISTS "uuid-ossp"`);
  }
}

Note on uuid_generate_v4() and uuid-ossp: TypeORM might generate uuid_generate_v4() as the default for UUID columns. This function is part of the uuid-ossp extension in PostgreSQL. If you encounter errors about uuid_generate_v4() not existing, you need to enable this extension. I’ve added await queryRunner.query('CREATE EXTENSION IF NOT EXISTS "uuid-ossp"'); to the up method to handle this automatically.
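As an aside, PostgreSQL 13 and later ship gen_random_uuid() built in, so no extension is required if you use that function as the default instead. A hypothetical alternative (not what the generated migration uses):

```sql
ALTER TABLE "users" ALTER COLUMN "id" SET DEFAULT gen_random_uuid();
```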

Now, run the migration to create the table in your database:

npm run typeorm:migration:run

You should see output indicating the migration ran successfully.

b) Testing This Component

To verify the tables were created:

  1. Connect to your PostgreSQL database: You can use a GUI tool like DBeaver, TablePlus, or DataGrip.

    • Host: localhost
    • Port: 5432
    • User: myuser
    • Password: mypassword
    • Database: mydatabase

    Alternatively, from your terminal:

    docker exec -it myapp-postgres psql -U myuser -d mydatabase
    
  2. List tables: Inside the psql prompt (or your GUI tool), run:

    \dt
    

    You should see public.users and public.migrations tables.

  3. Describe the users table:

    \d users
    

    Verify the columns (id, username, email, passwordHash, createdAt, updatedAt) match our User entity.

  4. Check the migrations table:

    SELECT * FROM migrations;
    

    This table tracks which migrations have been applied. You should see an entry for InitialUserTable.

3.8. Integrate Database Connection into Application

Now, let’s make sure our Fastify application connects to the database when it starts.

a) Core Implementation

Modify your main application entry point, src/app.ts, to call initializeDatabase before starting the Fastify server.

// src/app.ts
import Fastify from 'fastify';
import { AppDataSource, initializeDatabase } from './config/database'; // Import AppDataSource and initializeDatabase
import { registerPlugins } from './plugins';
import { registerRoutes } from './routes';
import { setupErrorHandling } from './utils/errorHandler';
import { logger } from './utils/logger';
import { StatusCodes } from 'http-status-codes';

const app = Fastify({
  logger: false, // Using pino for custom logging
});

const start = async () => {
  try {
    // 1. Initialize Database Connection
    await initializeDatabase();

    // 2. Register Plugins (e.g., Helmet, Swagger, etc.)
    await registerPlugins(app);

    // 3. Register Routes
    registerRoutes(app);

    // 4. Setup Global Error Handling
    setupErrorHandling(app);

    // 5. Add a basic health check endpoint for the application and database
    app.get('/health', async (request, reply) => {
      try {
        // Check database connection status
        await AppDataSource.query('SELECT 1'); // Simple query to check DB connection
        reply.status(StatusCodes.OK).send({ status: 'ok', database: 'connected' });
      } catch (dbError) {
        logger.error('Database health check failed:', dbError);
        reply.status(StatusCodes.INTERNAL_SERVER_ERROR).send({ status: 'error', database: 'disconnected' });
      }
    });

    // Start the Fastify server
    const port = parseInt(process.env.PORT || '3000', 10);
    const host = process.env.HOST || '0.0.0.0';
    await app.listen({ port, host });

    logger.info(`Server listening on ${host}:${port}`);
    logger.info(`For API documentation, visit http://${host}:${port}/documentation`);
  } catch (err) {
    logger.error('Application failed to start:', err);
    process.exit(1);
  }
};

start();

Explanation:

  • import { AppDataSource, initializeDatabase } from './config/database';: We import our database configuration.
  • await initializeDatabase();: This line is crucial. It calls our function to connect to PostgreSQL before the Fastify server starts listening for requests. This ensures our application can’t serve requests without a working database connection.
  • Health Check Endpoint (/health): We’ve added a simple GET /health endpoint. This is a best practice for monitoring applications. It checks if the Fastify server is running and also performs a simple query (SELECT 1) to ensure the database connection is active. This is invaluable for load balancers and container orchestration platforms (like Kubernetes or ECS) to determine if your application instance is healthy.
  • Error handling for database connection failure is included, causing the process to exit if the database can’t be reached, preventing a “half-dead” application.

b) Testing This Component

  1. Start your Fastify application:

    npm run dev
    

    You should see Database connection initialized successfully! in your console, followed by the server listening message.

  2. Test the health check endpoint: Open your browser or use curl:

    curl http://localhost:3000/health
    

    You should receive a response: {"status":"ok","database":"connected"}

    If you stop your PostgreSQL container (docker compose stop db) and then restart your Node.js app, the initializeDatabase() call will fail, and the application will exit with an error, demonstrating the robust error handling.

3.9. Implement a Basic User Service (CRUD - Create only for now)

Now that our database is connected, let’s create a basic API endpoint to create users. This will demonstrate how to interact with TypeORM to persist data.

a) Setup/Configuration

Create a directory src/modules/user and inside it, create user.service.ts, user.controller.ts, and user.routes.ts.

First, let’s update src/routes.ts to include user routes:

// src/routes.ts
import { FastifyInstance } from 'fastify';
// import { healthRoutes } from './modules/health/health.routes'; // Example from previous chapter (if exists)
import { userRoutes } from './modules/user/user.routes'; // Import user routes

export function registerRoutes(app: FastifyInstance) {
  // Register health routes (if you have them)
  // app.register(healthRoutes, { prefix: '/api/v1/health' });

  // Register user routes
  app.register(userRoutes, { prefix: '/api/v1/users' });

  // Add more routes here as your application grows
}

Next, let’s define the UserService which will encapsulate our database operations for users.

b) Core Implementation

Create src/modules/user/user.service.ts:

// src/modules/user/user.service.ts
import { AppDataSource } from '../../config/database';
import { User } from '../../database/entities/User';
import { logger } from '../../utils/logger';
import bcrypt from 'bcrypt';

// Define an interface for the data required to create a user
interface CreateUserPayload {
  username: string;
  email: string;
  passwordPlainText: string; // Plain text password for hashing
}

export class UserService {
  private userRepository = AppDataSource.getRepository(User);
  private readonly SALT_ROUNDS = 10; // For bcrypt hashing

  /**
   * Creates a new user in the database.
   * @param payload - User data including username, email, and plain text password.
   * @returns {Promise<Omit<User, 'passwordHash'>>} The newly created user entity, with the password hash stripped.
   * @throws {Error} If a user with the given username or email already exists, or other database errors.
   */
  public async createUser(payload: CreateUserPayload): Promise<Omit<User, 'passwordHash'>> {
    const { username, email, passwordPlainText } = payload;

    try {
      // Check if username or email already exists
      const existingUser = await this.userRepository
        .createQueryBuilder('user')
        .where('user.username = :username', { username })
        .orWhere('user.email = :email', { email })
        .getOne();

      if (existingUser) {
        if (existingUser.username === username) {
          throw new Error('Username already taken.');
        }
        if (existingUser.email === email) {
          throw new Error('Email already registered.');
        }
      }

      // Hash the password securely
      const passwordHash = await bcrypt.hash(passwordPlainText, this.SALT_ROUNDS);

      // Create a new User instance
      const newUser = this.userRepository.create({
        username,
        email,
        passwordHash,
      });

      // Save the user to the database
      const savedUser = await this.userRepository.save(newUser);
      logger.info(`User created: ${savedUser.id}`);

      // Return the user without the password hash for security
      const { passwordHash: _, ...userWithoutPassword } = savedUser;
      return userWithoutPassword;
    } catch (error: any) {
      logger.error(`Error creating user: ${error.message}`, { username, email });
      throw error; // Re-throw to be caught by the controller/error handler
    }
  }

  // Other user-related methods (e.g., findById, findByEmail, update, delete) will go here in future chapters
}

Explanation:

  • userRepository = AppDataSource.getRepository(User);: This gets the repository for the User entity, which provides methods for interacting with the users table.
  • createUser: An asynchronous method to handle user creation.
  • Duplicate Check: Before creating a user, it queries the database to ensure the username or email isn’t already in use, which lets us return a friendlier error message than a raw unique constraint violation. Note that two concurrent requests can still race past this check; the unique constraints on the table remain the authoritative safeguard, so a constraint violation from save() is still a possible (if rare) outcome.
  • bcrypt.hash(...): We use bcrypt to securely hash the user’s plain-text password. Never store plain-text passwords in the database. SALT_ROUNDS determines the computational cost of hashing; 10 is a good default for most applications. You’ll need to install bcrypt: npm install bcrypt && npm install --save-dev @types/bcrypt.
  • this.userRepository.create(...): Creates a new User instance, but doesn’t save it to the database yet.
  • this.userRepository.save(newUser): Persists the newUser instance to the database. If the entity has an id, it performs an update; otherwise, it performs an insert.
  • Omit<User, 'passwordHash'>: This TypeScript utility type ensures that the returned User object explicitly excludes the passwordHash property, adhering to security best practices.
  • Error Handling & Logging: Includes try-catch blocks to log database errors and re-throw them, allowing our centralized error handler to catch them.
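The “return the user without the password hash” step relies on rest destructuring, which copies every remaining own property into a new object. A standalone illustration with a hypothetical record type (not the real entity):

```typescript
interface UserRecord {
  id: string;
  username: string;
  passwordHash: string;
}

const saved: UserRecord = { id: 'abc', username: 'alice', passwordHash: 'h4sh' };

// Rest destructuring copies every property except the one explicitly pulled out
const { passwordHash: _, ...safeUser } = saved;

console.log(JSON.stringify(safeUser)); // {"id":"abc","username":"alice"}
```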

Next, create the UserController to handle incoming HTTP requests and delegate to the UserService.

// src/modules/user/user.controller.ts
import { FastifyReply, FastifyRequest } from 'fastify';
import { StatusCodes } from 'http-status-codes';
import { UserService } from './user.service';
import { logger } from '../../utils/logger';
import { z } from 'zod'; // For input validation

// Define the schema for user creation request body
const createUserSchema = z.object({
  username: z.string().min(3).max(50),
  email: z.string().email(),
  password: z.string().min(8).max(100), // Minimum 8 characters for password
});

export class UserController {
  private userService: UserService;

  constructor() {
    this.userService = new UserService();
  }

  /**
   * Handles the creation of a new user.
   * @param request - Fastify request object.
   * @param reply - Fastify reply object.
   */
  public async createUser(request: FastifyRequest, reply: FastifyReply): Promise<void> {
    try {
      // Validate request body using Zod
      const userData = createUserSchema.parse(request.body);

      // Call the service to create the user
      const newUser = await this.userService.createUser({
        username: userData.username,
        email: userData.email,
        passwordPlainText: userData.password,
      });

      // Respond with the created user (excluding password hash)
      reply.status(StatusCodes.CREATED).send({
        message: 'User created successfully',
        user: newUser,
      });
    } catch (error: any) {
      logger.error('Error in UserController.createUser:', error);
      // Centralized error handler will catch and format this
      // For Zod validation errors, we can enrich the error object
      if (error instanceof z.ZodError) {
        reply.status(StatusCodes.BAD_REQUEST).send({
          statusCode: StatusCodes.BAD_REQUEST,
          error: 'Bad Request',
          message: 'Validation failed',
          details: error.errors,
        });
      } else if (error.message.includes('Username already taken') || error.message.includes('Email already registered')) {
        reply.status(StatusCodes.CONFLICT).send({
          statusCode: StatusCodes.CONFLICT,
          error: 'Conflict',
          message: error.message,
        });
      } else {
        // Let the global error handler manage other errors
        throw error;
      }
    }
  }
}

Explanation:

  • createUserSchema: Uses zod for robust input validation. This ensures that incoming request bodies conform to our expected structure and constraints (e.g., minimum password length, valid email format).
  • this.userService.createUser(...): Delegates the actual user creation logic to the UserService.
  • Response: Sends a 201 Created status code and the newly created user object (without the password hash).
  • Error Handling: Catches errors from the UserService or Zod validation errors. It provides specific HTTP status codes and messages for common issues like validation failures (400 Bad Request) or conflicts (409 Conflict) if username/email already exists. Other errors are re-thrown to be handled by our global error handler.
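One caveat with the `else if` branch above: matching on `error.message` strings is brittle if the service's wording ever changes. A commonly used alternative, sketched here as a hypothetical refactor (the `ConflictError` and `toHttpStatus` names are illustrative, not part of the project code), is a custom error class the controller can branch on with `instanceof`:

```typescript
// Hypothetical error type the service layer could throw instead of a bare Error.
class ConflictError extends Error {
  readonly statusCode = 409;
  constructor(message: string) {
    super(message);
    this.name = 'ConflictError';
    // Restore the prototype chain so instanceof works even when targeting ES5.
    Object.setPrototypeOf(this, ConflictError.prototype);
  }
}

// Hypothetical mapper the controller (or global error handler) could use.
function toHttpStatus(error: unknown): number {
  if (error instanceof ConflictError) return error.statusCode;
  return 500; // anything else falls through to the global handler's default
}
```

The service would then `throw new ConflictError('Username already taken')`, and the controller's branch no longer depends on the exact message text.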

Finally, define the UserRoutes to expose the createUser method via an HTTP endpoint.

// src/modules/user/user.routes.ts
import { FastifyInstance } from 'fastify';
import { UserController } from './user.controller';

export async function userRoutes(fastify: FastifyInstance) {
  const userController = new UserController();

  // Route to create a new user
  fastify.post('/', userController.createUser.bind(userController)); // .bind(userController) ensures 'this' context
}

Explanation:

  • fastify.post('/', ...): Registers a POST endpoint at the root of the /api/v1/users prefix (as configured in src/routes.ts).
  • userController.createUser.bind(userController): When a class method is passed as a callback, its this context can be lost. bind(userController) ensures that this inside createUser correctly refers to the userController instance.
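The `this`-binding pitfall is easy to reproduce outside Fastify. A minimal sketch (the `Greeter` class is illustrative only):

```typescript
class Greeter {
  name = 'users';

  describe(): string {
    // Reads `this`; if the method is detached and called bare, `this` is
    // undefined (class bodies run in strict mode) and this line throws.
    return `controller for ${this.name}`;
  }
}

const greeter = new Greeter();

const unbound = greeter.describe;             // detached: calling unbound() throws a TypeError
const bound = greeter.describe.bind(greeter); // bound: `this` is fixed to greeter

bound(); // safe: returns 'controller for users'
```

An arrow-function class property (`describe = (): string => ...`) is a common alternative that avoids the need for `bind`, at the cost of one closure per instance.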

3.10. Test User Creation Endpoint

Now, let’s test our new API endpoint to create a user and verify data persistence.

a) Testing This Component

  1. Ensure your Fastify application is running:

    npm run dev
    
  2. Send a POST request to create a user. You can use curl, Postman, or Insomnia.

    Example using curl:

    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{
        "username": "johndoe",
        "email": "john.doe@example.com",
        "password": "SecurePassword123!"
      }' \
      http://localhost:3000/api/v1/users
    

    Expected Successful Response (Status 201 Created):

    {
      "message": "User created successfully",
      "user": {
        "id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
        "username": "johndoe",
        "email": "john.doe@example.com",
        "createdAt": "2026-01-08T12:00:00.000Z",
        "updatedAt": "2026-01-08T12:00:00.000Z"
      }
    }
    

    (The id, createdAt, and updatedAt will be actual values from your database.)

  3. Verify the user in the database. Connect to your PostgreSQL database (e.g., via psql or a GUI tool) and query the users table:

    SELECT id, username, email, "createdAt", "updatedAt" FROM users;
    

     You should see the newly created user’s data. Note that select: false on the entity only hides passwordHash from TypeORM queries by default; raw SQL in psql is unaffected, so you can inspect the stored hash directly:

    SELECT "passwordHash" FROM users WHERE username = 'johndoe';
    

    You should see the bcrypt hashed password.

  4. Test error cases:

    • Missing fields: Send a request without username, email, or password. You should get a 400 Bad Request with Zod validation details.
    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{
        "username": "testuser",
        "email": "invalid"
      }' \
      http://localhost:3000/api/v1/users
    
    • Duplicate username/email: Try to create another user with johndoe as the username or john.doe@example.com as the email. You should get a 409 Conflict error.
    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{
        "username": "johndoe",
        "email": "another@example.com",
        "password": "AnotherSecurePassword123!"
      }' \
      http://localhost:3000/api/v1/users
    

    This confirms our validation and conflict handling are working as expected.

Production Considerations

Integrating a database requires careful consideration for production environments.

  • Error Handling: Ensure all database operations are wrapped in try-catch blocks. Log specific database errors with sufficient detail (but avoid logging sensitive data). Our initializeDatabase function already includes a process exit on connection failure, preventing a half-functional service.
  • Performance:
    • Connection Pooling: TypeORM manages a connection pool by default, which is efficient. Avoid creating new DataSource instances per request.
    • Indexing: For frequently queried columns (like username and email for lookup), database indexes are crucial. TypeORM can define indexes on entities (e.g., @Index({ unique: true })). We’ve implicitly created unique indexes with unique: true on columns.
    • N+1 Problem: Be mindful of the N+1 query problem, especially with relations. TypeORM provides eager/lazy loading and leftJoinAndSelect to optimize queries.
  • Security:
    • Environment Variables: Never hardcode database credentials. Use environment variables. In production, leverage services like AWS Secrets Manager or HashiCorp Vault.
    • Password Hashing: Always hash passwords with strong, slow hashing algorithms like bcrypt (which we are using). Never store plain text passwords.
    • SQL Injection: TypeORM’s query builder and repository methods inherently prevent SQL injection by using parameterized queries. Avoid raw SQL queries unless absolutely necessary and ensure proper sanitization.
    • synchronize: false: As emphasized, synchronize: true is for development only. Always use migrations for schema changes in production.
    • select: false for sensitive fields: Use this TypeORM feature to prevent sensitive data (like passwordHash) from being accidentally returned in queries.
  • Logging and Monitoring: Log database connection status, query execution times (if slow), and all database errors. Integrate with a monitoring system to alert on connection issues or high error rates. Our logging: ['query', 'error'] in AppDataSource is a good start.
  • Backups: Establish a robust database backup and recovery strategy.
  • Scalability: As your application grows, consider database scaling strategies like read replicas for read-heavy workloads or sharding for very large datasets.

Code Review Checkpoint

At this point, you’ve significantly enhanced your application’s capabilities by adding data persistence.

Summary of what was built:

  • A local PostgreSQL database running in Docker.
  • TypeORM configured to connect to PostgreSQL.
  • A User entity defined with essential fields and TypeORM decorators.
  • A migration system set up with ormconfig.ts and package.json scripts.
  • The initial users table created via a TypeORM migration.
  • The Fastify application now connects to the database on startup and includes a database health check.
  • A UserService to encapsulate user-related database logic, including password hashing and duplicate checks.
  • A UserController and corresponding routes to expose a POST /api/v1/users endpoint for creating new users, with zod validation.

Files created/modified:

  • docker-compose.yml
  • .env (updated)
  • package.json (updated with TypeORM scripts and new dependencies)
  • src/config/database.ts
  • src/database/entities/User.ts
  • src/database/migrations/<timestamp>-InitialUserTable.ts
  • ormconfig.ts
  • src/app.ts (updated for DB connection and health check)
  • src/routes.ts (updated to include user routes)
  • src/modules/user/user.service.ts
  • src/modules/user/user.controller.ts
  • src/modules/user/user.routes.ts

This chapter provides a solid foundation for all future data-driven features in our application.

Common Issues & Solutions

  1. PostgreSQL container not starting or accessible:

    • Issue: docker compose up -d fails, or docker ps shows the db container in Exited state.
    • Solution:
      • Check docker compose logs db for error messages. Common issues include port conflicts (another service using 5432), incorrect environment variables, or insufficient disk space.
      • Ensure Docker Desktop (or your Docker daemon) is running.
      • Try docker compose down --volumes to stop and remove all associated containers and volumes, then docker compose up -d again. Note that removing the volumes wipes the database’s data.
  2. TypeORM connection errors:

    • Issue: “Error during database initialization: ConnectTimeoutError”, “authentication failed for user ‘myuser’”, “database ‘mydatabase’ does not exist”.
    • Solution:
      • Verify all DB_HOST, DB_PORT, DB_USER, DB_PASSWORD, DB_NAME in your .env file are correct and match the docker-compose.yml configuration.
      • Ensure the PostgreSQL container is running and healthy (docker ps).
      • Check your src/config/database.ts to ensure environment variables are being loaded correctly and parseInt is used for DB_PORT.
      • Confirm the POSTGRES_DB name in docker-compose.yml is correct.
  3. Migration issues:

    • Issue: npm run typeorm:migration:generate doesn’t create a file, or npm run typeorm:migration:run fails.
    • Solution:
      • Generate issue: Ensure your User entity is correctly defined and imported into ormconfig.ts. Check for TypeScript compilation errors.
      • Run issue: Check the error message. It might indicate SQL syntax errors in the migration file (if you edited it manually), or a connection issue to the database. Ensure the uuid-ossp extension is enabled if you’re using UUIDs.
      • If you manually changed the database schema outside of migrations, TypeORM might get confused. In development, you might need to docker compose down --volumes and restart everything to get a clean slate.
  4. User entity not found or Cannot find module './database/entities/User':

    • Issue: TypeScript or runtime errors related to importing the User entity.
    • Solution:
      • Verify the file path in src/config/database.ts and ormconfig.ts is correct: ../database/entities/User (relative to config) or ./src/database/entities/User (relative to root for ormconfig.ts).
      • Ensure User.ts exists in the specified location.
      • Check for typos in the import statement or file name.
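On the uuid-ossp point from issue 3: if a migration fails with an error like function uuid_generate_v4() does not exist, the extension can be enabled with a single statement, run once per database (for example at the top of your initial migration’s up method, or manually via psql):

```sql
-- Requires sufficient privileges; IF NOT EXISTS makes it safe to re-run.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```

On PostgreSQL 13 and later, the built-in gen_random_uuid() function is an alternative that requires no extension at all.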

Testing & Verification

To ensure everything is correctly set up and working:

  1. Start your Dockerized PostgreSQL database:

    docker compose up -d
    

    Verify it’s running and healthy using docker ps.

  2. Run database migrations:

    npm run typeorm:migration:run
    

    Ensure it reports that the InitialUserTable migration ran successfully. If TypeORM instead reports that no migrations are pending, the migration has already been applied, which is fine.

  3. Start your Fastify application in development mode:

    npm run dev
    

    Confirm you see “Database connection initialized successfully!” in the console.

  4. Test the health check endpoint:

    curl http://localhost:3000/health
    

    Expected output: {"status":"ok","database":"connected"}.

  5. Test the user creation endpoint:

    curl -X POST \
      -H "Content-Type: application/json" \
      -d '{
        "username": "testuser",
        "email": "test@example.com",
        "password": "Password123!"
      }' \
      http://localhost:3000/api/v1/users
    

    Expected output: A 201 Created response with the new user’s details (excluding password hash).

  6. Verify data persistence: Connect to your PostgreSQL database (e.g., using psql) and query the users table:

    docker exec -it myapp-postgres psql -U myuser -d mydatabase
    SELECT id, username, email FROM users;
    

    You should see the testuser entry.

Summary & Next Steps

Congratulations! You’ve successfully integrated PostgreSQL into your Node.js Fastify application, established a robust migration system with TypeORM, and implemented your first data-persisting API endpoint. This is a monumental step, as almost every real-world application relies on persistent storage.

In this chapter, we covered:

  • Setting up a local PostgreSQL database using Docker Compose.
  • Configuring TypeORM as our ORM, including its DataSource and CLI.
  • Defining our first User entity and generating/running a migration to create the users table.
  • Integrating the database connection into our Fastify application startup and adding a health check.
  • Creating a UserService and UserController to handle user creation with password hashing and input validation.

With a solid database foundation in place, our application is now ready to manage user data. In the next chapter, Chapter 5: Authentication & Authorization: JWTs and Role-Based Access Control, we will build upon this foundation to implement secure user authentication using JSON Web Tokens (JWTs) and introduce role-based access control (RBAC) to protect our API endpoints. Get ready to secure your application!