Welcome to Chapter 4 of our Node.js backend project series! So far, we’ve established a robust project structure, set up our Fastify server, and implemented essential middleware for request handling and error management. While our API can process requests, it currently lacks the ability to store and retrieve data persistently. This severely limits its utility, as any information processed is lost once the server restarts.
In this chapter, we will bridge this gap by integrating a powerful relational database: PostgreSQL. PostgreSQL is an open-source object-relational database system known for its reliability, feature robustness, and performance. To interact with PostgreSQL efficiently from our Node.js application, we will use TypeORM, a modern Object-Relational Mapper (ORM). TypeORM provides a clean, object-oriented way to work with databases, offering features like entity management, query builders, and crucially, database migrations. Migrations allow us to evolve our database schema in a controlled, versioned manner, which is essential for collaborative development and seamless deployments.
By the end of this chapter, you will have a fully functional PostgreSQL database running locally via Docker, connected to our Node.js application through TypeORM. We will define our first data model (a User entity), set up a robust migration system, and implement a basic API endpoint to demonstrate data persistence. This foundation is critical for building any real-world application, enabling us to store user data, application state, and more.
Planning & Design
Before diving into the code, let’s outline the architecture and design decisions for integrating PostgreSQL.
Component Architecture
Our application will interact with the PostgreSQL database through TypeORM. The Fastify application will initiate the TypeORM data source, which then manages the connection pool and all database operations.
Database Schema (Initial)
We’ll start with a simple User entity to demonstrate data persistence. This entity will represent a user in our system.
User Table Schema:
- `id`: Primary key, auto-generated UUID.
- `username`: Unique string, not null.
- `email`: Unique string, not null.
- `passwordHash`: String, not null (we’ll store hashed passwords for security).
- `createdAt`: Timestamp, auto-generated on creation.
- `updatedAt`: Timestamp, auto-updated on modification.
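To see the shape of the `id` values, here is a quick sketch using Node’s built-in `crypto.randomUUID`. This is for illustration only — in our setup the database itself generates the UUIDs:

```typescript
import { randomUUID } from 'node:crypto';

// Version-4 UUIDs: 36 characters, with the version nibble fixed to "4"
const id: string = randomUUID();
const uuidV4 = /^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/;

console.log(id, uuidV4.test(id)); // prints the UUID followed by "true"
```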
File Structure
We’ll introduce new directories and files to manage our database integration:
.
├── src/
│   ├── config/
│   │   └── database.ts            # TypeORM data source configuration
│   ├── database/
│   │   ├── entities/
│   │   │   └── User.ts            # TypeORM User entity definition
│   │   └── migrations/            # Directory for database migration files
│   ├── modules/
│   │   └── user/
│   │       ├── user.controller.ts # User-related API logic
│   │       ├── user.routes.ts     # User-related API routes
│   │       └── user.service.ts    # User-related business logic
│   └── app.ts                     # Main application entry point (updated for DB connection)
├── docker-compose.yml             # Docker Compose configuration for local services (PostgreSQL)
├── ormconfig.ts                   # TypeORM CLI configuration
└── .env                           # Environment variables (including database credentials)
Step-by-Step Implementation
Let’s start building our data persistence layer.
3.1. Setup Local PostgreSQL with Docker Compose
To ensure a consistent development environment and avoid local installation complexities, we’ll run PostgreSQL in a Docker container.
a) Setup/Configuration
Create a docker-compose.yml file at the root of your project:
# docker-compose.yml
version: '3.8'
services:
  db:
    image: postgres:16-alpine
    container_name: myapp-postgres
    restart: always
    environment:
      POSTGRES_DB: ${DB_NAME}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - db_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER} -d ${DB_NAME}"]
      interval: 5s
      timeout: 5s
      retries: 5
volumes:
  db_data:
Explanation:
- `version: '3.8'`: Specifies the Docker Compose file format version.
- `services`: Defines the containers.
- `db`: Our PostgreSQL service.
- `image: postgres:16-alpine`: Uses the official PostgreSQL image, version 16, on the lightweight Alpine Linux base.
- `container_name: myapp-postgres`: Assigns a readable name to the container.
- `restart: always`: Ensures the container restarts automatically if it stops.
- `environment`: Sets environment variables inside the container. We’re using placeholders (`${...}`) which Docker Compose will resolve from your `.env` file. These configure the PostgreSQL database (name, user, password).
- `ports: - "5432:5432"`: Maps port 5432 (the default PostgreSQL port) from the container to port 5432 on your host machine, allowing your Node.js app to connect.
- `volumes: - db_data:/var/lib/postgresql/data`: Persists the database data to a named Docker volume (`db_data`), so your data isn’t lost if the container is removed or recreated.
- `healthcheck`: Defines how Docker should check whether the database is ready. It runs `pg_isready` inside the container, a PostgreSQL utility that checks connection status.
b) Core Implementation
Now, let’s start the PostgreSQL container. Open your terminal in the project root and run:
docker compose up -d
Explanation:
- `docker compose up`: Starts the services defined in `docker-compose.yml`.
- `-d`: Runs the containers in detached mode (in the background).
c) Testing This Component
To verify that your PostgreSQL container is running:
docker ps
You should see an entry similar to this:
CONTAINER ID     IMAGE                COMMAND                  CREATED              STATUS                         PORTS                    NAMES
<container-id>   postgres:16-alpine   "docker-entrypoint.s…"   About a minute ago   Up About a minute (healthy)    0.0.0.0:5432->5432/tcp   myapp-postgres
The STATUS should indicate (healthy) after a few seconds, meaning PostgreSQL is ready to accept connections.
3.2. Install Database Dependencies
We need to install the pg driver for PostgreSQL and typeorm for our ORM. We’ll also use dotenv to manage environment variables.
a) Setup/Configuration
Run the following command in your terminal:
npm install pg typeorm dotenv
npm install --save-dev @types/pg
Explanation:
- `pg`: The official Node.js client for PostgreSQL. TypeORM uses this under the hood.
- `typeorm`: The main TypeORM library.
- `dotenv`: A zero-dependency module that loads environment variables from a `.env` file into `process.env`.
- `@types/pg`: TypeScript type definitions for the `pg` library, essential for type safety.
3.3. Configure Environment Variables
We’ll store our database credentials and connection details in a .env file. This keeps sensitive information out of our codebase and allows easy configuration changes between environments (development, testing, production).
a) Setup/Configuration
Create or update your .env file at the root of your project:
# .env
# ... other existing environment variables ...
# Database Configuration (PostgreSQL)
DB_HOST=localhost
DB_PORT=5432
DB_USER=myuser
DB_PASSWORD=mypassword
DB_NAME=mydatabase
Explanation:
- `DB_HOST`: The hostname of your database server. When connecting from the host machine to the Docker container, this is `localhost`.
- `DB_PORT`: The port PostgreSQL is listening on.
- `DB_USER`: The username for connecting to the database.
- `DB_PASSWORD`: The password for the database user.
- `DB_NAME`: The name of the database to connect to.
Security Consideration: In production, these values should never be hardcoded or stored directly in a .env file checked into version control. We’ll cover secure secrets management in later chapters (e.g., AWS Secrets Manager, Vault). For local development, .env is appropriate.
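Since several parts of the app will read these variables, the “fail fast if a variable is missing” pattern is worth keeping in one place. A minimal sketch — the `requireEnv` helper name is mine, not part of dotenv or TypeORM:

```typescript
// Read a required variable from process.env, failing fast with a clear
// message instead of letting `undefined` leak into the database config
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage sketch: parse the port only after confirming the variable exists
process.env.DB_PORT = process.env.DB_PORT ?? '5432';
const port = parseInt(requireEnv('DB_PORT'), 10);
console.log(`Connecting on port ${port}`);
```

Throwing (or exiting) at startup is preferable to discovering a missing credential on the first query at runtime.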
3.4. Create TypeORM Data Source Configuration
This file will define how TypeORM connects to our PostgreSQL database.
a) Setup/Configuration
Create a new file src/config/database.ts:
// src/config/database.ts
import 'dotenv/config'; // Load environment variables from .env
import { DataSource } from 'typeorm';
import { User } from '../database/entities/User'; // Will be created next

// Ensure all required environment variables are present
const { DB_HOST, DB_PORT, DB_USER, DB_PASSWORD, DB_NAME } = process.env;

if (!DB_HOST || !DB_PORT || !DB_USER || !DB_PASSWORD || !DB_NAME) {
  console.error('Missing one or more database environment variables. Please check your .env file.');
  process.exit(1); // Exit the process if critical env vars are missing
}

export const AppDataSource = new DataSource({
  type: 'postgres',
  host: DB_HOST,
  port: parseInt(DB_PORT, 10), // Ensure port is a number
  username: DB_USER,
  password: DB_PASSWORD,
  database: DB_NAME,
  synchronize: false, // NEVER use synchronize: true in production!
  logging: ['query', 'error'], // Log SQL queries and errors
  entities: [User], // List of entities (data models)
  migrations: [__dirname + '/../database/migrations/*.ts'], // Path to migration files
  subscribers: [],
});

/**
 * Initializes the database connection.
 * @returns {Promise<void>} A promise that resolves when the connection is established.
 */
export const initializeDatabase = async (): Promise<void> => {
  try {
    await AppDataSource.initialize();
    console.log('Database connection initialized successfully!');
  } catch (error) {
    console.error('Error during database initialization:', error);
    process.exit(1); // Exit if DB connection fails
  }
};
Explanation:
- `import 'dotenv/config';`: Loads environment variables from your `.env` file into `process.env`. It must be at the top.
- `DataSource`: The core TypeORM class for managing database connections.
- Environment variable validation: We check that all necessary DB environment variables are present. If not, the application logs an error and exits, preventing runtime issues.
- `type: 'postgres'`: Specifies PostgreSQL as the database type.
- `host`, `port`, `username`, `password`, `database`: Populated directly from our `process.env` variables. `parseInt(DB_PORT, 10)` converts the string port to a number.
- `synchronize: false`: CRITICAL FOR PRODUCTION. When `true`, TypeORM automatically creates database tables and columns based on your entities. This is convenient for rapid development but highly dangerous in production, as it can lead to data loss or unexpected schema changes. We will use migrations for schema management.
- `logging: ['query', 'error']`: Configures TypeORM to log executed SQL queries and any errors, which is invaluable for debugging.
- `entities: [User]`: An array listing all TypeORM entities (models) for our application. We’ll create `User` next.
- `migrations: [...]`: Defines the path where TypeORM should look for migration files.
- `initializeDatabase`: An async function that attempts to establish the database connection. It logs success or failure and exits the process on failure, preventing the application from running without a database.
3.5. Define the User Entity
Now, let’s create our first TypeORM entity, representing the User table.
a) Setup/Configuration
Create a new directory src/database/entities/ and inside it, create User.ts:
// src/database/entities/User.ts
import { Entity, PrimaryGeneratedColumn, Column, CreateDateColumn, UpdateDateColumn } from 'typeorm';

@Entity('users') // Marks this class as a TypeORM entity and maps it to the 'users' table
export class User {
  @PrimaryGeneratedColumn('uuid') // Auto-generates a UUID for the primary key
  id!: string; // '!' asserts that id will be initialized by TypeORM

  @Column({ type: 'varchar', length: 255, unique: true, nullable: false })
  username!: string;

  @Column({ type: 'varchar', length: 255, unique: true, nullable: false })
  email!: string;

  @Column({ type: 'varchar', length: 255, nullable: false, select: false }) // select: false prevents it from being returned by default
  passwordHash!: string;

  @CreateDateColumn({ type: 'timestamp with time zone', default: () => 'CURRENT_TIMESTAMP' })
  createdAt!: Date;

  @UpdateDateColumn({ type: 'timestamp with time zone', default: () => 'CURRENT_TIMESTAMP', onUpdate: 'CURRENT_TIMESTAMP' })
  updatedAt!: Date;
}
Explanation:
- `@Entity('users')`: Decorator marking `User` as an entity and specifying its table name as `users`.
- `@PrimaryGeneratedColumn('uuid')`: Defines `id` as the primary key, automatically generating a UUID value for new records.
- `@Column(...)`: Decorator for defining regular columns.
  - `type`: Specifies the database column type (e.g., `varchar`, `timestamp with time zone`).
  - `length`: For `varchar`, specifies the maximum length.
  - `unique: true`: Enforces uniqueness for `username` and `email`.
  - `nullable: false`: Makes the column required.
  - `select: false`: For `passwordHash`, this is a security best practice. By default, when you query for a `User`, the `passwordHash` will not be included in the results; you have to explicitly select it if needed.
- `@CreateDateColumn`: Automatically sets the `createdAt` timestamp when a new record is created.
- `@UpdateDateColumn`: Automatically updates the `updatedAt` timestamp whenever a record is modified. `onUpdate: 'CURRENT_TIMESTAMP'` ensures this behavior at the database level.
- `!`: The non-null assertion operator is used because TypeORM initializes these properties, even though TypeScript cannot see them being initialized in a constructor.
3.6. Configure TypeORM CLI for Migrations
TypeORM provides a powerful CLI tool for generating and running migrations. We need to tell the CLI where to find our DataSource configuration and our entities/migrations.
a) Setup/Configuration
Create a file ormconfig.ts at the root of your project:
// ormconfig.ts
import 'dotenv/config'; // Load environment variables
import { DataSource } from 'typeorm';
import { User } from './src/database/entities/User';

const { DB_HOST, DB_PORT, DB_USER, DB_PASSWORD, DB_NAME } = process.env;

if (!DB_HOST || !DB_PORT || !DB_USER || !DB_PASSWORD || !DB_NAME) {
  console.error('Missing one or more database environment variables. Please check your .env file.');
  process.exit(1);
}

const ormConfig = new DataSource({
  type: 'postgres',
  host: DB_HOST,
  port: parseInt(DB_PORT, 10),
  username: DB_USER,
  password: DB_PASSWORD,
  database: DB_NAME,
  synchronize: false,
  logging: ['query', 'error'],
  entities: [User],
  migrations: [__dirname + '/src/database/migrations/*.ts'],
  subscribers: [],
});

export default ormConfig;
Explanation:
- This `ormconfig.ts` is very similar to `src/config/database.ts` because the TypeORM CLI needs its own `DataSource` instance to perform operations like generating or running migrations.
- It explicitly points to our `User` entity and the `migrations` directory.
Next, add scripts to your package.json to easily run TypeORM CLI commands.
b) Core Implementation
Open your package.json and add the following scripts under the scripts section:
// package.json
{
  "name": "my-fastify-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "node dist/app.js",
    "dev": "ts-node-dev --respawn --transpile-only src/app.ts",
    "build": "tsc",
    "lint": "eslint . --ext .ts",
    "lint:fix": "eslint . --ext .ts --fix",
    "prettier": "prettier --check \"src/**/*.ts\"",
    "prettier:fix": "prettier --write \"src/**/*.ts\"",
    "test": "jest",
    "test:watch": "jest --watch",
    "typeorm": "typeorm-ts-node-commonjs",
    "typeorm:migration:generate": "typeorm-ts-node-commonjs migration:generate -d ./ormconfig.ts",
    "typeorm:migration:create": "typeorm-ts-node-commonjs migration:create",
    "typeorm:migration:run": "typeorm-ts-node-commonjs migration:run -d ./ormconfig.ts",
    "typeorm:migration:revert": "typeorm-ts-node-commonjs migration:revert -d ./ormconfig.ts"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "bcrypt": "^5.1.1",
    "dotenv": "^16.4.5",
    "fastify": "^4.26.1",
    "fastify-plugin": "^4.5.1",
    "fastify-swagger": "^5.2.0",
    "fastify-swagger-ui": "^4.0.0",
    "helmet": "^7.1.0",
    "http-status-codes": "^2.3.0",
    "pino": "^8.19.0",
    "pg": "^8.11.3",
    "typeorm": "^0.3.20",
    "zod": "^3.22.4"
  },
  "devDependencies": {
    "@types/bcrypt": "^5.0.2",
    "@types/jest": "^29.5.12",
    "@types/node": "^20.11.20",
    "@types/pg": "^8.11.0",
    "@types/supertest": "^6.0.2",
    "@typescript-eslint/eslint-plugin": "^7.0.2",
    "@typescript-eslint/parser": "^7.0.2",
    "eslint": "^8.57.0",
    "jest": "^29.7.0",
    "prettier": "^3.2.5",
    "supertest": "^6.3.4",
    "ts-jest": "^29.1.2",
    "ts-node": "^10.9.2",
    "ts-node-dev": "^2.0.0",
    "typescript": "^5.3.3"
  }
}
Explanation:
"typeorm": "typeorm-ts-node-commonjs -d ./ormconfig.ts": This is the base command for running TypeORM CLI. We usetypeorm-ts-node-commonjsto execute TypeORM commands directly with TypeScript files. The-dflag specifies the path to ourDataSourceconfiguration file."typeorm:migration:generate": Generates a new migration file based on the differences between your entities and the current database schema."typeorm:migration:create": Creates an empty migration file (useful for manual schema changes)."typeorm:migration:run": Executes all pending migrations."typeorm:migration:revert": Reverts the last applied migration (use with caution in production!).
3.7. Generate and Run First Migration
Now that TypeORM is configured, let’s create our users table using a migration.
a) Core Implementation
First, generate the migration:
npm run typeorm:migration:generate src/database/migrations/InitialUserTable
Explanation:
- This command will create a new file in `src/database/migrations/` with a timestamp prefix (e.g., `1704672000000-InitialUserTable.ts`).
- TypeORM automatically inspects your `User` entity and generates the SQL `CREATE TABLE` statements in the `up` method and `DROP TABLE` statements in the `down` method.
Open the generated migration file (e.g., src/database/migrations/1704672000000-InitialUserTable.ts). It should look similar to this:
// src/database/migrations/1704672000000-InitialUserTable.ts
import { MigrationInterface, QueryRunner, Table } from 'typeorm';

export class InitialUserTable1704672000000 implements MigrationInterface {
  name = 'InitialUserTable1704672000000'; // Unique name for the migration

  public async up(queryRunner: QueryRunner): Promise<void> {
    // uuid_generate_v4() is provided by the "uuid-ossp" extension, so enable it
    // first -- the column default below references the function
    await queryRunner.query(`CREATE EXTENSION IF NOT EXISTS "uuid-ossp"`);

    await queryRunner.createTable(
      new Table({
        name: 'users',
        columns: [
          {
            name: 'id',
            type: 'uuid',
            isPrimary: true,
            generationStrategy: 'uuid',
            default: 'uuid_generate_v4()', // PostgreSQL-specific UUID generation
          },
          {
            name: 'username',
            type: 'varchar',
            length: '255',
            isUnique: true,
            isNullable: false,
          },
          {
            name: 'email',
            type: 'varchar',
            length: '255',
            isUnique: true,
            isNullable: false,
          },
          {
            name: 'passwordHash',
            type: 'varchar',
            length: '255',
            isNullable: false,
          },
          {
            name: 'createdAt',
            type: 'timestamp with time zone',
            default: 'CURRENT_TIMESTAMP',
            isNullable: false,
          },
          {
            name: 'updatedAt',
            type: 'timestamp with time zone',
            default: 'CURRENT_TIMESTAMP',
            onUpdate: 'CURRENT_TIMESTAMP',
            isNullable: false,
          },
        ],
      }),
      true, // If set to `true`, the table will be created only if it does not exist.
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.dropTable('users');
    await queryRunner.query(`DROP EXTENSION IF EXISTS "uuid-ossp"`);
  }
}
Note on uuid_generate_v4() and uuid-ossp: TypeORM may generate uuid_generate_v4() as the default for UUID columns. This function comes from PostgreSQL’s uuid-ossp extension, so if you encounter errors about uuid_generate_v4() not existing, you need to enable that extension. I’ve added `await queryRunner.query('CREATE EXTENSION IF NOT EXISTS "uuid-ossp"');` to the up method to handle this automatically; it must run before the table is created, because the column default references the function.
Now, run the migration to create the table in your database:
npm run typeorm:migration:run
You should see output indicating the migration ran successfully.
b) Testing This Component
To verify the tables were created:
1. Connect to your PostgreSQL database. You can use a GUI tool like DBeaver, TablePlus, or DataGrip with:

   - Host: `localhost`
   - Port: `5432`
   - User: `myuser`
   - Password: `mypassword`
   - Database: `mydatabase`

   Alternatively, from your terminal:

   docker exec -it myapp-postgres psql -U myuser -d mydatabase

2. List tables. Inside the `psql` prompt (or your GUI tool), run:

   \dt

   You should see the `public.users` and `public.migrations` tables.

3. Describe the `users` table:

   \d users

   Verify that the columns (`id`, `username`, `email`, `passwordHash`, `createdAt`, `updatedAt`) match our `User` entity.

4. Check the `migrations` table:

   SELECT * FROM migrations;

   This table tracks which migrations have been applied. You should see an entry for `InitialUserTable`.
3.8. Integrate Database Connection into Application
Now, let’s make sure our Fastify application connects to the database when it starts.
a) Core Implementation
Modify your main application entry point, src/app.ts, to call initializeDatabase before starting the Fastify server.
// src/app.ts
import Fastify from 'fastify';
import { AppDataSource, initializeDatabase } from './config/database'; // Import AppDataSource and initializeDatabase
import { registerPlugins } from './plugins';
import { registerRoutes } from './routes';
import { setupErrorHandling } from './utils/errorHandler';
import { logger } from './utils/logger';
import { StatusCodes } from 'http-status-codes';

const app = Fastify({
  logger: false, // Using pino for custom logging
});

const start = async () => {
  try {
    // 1. Initialize Database Connection
    await initializeDatabase();

    // 2. Register Plugins (e.g., Helmet, Swagger, etc.)
    await registerPlugins(app);

    // 3. Register Routes
    registerRoutes(app);

    // 4. Setup Global Error Handling
    setupErrorHandling(app);

    // 5. Add a basic health check endpoint for the application and database
    app.get('/health', async (request, reply) => {
      try {
        // Check database connection status
        await AppDataSource.query('SELECT 1'); // Simple query to check DB connection
        reply.status(StatusCodes.OK).send({ status: 'ok', database: 'connected' });
      } catch (dbError) {
        logger.error('Database health check failed:', dbError);
        reply.status(StatusCodes.INTERNAL_SERVER_ERROR).send({ status: 'error', database: 'disconnected' });
      }
    });

    // Start the Fastify server
    const port = parseInt(process.env.PORT || '3000', 10);
    const host = process.env.HOST || '0.0.0.0';
    await app.listen({ port, host });
    logger.info(`Server listening on ${host}:${port}`);
    logger.info(`For API documentation, visit http://${host}:${port}/documentation`);
  } catch (err) {
    logger.error('Application failed to start:', err);
    process.exit(1);
  }
};

start();
Explanation:
- `import { AppDataSource, initializeDatabase } from './config/database';`: We import our database configuration.
- `await initializeDatabase();`: This line is crucial. It connects to PostgreSQL before the Fastify server starts listening for requests, ensuring our application can’t serve requests without a working database connection.
- Health check endpoint (`/health`): We’ve added a simple `GET /health` endpoint, a best practice for monitoring. It checks that the Fastify server is running and performs a simple query (`SELECT 1`) to ensure the database connection is active. This is invaluable for load balancers and container orchestration platforms (like Kubernetes or ECS) to determine whether an application instance is healthy.
- Error handling for database connection failure is included, causing the process to exit if the database can’t be reached and preventing a “half-dead” application.
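The decision logic inside the `/health` handler can be isolated as a pure function, which makes it easy to unit-test without a running database. A sketch — the `buildHealthPayload` helper is my own naming, not part of Fastify:

```typescript
interface HealthPayload {
  statusCode: number;
  body: { status: 'ok' | 'error'; database: 'connected' | 'disconnected' };
}

// Map the outcome of the "SELECT 1" probe to the HTTP response we send
function buildHealthPayload(dbReachable: boolean): HealthPayload {
  return dbReachable
    ? { statusCode: 200, body: { status: 'ok', database: 'connected' } }
    : { statusCode: 500, body: { status: 'error', database: 'disconnected' } };
}

// In the handler, the try/catch around the probe decides the argument:
// reply.status(p.statusCode).send(p.body)
console.log(buildHealthPayload(true).body); // { status: 'ok', database: 'connected' }
```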
b) Testing This Component
1. Start your Fastify application:

   npm run dev

   You should see `Database connection initialized successfully!` in your console, followed by the server listening message.

2. Test the health check endpoint. Open your browser or use `curl`:

   curl http://localhost:3000/health

   You should receive the response:

   {"status":"ok","database":"connected"}

3. If you stop your PostgreSQL container (`docker compose stop db`) and then restart your Node.js app, the `initializeDatabase()` call will fail and the application will exit with an error, demonstrating the robust error handling.
3.9. Implement a Basic User Service (CRUD - Create only for now)
Now that our database is connected, let’s create a basic API endpoint to create users. This will demonstrate how to interact with TypeORM to persist data.
a) Setup/Configuration
Create a directory src/modules/user and inside it, create user.service.ts, user.controller.ts, and user.routes.ts.
First, let’s update src/routes.ts to include user routes:
// src/routes.ts
import { FastifyInstance } from 'fastify';
// import { healthRoutes } from './modules/health/health.routes'; // Example from a previous chapter (if it exists)
import { userRoutes } from './modules/user/user.routes'; // Import user routes

export function registerRoutes(app: FastifyInstance) {
  // Register health routes (if you have them)
  // app.register(healthRoutes, { prefix: '/api/v1/health' });

  // Register user routes
  app.register(userRoutes, { prefix: '/api/v1/users' });

  // Add more routes here as your application grows
}
Next, let’s define the UserService which will encapsulate our database operations for users.
b) Core Implementation
Create src/modules/user/user.service.ts:
// src/modules/user/user.service.ts
import { AppDataSource } from '../../config/database';
import { User } from '../../database/entities/User';
import { logger } from '../../utils/logger';
import bcrypt from 'bcrypt';

// Define an interface for the data required to create a user
interface CreateUserPayload {
  username: string;
  email: string;
  passwordPlainText: string; // Plain text password for hashing
}

export class UserService {
  private userRepository = AppDataSource.getRepository(User);
  private readonly SALT_ROUNDS = 10; // For bcrypt hashing

  /**
   * Creates a new user in the database.
   * @param payload - User data including username, email, and plain text password.
   * @returns {Promise<Omit<User, 'passwordHash'>>} The newly created user (without passwordHash).
   * @throws {Error} If a user with the given username or email already exists, or on other database errors.
   */
  public async createUser(payload: CreateUserPayload): Promise<Omit<User, 'passwordHash'>> {
    const { username, email, passwordPlainText } = payload;

    try {
      // Check if username or email already exists
      const existingUser = await this.userRepository
        .createQueryBuilder('user')
        .where('user.username = :username', { username })
        .orWhere('user.email = :email', { email })
        .getOne();

      if (existingUser) {
        if (existingUser.username === username) {
          throw new Error('Username already taken.');
        }
        if (existingUser.email === email) {
          throw new Error('Email already registered.');
        }
      }

      // Hash the password securely
      const passwordHash = await bcrypt.hash(passwordPlainText, this.SALT_ROUNDS);

      // Create a new User instance
      const newUser = this.userRepository.create({
        username,
        email,
        passwordHash,
      });

      // Save the user to the database
      const savedUser = await this.userRepository.save(newUser);
      logger.info(`User created: ${savedUser.id}`);

      // Return the user without the password hash for security
      const { passwordHash: _, ...userWithoutPassword } = savedUser;
      return userWithoutPassword;
    } catch (error: any) {
      logger.error(`Error creating user: ${error.message}`, { username, email });
      throw error; // Re-throw to be caught by the controller/error handler
    }
  }

  // Other user-related methods (e.g., findById, findByEmail, update, delete) will go here in future chapters
}
Explanation:
- `userRepository = AppDataSource.getRepository(User);`: Gets the repository for the `User` entity, which provides methods for interacting with the `users` table.
- `createUser`: An asynchronous method that handles user creation.
- Duplicate check: Before creating a user, it queries the database to ensure the `username` or `email` isn’t already in use. This prevents unique-constraint violations at the database level and provides a friendlier error message.
- `bcrypt.hash(...)`: We use `bcrypt` to securely hash the user’s plain-text password. Never store plain-text passwords in the database. `SALT_ROUNDS` determines the computational cost of hashing; 10 is a good default for most applications. You’ll need to install bcrypt: `npm install bcrypt && npm install --save-dev @types/bcrypt`.
- `this.userRepository.create(...)`: Creates a new `User` instance but doesn’t save it to the database yet.
- `this.userRepository.save(newUser)`: Persists the `newUser` instance. If the entity has an `id`, it performs an update; otherwise, it performs an insert.
- `Omit<User, 'passwordHash'>`: This TypeScript utility type ensures the returned `User` object explicitly excludes the `passwordHash` property, adhering to security best practices.
- Error handling and logging: `try-catch` blocks log database errors and re-throw them so our centralized error handler can catch them.
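To see the two security practices from this section in isolation — salted password hashing and stripping the hash before returning — here is a dependency-free sketch. It uses Node’s built-in `crypto.scryptSync` purely so the example runs without extra packages; the service itself uses bcrypt as shown above:

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from 'node:crypto';

// Hash a plain-text password with a random salt; store "salt:hash"
function hashPassword(plain: string): string {
  const salt = randomBytes(16).toString('hex');
  const hash = scryptSync(plain, salt, 64).toString('hex');
  return `${salt}:${hash}`;
}

// Recompute the hash with the stored salt and compare in constant time
function verifyPassword(plain: string, stored: string): boolean {
  const [salt, hash] = stored.split(':');
  const candidate = scryptSync(plain, salt, 64).toString('hex');
  return timingSafeEqual(Buffer.from(hash), Buffer.from(candidate));
}

// Strip the hash before returning the record, mirroring Omit<User, 'passwordHash'>
const saved = { id: '42', username: 'johndoe', passwordHash: hashPassword('s3cret') };
const { passwordHash: _, ...publicUser } = saved;
console.log(publicUser); // { id: '42', username: 'johndoe' }
```

The salt guarantees that two users with the same password get different hashes; the constant-time comparison avoids leaking information through timing.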
Next, create the UserController to handle incoming HTTP requests and delegate to the UserService.
// src/modules/user/user.controller.ts
import { FastifyReply, FastifyRequest } from 'fastify';
import { StatusCodes } from 'http-status-codes';
import { UserService } from './user.service';
import { logger } from '../../utils/logger';
import { z } from 'zod'; // For input validation

// Define the schema for the user creation request body
const createUserSchema = z.object({
  username: z.string().min(3).max(50),
  email: z.string().email(),
  password: z.string().min(8).max(100), // Minimum 8 characters for password
});

export class UserController {
  private userService: UserService;

  constructor() {
    this.userService = new UserService();
  }

  /**
   * Handles the creation of a new user.
   * @param request - Fastify request object.
   * @param reply - Fastify reply object.
   */
  public async createUser(request: FastifyRequest, reply: FastifyReply): Promise<void> {
    try {
      // Validate request body using Zod
      const userData = createUserSchema.parse(request.body);

      // Call the service to create the user
      const newUser = await this.userService.createUser({
        username: userData.username,
        email: userData.email,
        passwordPlainText: userData.password,
      });

      // Respond with the created user (excluding password hash)
      reply.status(StatusCodes.CREATED).send({
        message: 'User created successfully',
        user: newUser,
      });
    } catch (error: any) {
      logger.error('Error in UserController.createUser:', error);

      // Handle Zod validation errors and known conflicts here;
      // everything else falls through to the global error handler
      if (error instanceof z.ZodError) {
        reply.status(StatusCodes.BAD_REQUEST).send({
          statusCode: StatusCodes.BAD_REQUEST,
          error: 'Bad Request',
          message: 'Validation failed',
          details: error.errors,
        });
      } else if (error.message.includes('Username already taken') || error.message.includes('Email already registered')) {
        reply.status(StatusCodes.CONFLICT).send({
          statusCode: StatusCodes.CONFLICT,
          error: 'Conflict',
          message: error.message,
        });
      } else {
        // Let the global error handler manage other errors
        throw error;
      }
    }
  }
}
Explanation:
- `createUserSchema`: Uses `zod` for robust input validation, ensuring incoming request bodies conform to our expected structure and constraints (e.g., minimum password length, valid email format).
- `this.userService.createUser(...)`: Delegates the actual user-creation logic to the `UserService`.
- Response: Sends a `201 Created` status code and the newly created user object (without the password hash).
- Error handling: Catches errors from the `UserService` or Zod validation. It returns specific HTTP status codes and messages for common issues such as validation failures (`400 Bad Request`) or conflicts (`409 Conflict`) when a username/email already exists. Other errors are re-thrown to be handled by our global error handler.
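The branching in the controller’s `catch` block boils down to a mapping from error type to HTTP status. Sketched as a standalone function (the `statusForError` name is mine, for illustration only):

```typescript
// Map service/validation errors to the HTTP status the controller responds with
function statusForError(err: { name?: string; message: string }): number {
  if (err.name === 'ZodError') return 400;                    // validation failure
  if (err.message.includes('already taken') ||
      err.message.includes('already registered')) return 409; // uniqueness conflict
  return 500;                                                 // anything else: global handler
}

console.log(statusForError({ message: 'Email already registered.' })); // 409
```

Keeping the mapping explicit makes it easy to extend (e.g., 401 for auth errors) without touching the happy path.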
Finally, define the UserRoutes to expose the createUser method via an HTTP endpoint.
// src/modules/user/user.routes.ts
import { FastifyInstance } from 'fastify';
import { UserController } from './user.controller';

export async function userRoutes(fastify: FastifyInstance) {
  const userController = new UserController();

  // Route to create a new user
  fastify.post('/', userController.createUser.bind(userController)); // .bind(userController) ensures the 'this' context
}
Explanation:
- `fastify.post('/', ...)`: Registers a POST endpoint at the root of the `/api/v1/users` prefix (as configured in `src/routes.ts`).
- `userController.createUser.bind(userController)`: When a class method is passed as a callback, its `this` context can be lost. `bind(userController)` ensures that `this` inside `createUser` correctly refers to the `userController` instance.
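The `this` pitfall that `.bind()` guards against is easy to reproduce without Fastify. A minimal sketch:

```typescript
class Counter {
  private count = 0;
  increment(): number {
    return ++this.count; // relies on `this` being the Counter instance
  }
}

const counter = new Counter();

// Passing the method as a bare callback detaches it from its instance:
// `this` becomes undefined, so `++this.count` throws a TypeError
const detached = counter.increment;
let failed = false;
try {
  detached();
} catch {
  failed = true;
}

// Binding restores the instance as `this`, exactly like
// userController.createUser.bind(userController) in the route definition
const bound = counter.increment.bind(counter);
bound(); // returns 1
bound(); // returns 2
```

An alternative to `.bind()` is defining the handler as an arrow-function property, which captures `this` lexically; either approach works with Fastify’s route callbacks.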
3.10. Test User Creation Endpoint
Now, let’s test our new API endpoint to create a user and verify data persistence.
a) Testing This Component
1. Ensure your Fastify application is running:

   ```bash
   npm run dev
   ```

2. Send a POST request to create a user. You can use `curl`, Postman, or Insomnia. Example using `curl`:

   ```bash
   curl -X POST \
     -H "Content-Type: application/json" \
     -d '{
       "username": "johndoe",
       "email": "john.doe@example.com",
       "password": "SecurePassword123!"
     }' \
     http://localhost:3000/api/v1/users
   ```

   Expected successful response (status `201 Created`):

   ```json
   {
     "message": "User created successfully",
     "user": {
       "id": "a1b2c3d4-e5f6-7890-1234-567890abcdef",
       "username": "johndoe",
       "email": "john.doe@example.com",
       "createdAt": "2026-01-08T12:00:00.000Z",
       "updatedAt": "2026-01-08T12:00:00.000Z"
     }
   }
   ```

   (The `id`, `createdAt`, and `updatedAt` will be actual values from your database.)

3. Verify the user in the database. Connect to your PostgreSQL database (e.g., via `psql` or a GUI tool) and query the `users` table:

   ```sql
   SELECT id, username, email, "createdAt", "updatedAt" FROM users;
   ```

   You should see the newly created user's data. Note that `passwordHash` is not included by default in a `SELECT *` because of `select: false` in the entity; you'd need to select it explicitly if you wanted to see it:

   ```sql
   SELECT "passwordHash" FROM users WHERE username = 'johndoe';
   ```

   You should see the bcrypt-hashed password.

4. Test error cases:

   - Missing fields: Send a request without `username`, `email`, or `password`. You should get a `400 Bad Request` with Zod validation details.

     ```bash
     curl -X POST \
       -H "Content-Type: application/json" \
       -d '{ "username": "testuser", "email": "invalid" }' \
       http://localhost:3000/api/v1/users
     ```

   - Duplicate username/email: Try to create another user with `johndoe` as the username or `john.doe@example.com` as the email. You should get a `409 Conflict` error.

     ```bash
     curl -X POST \
       -H "Content-Type: application/json" \
       -d '{
         "username": "johndoe",
         "email": "another@example.com",
         "password": "AnotherSecurePassword123!"
       }' \
       http://localhost:3000/api/v1/users
     ```

   This confirms our validation and conflict handling are working as expected.
Production Considerations
Integrating a database requires careful consideration for production environments.
- Error Handling: Ensure all database operations are wrapped in `try-catch` blocks. Log specific database errors with sufficient detail (but avoid logging sensitive data). Our `initializeDatabase` function already includes a process exit on connection failure, preventing a half-functional service.
- Performance:
  - Connection Pooling: TypeORM manages a connection pool by default, which is efficient. Avoid creating new `DataSource` instances per request.
  - Indexing: For frequently queried columns (like `username` and `email` for lookups), database indexes are crucial. TypeORM can define indexes on entities (e.g., `@Index({ unique: true })`). We've implicitly created unique indexes with `unique: true` on columns.
  - N+1 Problem: Be mindful of the N+1 query problem, especially with relations. TypeORM provides eager/lazy loading and `leftJoinAndSelect` to optimize queries.
- Security:
  - Environment Variables: Never hardcode database credentials. Use environment variables. In production, leverage services like AWS Secrets Manager or HashiCorp Vault.
  - Password Hashing: Always hash passwords with strong, slow hashing algorithms like bcrypt (which we are using). Never store plain-text passwords.
  - SQL Injection: TypeORM's query builder and repository methods inherently prevent SQL injection by using parameterized queries. Avoid raw SQL queries unless absolutely necessary, and ensure proper sanitization when you do.
  - `synchronize: false`: As emphasized, `synchronize: true` is for development only. Always use migrations for schema changes in production.
  - `select: false` for sensitive fields: Use this TypeORM feature to prevent sensitive data (like `passwordHash`) from being accidentally returned in queries.
- Logging and Monitoring: Log database connection status, query execution times (if slow), and all database errors. Integrate with a monitoring system to alert on connection issues or high error rates. Our `logging: ['query', 'error']` in `AppDataSource` is a good start.
- Backups: Establish a robust database backup and recovery strategy.
- Scalability: As your application grows, consider database scaling strategies like read replicas for read-heavy workloads or sharding for very large datasets.
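To make the password-hashing point concrete, here is a minimal sketch of salted, slow hashing using Node's built-in `crypto.scrypt` as a stand-in. The chapter itself uses bcrypt; the key length and salt size below are illustrative choices, not tuned recommendations:

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from 'crypto';

// Hash a password with a fresh random salt; store "salt:hash" together
// so the salt is available at verification time.
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString('hex');
  const hash = scryptSync(password, salt, 64).toString('hex');
  return `${salt}:${hash}`;
}

// Recompute the hash with the stored salt and compare in constant time
// to avoid leaking information through timing differences.
function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(':');
  const candidate = scryptSync(password, salt, 64);
  return timingSafeEqual(candidate, Buffer.from(hash, 'hex'));
}

const stored = hashPassword('SecurePassword123!');
console.log(verifyPassword('SecurePassword123!', stored)); // true
console.log(verifyPassword('wrong-password', stored));     // false
```

The same properties — per-user random salt, deliberately slow derivation, constant-time comparison — are what bcrypt gives us out of the box in `UserService`.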
Code Review Checkpoint
At this point, you’ve significantly enhanced your application’s capabilities by adding data persistence.
Summary of what was built:
- A local PostgreSQL database running in Docker.
- TypeORM configured to connect to PostgreSQL.
- A `User` entity defined with essential fields and TypeORM decorators.
- A migration system set up with `ormconfig.ts` and `package.json` scripts.
- The initial `users` table created via a TypeORM migration.
- The Fastify application now connects to the database on startup and includes a database health check.
- A `UserService` to encapsulate user-related database logic, including password hashing and duplicate checks.
- A `UserController` and corresponding routes to expose a `POST /api/v1/users` endpoint for creating new users, with `zod` validation.
Files created/modified:
- `docker-compose.yml`
- `.env` (updated)
- `package.json` (updated with TypeORM scripts and new dependencies)
- `src/config/database.ts`
- `src/database/entities/User.ts`
- `src/database/migrations/<timestamp>-InitialUserTable.ts`
- `ormconfig.ts`
- `src/app.ts` (updated for DB connection and health check)
- `src/routes.ts` (updated to include user routes)
- `src/modules/user/user.service.ts`
- `src/modules/user/user.controller.ts`
- `src/modules/user/user.routes.ts`
This chapter provides a solid foundation for all future data-driven features in our application.
Common Issues & Solutions
PostgreSQL container not starting or accessible:
- Issue: `docker compose up -d` fails, or `docker ps` shows the `db` container in `Exited` state.
- Solution:
  - Check `docker compose logs db` for error messages. Common issues include port conflicts (another service already using 5432), incorrect environment variables, or insufficient disk space.
  - Ensure Docker Desktop (or your Docker daemon) is running.
  - Try `docker compose down --volumes` to stop and remove all associated containers and volumes, then `docker compose up -d` again.
TypeORM connection errors:
- Issue: "Error during database initialization: ConnectTimeoutError", "authentication failed for user 'myuser'", "database 'mydatabase' does not exist".
- Solution:
  - Verify that `DB_HOST`, `DB_PORT`, `DB_USER`, `DB_PASSWORD`, and `DB_NAME` in your `.env` file are correct and match the `docker-compose.yml` configuration.
  - Ensure the PostgreSQL container is running and healthy (`docker ps`).
  - Check your `src/config/database.ts` to ensure environment variables are being loaded correctly and `parseInt` is used for `DB_PORT`.
  - Confirm the `POSTGRES_DB` name in `docker-compose.yml` is correct.
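The environment-variable checks above can be sketched as a small, defensive config loader. This is a plain-TypeScript illustration, not our actual `src/config/database.ts`; the fallback defaults mirror the chapter's local Docker setup (`myuser`/`mydatabase`) and are assumptions:

```typescript
// Reads database settings from an env-like object, coercing DB_PORT to a
// number and failing loudly when it isn't numeric — a common source of
// TypeORM connection errors.
type Env = Record<string, string | undefined>;

interface DbConfig {
  host: string;
  port: number;
  user: string;
  password: string;
  name: string;
}

function loadDbConfig(env: Env = process.env): DbConfig {
  const port = parseInt(env.DB_PORT ?? '5432', 10);
  if (Number.isNaN(port)) {
    throw new Error(`DB_PORT is not a number: ${env.DB_PORT}`);
  }
  return {
    host: env.DB_HOST ?? 'localhost',
    port,
    user: env.DB_USER ?? 'myuser',
    password: env.DB_PASSWORD ?? '',
    name: env.DB_NAME ?? 'mydatabase',
  };
}

console.log(loadDbConfig({ DB_HOST: 'db', DB_PORT: '5433' }));
```

Failing fast on a malformed `DB_PORT` at startup is far easier to debug than a vague connection timeout later.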
Migration issues:
- Issue: `npm run typeorm:migration:generate` doesn't create a file, or `npm run typeorm:migration:run` fails.
- Solution:
  - Generate issue: Ensure your `User` entity is correctly defined and imported into `ormconfig.ts`. Check for TypeScript compilation errors.
  - Run issue: Check the error message. It might indicate SQL syntax errors in the migration file (if you edited it manually), or a connection issue to the database. Ensure the `uuid-ossp` extension is enabled if you're using UUIDs.
  - If you manually changed the database schema outside of migrations, TypeORM might get confused. In development, you might need to `docker compose down --volumes` and restart everything to get a clean slate.
`User` entity not found or `Cannot find module './database/entities/User'`:

- Issue: TypeScript or runtime errors related to importing the `User` entity.
- Solution:
  - Verify the file path in `src/config/database.ts` and `ormconfig.ts` is correct: `../database/entities/User` (relative to `config`) or `./src/database/entities/User` (relative to the project root for `ormconfig.ts`).
  - Ensure `User.ts` exists in the specified location.
  - Check for typos in the import statement or file name.
Testing & Verification
To ensure everything is correctly set up and working:
1. Start your Dockerized PostgreSQL database:

   ```bash
   docker compose up -d
   ```

   Verify it's running and healthy using `docker ps`.

2. Run database migrations:

   ```bash
   npm run typeorm:migration:run
   ```

   Ensure it reports that the `InitialUserTable` migration ran successfully. If it says `No migrations were found in the database`, it means the table was already created, which is fine.

3. Start your Fastify application in development mode:

   ```bash
   npm run dev
   ```

   Confirm you see "Database connection initialized successfully!" in the console.

4. Test the health check endpoint:

   ```bash
   curl http://localhost:3000/health
   ```

   Expected output: `{"status":"ok","database":"connected"}`.

5. Test the user creation endpoint:

   ```bash
   curl -X POST \
     -H "Content-Type: application/json" \
     -d '{
       "username": "testuser",
       "email": "test@example.com",
       "password": "Password123!"
     }' \
     http://localhost:3000/api/v1/users
   ```

   Expected output: a `201 Created` response with the new user's details (excluding the password hash).

6. Verify data persistence: Connect to your PostgreSQL database (e.g., using `psql`) and query the `users` table:

   ```bash
   docker exec -it myapp-postgres psql -U myuser -d mydatabase
   ```

   ```sql
   SELECT id, username, email FROM users;
   ```

   You should see the `testuser` entry.
Summary & Next Steps
Congratulations! You’ve successfully integrated PostgreSQL into your Node.js Fastify application, established a robust migration system with TypeORM, and implemented your first data-persisting API endpoint. This is a monumental step, as almost every real-world application relies on persistent storage.
In this chapter, we covered:
- Setting up a local PostgreSQL database using Docker Compose.
- Configuring TypeORM as our ORM, including its `DataSource` and CLI.
- Defining our first `User` entity and generating/running a migration to create the `users` table.
- Integrating the database connection into our Fastify application startup and adding a health check.
- Creating a `UserService` and `UserController` to handle user creation with password hashing and input validation.
With a solid database foundation in place, our application is now ready to manage user data. In the next chapter, Chapter 5: Authentication & Authorization: JWTs and Role-Based Access Control, we will build upon this foundation to implement secure user authentication using JSON Web Tokens (JWTs) and introduce role-based access control (RBAC) to protect our API endpoints. Get ready to secure your application!