Swift Code Chronicles

Fastify Best Practices #3: Database Access

Published on June 22, 2025
Updated on June 26, 2025
83 min read
TypeScript

In any modern web application, the database layer is the veritable “engine room.” Its performance, stability, and maintainability directly determine the entire application’s ceiling. In today’s world where Node.js and TypeScript reign supreme, we have a plethora of excellent database tools at our disposal, such as the veteran TypeORM, the trailblazing Prisma, and the type-safety purist, Kysely.

The choice of tool not only affects development efficiency but also has a profound impact on the application’s performance in a production environment, especially concerning troubleshooting and performance optimization. This article will conduct a deep dive into these three, and based on a standard that is often overlooked yet critically important—observability—I will explain why I ultimately recommend Kysely as the top choice for Fastify projects.

A Side-by-Side Comparison of Three Major Tools

First, let’s compare the core features and differences of TypeORM, Prisma, and Kysely across five dimensions.

Data Model Definition

  • TypeORM: Code-first. Defines data models using TypeScript classes and decorators (@Entity, @Column). The code is the single source of truth.
  • Prisma: Schema-first. Defines models in a dedicated .prisma file using its own Schema Definition Language (SDL). This file is the single source of truth.
  • Kysely: No model management. It does not manage data models directly; instead, it infers the database structure from TypeScript interfaces you provide, and derives precise type hints from them.

Type Safety

  • TypeORM: Good, but with loopholes. Basic CRUD operations are type-safe, but type inference can be lost with complex relationships, raw queries, or certain QueryBuilder method chains, often degrading to any.
  • Prisma: Excellent. Its client is auto-generated from your .prisma model, so every query you write, including field selections and nested relations, has precise, end-to-end type safety.
  • Kysely: Excellent. Type safety is derived directly from the query structure itself: the TypeScript type of the result is inferred from the fields you select and the tables you join, without ambiguity.

Database Migrations

  • TypeORM: Built-in. Can auto-generate migration files (migration:generate) from entity changes, though in complex projects the generated migrations often require manual review and adjustment, and can occasionally be buggy.
  • Prisma: Built-in and extremely powerful. The prisma migrate dev command provides a smooth development experience, reliably generating and applying SQL migrations from changes to the .prisma file. This is widely recognized as one of its core strengths.
  • Kysely: Not built-in. Migrations are managed with the officially provided kysely-ctl tool, which is feature-complete but requires extra configuration.

Learning Curve

  • TypeORM: Medium. Requires understanding many ORM concepts (entities, repositories, the Data Mapper and Active Record patterns, the N+1 problem, etc.), and the API surface is extensive.
  • Prisma: Low. The API is intuitive and well designed, and Prisma Studio (a GUI tool) makes browsing and manipulating data simple, significantly lowering the barrier to entry.
  • Kysely: Low if you know SQL, high if you don’t. Its API maps almost one-to-one onto SQL statements: if you’re familiar with SQL it feels natural and powerful; if not, you’ll need to learn SQL first.

Query Control & Performance

  • TypeORM: Medium. High-level abstractions can generate non-optimal SQL and are prone to the N+1 query problem, requiring developers to consciously use relations or the QueryBuilder for optimization.
  • Prisma: High. The API design avoids many common ORM pitfalls; for example, .include and .select give precise control over the query scope, and the generated SQL is usually predictable and efficient.
  • Kysely: Highest. It is just a type-safe wrapper around SQL: you can write any complex JOIN, subquery, CTE, or window function, so both the performance ceiling and floor are determined entirely by your SQL proficiency.

One Critical Consideration: Why I Ultimately Gave Up on Prisma

Looking at the comparison above, many would lean towards Prisma. It has a low learning curve, a powerful migration tool, and ultimate type safety, making it a paragon of a “modern ORM.”

However, in a production environment, a seemingly minor design choice can lead to enormous maintenance costs. Prisma (as of version 6.9.0) has such a problem: the disconnect between logging and asynchronous context.

When building maintainable backend services, observability is a core concept. We expect every HTTP request entering the system to be assigned a unique Trace ID. Throughout the entire lifecycle of this request—from entering the controller to calling services and querying the database—all generated logs should include this ID. This allows us to accurately trace all activities of a single request amidst a sea of logs, serving as a lifeline for quickly diagnosing and resolving online issues.

In Node.js, we typically use AsyncLocalStorage to propagate such context across asynchronous call chains. However, Prisma’s logging system runs into trouble here.
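
As a quick refresher, here is a minimal, self-contained sketch of request-scoped context with AsyncLocalStorage (the handleRequest wrapper is a hypothetical stand-in for a real Fastify handler):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';
import { randomUUID } from 'node:crypto';

const traceContext = new AsyncLocalStorage<{ traceId: string }>();

// Deep in the call chain, the trace ID is recovered without threading
// it through every function signature.
function logWithTrace(message: string): string {
  const traceId = traceContext.getStore()?.traceId ?? 'unknown';
  return `[TraceID: ${traceId}] ${message}`;
}

// Hypothetical request handler: each request runs inside its own store.
function handleRequest(): string {
  return traceContext.run({ traceId: randomUUID() }, () =>
    logWithTrace('SELECT * FROM user_tbl'),
  );
}
```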

// Prisma's method for subscribing to logs
const prisma = new PrismaClient({
  log: [{ emit: 'event', level: 'query' }],
});

prisma.$on('query', (e) => {
  // This callback function is triggered by Node.js's Event Emitter.
  // It runs in a separate asynchronous context.
  // Therefore, it cannot access the current HTTP request's AsyncLocalStorage.
  const traceId = getTraceIdFromContext(); // This will return undefined

  console.log(`[TraceID: ${traceId}] Query: ${e.query}`); // The TraceID will be undefined
  console.log('Params: ' + e.params);
  console.log('Duration: ' + e.duration + 'ms');
});

The $on method is based on Node.js’s EventEmitter, and its event callback is independent of the request’s asynchronous flow. This means that by the time Prisma finishes executing a SQL query and emits the query event, we have already lost the original request’s context. The inability to inject a trace ID into SQL logs is an unacceptable compromise for building a serious, observable production system.
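
The context loss can be reproduced with nothing but Node.js built-ins. Note that the key factor is not EventEmitter itself but where the emit originates: Prisma’s engine emits the query event from its own internal async chain, which is rooted outside the request. A simplified model:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';
import { EventEmitter } from 'node:events';

const als = new AsyncLocalStorage<string>();
const emitter = new EventEmitter();
const seen: Array<string | undefined> = [];

// Listener registered once at startup, like prisma.$on('query', ...).
emitter.on('query', () => {
  seen.push(als.getStore());
});

// Emitting synchronously from inside the request context still sees the store...
als.run('trace-123', () => emitter.emit('query'));

// ...but emitting from an execution context that does not descend from
// als.run (as Prisma's engine does) loses it.
emitter.emit('query'); // module scope: no store here

console.log(seen); // ['trace-123', undefined]
```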

For this reason, I am willing to forgo the many conveniences that Prisma brings.

Kysely: The Ultimate Toolkit for TypeScript and SQL Artisans

This naturally brings us to our protagonist: Kysely.

Kysely is not a full-fledged ORM like TypeORM or Prisma; it is a type-safe SQL query builder. This positioning perfectly solves all the aforementioned problems:

  1. Seamless Log Tracing: Kysely’s logging hook is a plain function invoked within the query’s own asynchronous context rather than an EventEmitter callback, so AsyncLocalStorage context propagates into your SQL logs.
  2. Ultimate Performance and Control: It is just a thin, type-safe layer of glue between you and your SQL. The code you write is almost exactly the SQL that gets executed, with no hidden performance overhead or magic. Performance is entirely dictated by your SQL skills.
  3. True Type Safety: Its type inference capabilities are astounding. The type of the return value is dynamically calculated based on your select, join, and other operations, rather than being a fixed entity model.
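
To illustrate the first point: assuming the PostgresDialect setup shown later in this article and a getTraceIdFromContext helper backed by AsyncLocalStorage (as in the Prisma example above), context-aware query logging with Kysely’s log option might look like this sketch:

```typescript
import { Kysely, PostgresDialect } from 'kysely';
import { Pool } from 'pg';
import type { Database } from './types'; // your table interfaces

// Hypothetical helper: reads the trace ID from AsyncLocalStorage.
declare function getTraceIdFromContext(): string | undefined;

const db = new Kysely<Database>({
  dialect: new PostgresDialect({ pool: new Pool({ /* connection options */ }) }),
  // The log function runs inside the query's own async context,
  // so the AsyncLocalStorage store is still visible here.
  log(event) {
    if (event.level === 'query') {
      const traceId = getTraceIdFromContext(); // resolves correctly
      console.log(`[TraceID: ${traceId}] Query: ${event.query.sql}`);
      console.log(`Duration: ${event.queryDurationMillis}ms`);
    }
  },
});
```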

Therefore, I firmly believe that for teams that pursue code quality, application performance, and long-term maintainability, Kysely is the superior choice.

Practical Guide: Integrating Kysely with Fastify

Next, let’s integrate Kysely into our project step-by-step.

1. Installation and Type Definition

First, install Kysely and the corresponding database driver (using PostgreSQL as an example). The pg driver ships without bundled TypeScript types, so add @types/pg as a dev dependency:

npm i kysely pg
npm i -D @types/pg

Then, create the type definition files for your database. This is the core of Kysely’s type magic.

// src/database/types/user.type.ts
import { ColumnType } from 'kysely';

// Define the interface for the `user_tbl` table
export interface UserTbl {
  id: ColumnType<string, string, never>;
  username: string;
  address: string;
  email: string;
  birthday: ColumnType<Date, Date | string, Date | string>;
  // This is a timestamptz in the DB, but we want it as a Date in our code
  createdAt: ColumnType<Date, Date | string | undefined, never>;
  updatedAt: ColumnType<Date | null, Date | string | null, Date>;
}

Let’s explain what ColumnType<Select, Insert, Update> does:

  • Select: The type of the column when reading from the database (e.g., createdAt is a Date).
  • Insert: The acceptable types when inserting data (e.g., birthday can be a Date object or an ISO string).
  • Update: The acceptable types when updating data (e.g., id and createdAt are typically not provided by the user during an update, hence never).
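
Kysely also ships helper types that apply these rules for you. A sketch of what they produce for a trimmed-down UserTbl:

```typescript
import type { ColumnType, Insertable, Selectable, Updateable } from 'kysely';

interface UserTbl {
  id: ColumnType<string, string, never>;
  createdAt: ColumnType<Date, Date | string | undefined, never>;
  username: string;
}

// Selectable<T>: what a SELECT returns.
// Here: { id: string; createdAt: Date; username: string }
type User = Selectable<UserTbl>;

// Insertable<T>: what an INSERT accepts; columns whose insert type
// includes undefined become optional.
// Here: { id: string; createdAt?: Date | string; username: string }
type NewUser = Insertable<UserTbl>;

// Updateable<T>: what an UPDATE accepts; `never` columns disappear.
// Here: { username?: string }
type UserUpdate = Updateable<UserTbl>;
```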

// src/database/types/index.ts
import { UserTbl } from './user.type';

// The structure of the entire database
export interface Database {
  userTbl: UserTbl;
}

// src/database/index.ts
import { Database } from './types';
import { CamelCasePlugin, Kysely, ParseJSONResultsPlugin, PostgresDialect } from 'kysely';
import { Pool } from 'pg'; // Pool comes from the pg driver, not from kysely

const dialect = new PostgresDialect({
  pool: new Pool({
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    port: Number(process.env.DB_PORT),
    max: 10,
  }),
});

// Create the Kysely instance
export const db = new Kysely<Database>({
  dialect,
  plugins: [
    new CamelCasePlugin(), // Automatically converts snake_case column names to camelCase
    new ParseJSONResultsPlugin(), // Automatically parses JSON strings
  ],
});

An example query:

const result = await db.selectFrom('userTbl').selectAll().where('id', '=', id).executeTakeFirstOrThrow();
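
The inferred result type tracks the query itself. For instance (assuming the db instance and Database interface above), selecting a subset of columns narrows the return type accordingly, as this sketch shows:

```typescript
import { db } from './database'; // the Kysely instance defined above

async function findUserSummary(email: string) {
  // Return type is inferred as { id: string; username: string } | undefined,
  // because only these two columns are selected.
  return db
    .selectFrom('userTbl')
    .select(['id', 'username'])
    .where('email', '=', email)
    .executeTakeFirst();
}
```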

2. Database Migrations with kysely-ctl

To manage database schema changes systematically, we need a migration tool.

npm i -D kysely-ctl dotenv-cli

Add the following commands to the scripts in package.json:

"scripts": {
  "db:init": "kysely-ctl init",
  "db:migrate:make": "kysely-ctl migrate make",
  "db:migrate:latest": "dotenv -e .env -- kysely-ctl migrate latest",
  "db:migrate:down": "dotenv -e .env -- kysely-ctl migrate down",
  "db:seed:make": "kysely-ctl seed make",
  "db:seed:run": "dotenv -e .env -- kysely-ctl seed run"
}

Note: I’m using dotenv-cli here to load environment variables and have simplified the command names to align with common conventions.

A. Initialization

Run npm run db:init. This will generate a .config/kysely.config.ts file. Let’s configure it:

// kysely.config.ts
import { PostgresDialect } from 'kysely';
import { defineConfig } from 'kysely-ctl';
import { Pool } from 'pg';

export default defineConfig({
  dialect: new PostgresDialect({
    pool: new Pool({
      database: process.env.DB_NAME,
      host: process.env.DB_HOST,
      user: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      port: Number(process.env.DB_PORT),
      max: 10,
    }),
  }),
  migrations: {
    migrationFolder: '.config/migrations',
  },
  seeds: {
    seedFolder: '.config/seeds',
  },
});

B. Creating a Migration File

Run npm run db:migrate:make -- -x=ts init_tbl. This will create a timestamped migration file in the .config/migrations directory.

Tip: Kysely<any> is necessary because a migration file is a snapshot of the database at a specific point in time. It should not depend on the Database interface, which may change in the future.

// <timestamp>_init_tbl.ts
import { Kysely, sql } from 'kysely';

export async function up(db: Kysely<any>): Promise<void> {
  await db.schema
    .createTable('user_tbl') // It's recommended to use snake_case for table and column names
    .addColumn('id', 'uuid', (col) => col.primaryKey())
    .addColumn('username', 'varchar(100)', (col) => col.notNull())
    .addColumn('address', 'varchar(255)', (col) => col.notNull())
    .addColumn('email', 'varchar(100)', (col) => col.notNull().unique())
    .addColumn('birthday', 'timestamptz', (col) => col.notNull())
    .addColumn('created_at', 'timestamptz', (col) => col.defaultTo(sql`now()`).notNull())
    .addColumn('updated_at', 'timestamptz')
    .execute();
}

export async function down(db: Kysely<any>): Promise<void> {
  await db.schema.dropTable('user_tbl').execute();
}

C. Running the Migration

Run npm run db:migrate:latest, and Kysely will create this table in your database.

3. Seeding Data with kysely-ctl

A. Creating a Seed File

Run npm run db:seed:make -- init_user to generate a seed file.

B. Writing the Seed Logic

// <timestamp>_init_user.ts
import type { Database } from '@/database/types'; // your DB types from step 1
import type { Kysely } from 'kysely';

export async function seed(db: Kysely<Database>): Promise<void> {
  const user1Id = '123e4567-e89b-12d3-a456-426614174000';

  await db
    .insertInto('userTbl')
    .values({
      id: user1Id,
      username: 'Shukang',
      email: 'shukang@example.com',
      address: '123 Main St',
      birthday: new Date('1990-01-01T00:00:00Z'),
    })
    .execute();
}
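
One optional refinement, assuming PostgreSQL: seeds are often re-run, so the same insert inside the seed function can be made idempotent with Kysely’s onConflict builder, as in this sketch:

```typescript
await db
  .insertInto('userTbl')
  .values({
    id: user1Id,
    username: 'Shukang',
    email: 'shukang@example.com',
    address: '123 Main St',
    birthday: new Date('1990-01-01T00:00:00Z'),
  })
  // Skip the row if a user with this id already exists.
  .onConflict((oc) => oc.column('id').doNothing())
  .execute();
```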

C. Running the Seed

Run npm run db:seed:run, and the initial data will be successfully inserted into the database.

CRUD Examples

For more complete Create, Read, Update, and Delete (CRUD) examples, I have organized them into an open-source project; you are welcome to explore and discuss it:

fastify-best-practice on GitHub

Conclusion

Choosing a database tool is a strategic technical decision. Prisma offers an unparalleled developer experience, making it ideal for quickly launching new projects or internal systems. However, as a project matures and faces the complexities of a production environment, observability and low-level control become paramount.

Kysely is built for precisely these scenarios. It trusts and empowers developers, allowing you to write the most efficient and controllable SQL queries while enjoying complete type safety. It requires you to know SQL better, but that in itself is a valuable investment. By embracing Kysely, you are not just choosing a tool; you are choosing a more rigorous, efficient, and masterful backend development philosophy.
