Database Migration Strategies for Node.js Applications
Comprehensive guide to database migration strategies for Node.js applications with PostgreSQL, covering tools, patterns, and zero-downtime techniques.
Overview
Database migrations are versioned, repeatable scripts that evolve your database schema alongside your application code. If you have ever SSHed into a production server and run an ALTER TABLE by hand, you already know why migrations exist -- they replace that terrifying ritual with something auditable, reversible, and automatable. Every Node.js application that touches a relational database should have a migration strategy from day one, and this guide walks through the tools, patterns, and operational discipline required to get it right with PostgreSQL.
Prerequisites
- Node.js v16+ installed
- PostgreSQL 13+ running locally or accessible remotely
- Basic familiarity with SQL DDL (CREATE TABLE, ALTER TABLE, DROP)
- A Node.js project with npm initialized
- Understanding of environment variables for database connection strings
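The connection string is a standard URL, so you can sanity-check it with Node's built-in URL class before handing it to a driver (placeholder credentials, not a real database):

```javascript
// DATABASE_URL follows the postgres://user:pass@host:port/db form.
// Node's global URL class takes it apart without any extra dependency.
var connectionString = "postgres://app:s3cret@localhost:5432/appdb";
var parsed = new URL(connectionString);

console.log(parsed.hostname); // "localhost"
console.log(parsed.port);     // "5432"
console.log(parsed.pathname); // "/appdb" -- database name, minus the slash
```

This is handy in startup checks: fail fast with a clear message when DATABASE_URL is malformed, rather than letting the driver produce a cryptic error.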
Why Migrations Matter
Manual schema changes are a liability. I have watched a senior engineer drop a column in production because they "remembered" it was unused. It was not. The application crashed, and the column was gone. No rollback script, no audit trail, no way to know what the schema looked like five minutes ago.
Migrations solve this by treating your database schema as code:
- Version controlled -- every change is a file in your repository
- Ordered -- migrations run in sequence, so dependencies are respected
- Reversible -- each migration has a rollback counterpart
- Auditable -- a migrations table tracks exactly which changes have been applied
- Reproducible -- spin up a fresh database in seconds by running all migrations
Without migrations, your production database becomes a snowflake -- a unique artifact that nobody can reconstruct. With migrations, your schema is deterministic.
Migration Tools Comparison
The Node.js ecosystem has several migration tools. Here is an honest assessment of each.
node-pg-migrate
npm install node-pg-migrate pg
This is my go-to for PostgreSQL projects. It is purpose-built for Postgres, generates timestamped migration files, and has excellent support for PostgreSQL-specific features like partial indexes, triggers, and extensions.
// migrations/1706900000000_create-users-table.js
exports.up = function(pgm) {
pgm.createTable("users", {
id: "id",
email: { type: "varchar(255)", notNull: true, unique: true },
password_hash: { type: "varchar(255)", notNull: true },
display_name: { type: "varchar(100)" },
created_at: {
type: "timestamp",
notNull: true,
default: pgm.func("current_timestamp")
},
updated_at: {
type: "timestamp",
notNull: true,
default: pgm.func("current_timestamp")
}
});
pgm.createIndex("users", "email");
};
exports.down = function(pgm) {
pgm.dropTable("users");
};
Strengths: Lightweight, Postgres-native, programmatic API, no ORM dependency. Weaknesses: PostgreSQL only, smaller community than Knex.
Knex Migrations
npm install knex pg
Knex is a query builder with built-in migration support. It works with multiple databases, which makes it attractive for teams that might switch databases later (they usually do not).
// migrations/20240203120000_create_users.js
exports.up = function(knex) {
return knex.schema.createTable("users", function(table) {
table.increments("id").primary();
table.string("email", 255).notNullable().unique();
table.string("password_hash", 255).notNullable();
table.string("display_name", 100);
table.timestamps(true, true);
});
};
exports.down = function(knex) {
return knex.schema.dropTable("users");
};
Strengths: Multi-database support, large community, doubles as query builder. Weaknesses: Abstracts away PostgreSQL-specific features, heavier dependency.
Sequelize Migrations
npm install sequelize sequelize-cli pg pg-hstore
Sequelize bundles migrations with a full ORM. If you are already using Sequelize for your models, its migrations integrate naturally. If you are not using Sequelize, do not adopt it just for migrations.
// migrations/20240203120000-create-users.js
module.exports = {
up: function(queryInterface, Sequelize) {
return queryInterface.createTable("users", {
id: { type: Sequelize.INTEGER, primaryKey: true, autoIncrement: true },
email: { type: Sequelize.STRING(255), allowNull: false, unique: true },
password_hash: { type: Sequelize.STRING(255), allowNull: false },
created_at: { type: Sequelize.DATE, defaultValue: Sequelize.fn("NOW") },
updated_at: { type: Sequelize.DATE, defaultValue: Sequelize.fn("NOW") }
});
},
down: function(queryInterface) {
return queryInterface.dropTable("users");
}
};
Strengths: Tight ORM integration, model-aware migrations. Weaknesses: Heavy, couples your migration tool to your ORM, verbose.
Prisma Migrate
npm install prisma @prisma/client
Prisma takes a declarative approach. You define your schema in a .prisma file and Prisma generates the migration SQL for you.
model User {
id Int @id @default(autoincrement())
email String @unique @db.VarChar(255)
passwordHash String @map("password_hash") @db.VarChar(255)
displayName String? @map("display_name") @db.VarChar(100)
createdAt DateTime @default(now()) @map("created_at")
updatedAt DateTime @updatedAt @map("updated_at")
@@map("users")
}
Strengths: Declarative schema, auto-generated SQL, excellent developer experience. Weaknesses: Lock-in to Prisma ecosystem, less control over generated SQL, ESM-leaning.
My Recommendation
For PostgreSQL projects without an ORM, use node-pg-migrate. It gives you full control, stays out of your way, and produces readable migration files. If you are already committed to Knex as your query builder, use Knex migrations. Do not adopt Sequelize or Prisma just for migrations.
Writing Forward and Rollback Migrations
Every migration needs two functions: up (apply the change) and down (reverse it). The down function is not optional -- it is insurance.
exports.up = function(pgm) {
pgm.addColumns("users", {
bio: { type: "text" },
avatar_url: { type: "varchar(500)" },
location: { type: "varchar(100)" }
});
};
exports.down = function(pgm) {
pgm.dropColumns("users", ["bio", "avatar_url", "location"]);
};
Some migrations are not cleanly reversible. Document this in the down function.
Rules for Writing Migrations
- One concern per migration. Do not create a table and alter another in the same file.
- Always write down. Even if it is imperfect, it is better than nothing.
- Never modify a migration that has been applied. Create a new one instead.
- Use descriptive names. add-users-bio-column beats update-users.
- Test both directions. Run up, then down, then up again.
Handling Schema Changes in Production Without Downtime
The fundamental challenge: your application is running while you change the database underneath it.
The Golden Rule
Your database schema must be backward-compatible with the currently running application code at all times.
- Adding a nullable column is safe
- Adding a column with a default is safe
- Adding a NOT NULL column without a default is not safe
- Dropping a column is not safe until all code stops referencing it
- Renaming a column is not safe (it is effectively a drop + add)
Zero-Downtime: Expand-Contract
Split dangerous migrations into safe steps across releases.
Phase 1: Expand -- Add new column, copy data, create sync trigger. Phase 2: Contract -- Drop old column after all code migrated.
// Phase 1: Expand
exports.up = function(pgm) {
pgm.addColumn("users", { display_name: { type: "varchar(100)" } });
pgm.sql("UPDATE users SET display_name = name");
pgm.sql(
"CREATE OR REPLACE FUNCTION sync_name() RETURNS TRIGGER AS $$ BEGIN " +
"IF NEW.name IS DISTINCT FROM OLD.name THEN NEW.display_name = NEW.name; END IF; " +
"IF NEW.display_name IS DISTINCT FROM OLD.display_name THEN NEW.name = NEW.display_name; END IF; " +
"RETURN NEW; END; $$ LANGUAGE plpgsql"
);
pgm.sql("CREATE TRIGGER sync_trigger BEFORE UPDATE ON users FOR EACH ROW EXECUTE FUNCTION sync_name()");
};
// Phase 2: Contract
exports.up = function(pgm) {
pgm.sql("DROP TRIGGER IF EXISTS sync_trigger ON users");
pgm.sql("DROP FUNCTION IF EXISTS sync_name()");
pgm.dropColumn("users", "name");
};
Data Migrations vs Schema Migrations
Schema migrations change structure (tables, columns, indexes). Data migrations change content. Keep them separate.
// Schema migration
exports.up = function(pgm) {
pgm.addColumn("users", { role: { type: "varchar(20)", default: "reader" } });
};
exports.down = function(pgm) { pgm.dropColumn("users", "role"); };
// Data migration (separate file)
exports.up = function(pgm) {
pgm.sql("UPDATE users SET role = 'admin' WHERE email = '[email protected]'");
};
For large tables, batch the updates to avoid long-running locks. You can loop in PL/pgSQL with pg_sleep between batches, or drive the batches from Node.js as in the following script.
// data-migration-batched.js
var pg = require("pg");
var pool = new pg.Pool({
connectionString: process.env.DATABASE_URL
});
function batchUpdate(batchSize, delayMs) {
var client;
var totalUpdated = 0;
return pool.connect()
.then(function(c) {
client = c;
// Release the client even if a batch fails, so the pool is not starved.
return processNextBatch().catch(function(err) {
client.release();
throw err;
});
});
function processNextBatch() {
return client.query(
"UPDATE users SET display_name = INITCAP(name) " +
"WHERE display_name IS NULL AND id IN (" +
"SELECT id FROM users WHERE display_name IS NULL LIMIT $1)",
[batchSize]
).then(function(result) {
totalUpdated += result.rowCount;
console.log("Updated " + totalUpdated + " rows so far");
if (result.rowCount === 0) {
client.release();
console.log("Done. Total: " + totalUpdated);
return totalUpdated;
}
return new Promise(function(resolve) {
setTimeout(function() { resolve(processNextBatch()); }, delayMs);
});
});
}
}
batchUpdate(1000, 100)
.then(function() { return pool.end(); })
.catch(function(err) {
console.error("Migration failed:", err);
process.exit(1);
});
This script processes 1000 rows at a time with a 100ms pause between batches. On a table with 10 million rows, this takes longer than a single UPDATE, but it does not lock the table for minutes at a time. Your application keeps running.
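Before running a batched migration, it is worth estimating how long it will take. A quick sketch, assuming roughly 50ms per batch (measure your own figure on a staging copy):

```javascript
// Back-of-the-envelope runtime for a batched data migration.
function estimateBatchedRun(totalRows, batchSize, delayMs, msPerBatch) {
  var batches = Math.ceil(totalRows / batchSize);
  // Every batch does work; the delay happens between batches.
  var totalMs = batches * msPerBatch + (batches - 1) * delayMs;
  return { batches: batches, minutes: totalMs / 60000 };
}

// 10 million rows, 1000-row batches at ~50ms each, 100ms pause between:
var est = estimateBatchedRun(10000000, 1000, 100, 50);
console.log(est.batches + " batches, ~" + Math.round(est.minutes) + " minutes");
```

If the estimate comes out in hours rather than minutes, consider a larger batch size or a shorter pause; the trade-off is longer individual row locks per batch.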
CI/CD Pipeline Integration
Migrations should run automatically as part of your deployment pipeline. Never rely on a human to remember to run migrations.
GitHub Actions Example
# .github/workflows/deploy.yml
name: Deploy with Migrations
on:
push:
branches: [main]
jobs:
test:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:15
env:
POSTGRES_DB: testdb
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpass
ports:
- 5432:5432
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- run: npm ci
- name: Run migrations
env:
DATABASE_URL: postgres://testuser:testpass@localhost:5432/testdb
run: npx node-pg-migrate up
- name: Run tests
env:
DATABASE_URL: postgres://testuser:testpass@localhost:5432/testdb
run: npm test
deploy:
needs: test
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: 20
- run: npm ci --production
- name: Run production migrations
env:
DATABASE_URL: ${{ secrets.DATABASE_URL }}
run: npx node-pg-migrate up
- name: Deploy application
run: ./scripts/deploy.sh
Pre-Deploy Migration Script
I wrap migration execution in a script that adds safety checks.
// scripts/migrate.js
var pg = require("pg");
var runner = require("node-pg-migrate").default;
var path = require("path");
var databaseUrl = process.env.DATABASE_URL;
if (!databaseUrl) {
console.error("DATABASE_URL is required");
process.exit(1);
}
var direction = process.argv[2] || "up";
var count = parseInt(process.argv[3], 10) || undefined;
console.log("Running migrations: direction=" + direction + ", count=" + (count || "all"));
console.log("Database: " + databaseUrl.replace(/\/\/.*@/, "//***@"));
runner({
databaseUrl: databaseUrl,
dir: path.resolve(__dirname, "..", "migrations"),
direction: direction,
count: count,
migrationsTable: "pgmigrations",
log: function(msg) { console.log(" " + msg); }
}).then(function(migrations) {
console.log("Applied " + migrations.length + " migration(s)");
migrations.forEach(function(m) {
console.log(" - " + m.name);
});
process.exit(0);
}).catch(function(err) {
console.error("Migration failed:", err.message);
process.exit(1);
});
Migration Order in Deployment
The deployment sequence matters:
- Run migrations (schema changes applied)
- Deploy new application code (code that uses new schema)
- Verify application health
- If health check fails, roll back application code AND migrations
This order works because migrations should always be backward-compatible (see the Golden Rule above). The old application code continues to work after the migration runs, giving you a window to deploy the new code.
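The sequence above can be sketched as a tiny orchestrator that unwinds completed steps when a later one fails (the step objects are hypothetical, not a real deploy tool):

```javascript
// Run deploy steps in order; each step is { run, rollback }, both returning
// Promises. If a step fails, roll back the completed steps newest-first.
function deploy(steps) {
  var done = [];
  var chain = Promise.resolve();
  steps.forEach(function(step) {
    chain = chain.then(function() {
      return step.run().then(function() { done.push(step); });
    });
  });
  return chain.catch(function(err) {
    // Unwind completed steps in reverse order, then rethrow the failure.
    return done.reverse().reduce(function(p, step) {
      return p.then(function() { return step.rollback(); });
    }, Promise.resolve()).then(function() { throw err; });
  });
}
```

With steps for "run migrations", "deploy code", and "health check", a failed health check rolls back the code deploy and then the migrations, exactly the order described above.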
Migration Testing Strategies
Migrations are code. Test them.
Round-Trip Testing
The simplest and most important test: run up, then down, then up again. If this fails, your migration has a bug.
// test/migrations.test.js
var pg = require("pg");
var runner = require("node-pg-migrate").default;
var path = require("path");
var TEST_DB = "postgres://localhost:5432/migration_test";
function runMigrations(direction, count) {
return runner({
databaseUrl: TEST_DB,
dir: path.resolve(__dirname, "..", "migrations"),
direction: direction,
count: count,
migrationsTable: "pgmigrations",
log: function() {}
});
}
describe("migrations", function() {
it("should round-trip cleanly", function() {
return runMigrations("up")
.then(function() { return runMigrations("down"); })
.then(function() { return runMigrations("up"); })
.then(function(result) {
console.log("Round-trip succeeded: " + result.length + " migrations");
});
});
});
Schema Snapshot Testing
Capture the schema after migrations and compare it to a known-good baseline. This catches drift.
// test/schema-snapshot.test.js
var pg = require("pg");
var fs = require("fs");
var path = require("path");
var pool = new pg.Pool({
connectionString: "postgres://localhost:5432/migration_test"
});
function getSchema() {
return pool.query(
"SELECT table_name, column_name, data_type, is_nullable, column_default " +
"FROM information_schema.columns " +
"WHERE table_schema = 'public' " +
"ORDER BY table_name, ordinal_position"
).then(function(result) {
return result.rows;
});
}
describe("schema snapshot", function() {
it("should match baseline", function() {
var snapshotPath = path.resolve(__dirname, "schema-snapshot.json");
return getSchema().then(function(schema) {
if (!fs.existsSync(snapshotPath)) {
fs.writeFileSync(snapshotPath, JSON.stringify(schema, null, 2));
console.log("Snapshot created. Run tests again to verify.");
return;
}
var baseline = JSON.parse(fs.readFileSync(snapshotPath, "utf8"));
if (JSON.stringify(schema) !== JSON.stringify(baseline)) {
throw new Error("Schema drift detected. Update snapshot if intentional.");
}
});
});
});
Testing Against Production Clone
The most thorough approach: clone your production database, run migrations against the clone, and verify the result. This catches issues that only appear with real data -- constraint violations, data type mismatches, unexpected nulls.
# Clone production to a test database
pg_dump --format=custom --no-owner --file=prod_dump.dump production_db
createdb migration_staging
pg_restore --no-owner -d migration_staging prod_dump.dump
# Run migrations against the clone
DATABASE_URL=postgres://localhost/migration_staging npx node-pg-migrate up
# Verify
psql migration_staging -c "SELECT count(*) FROM users;"
# Clean up
dropdb migration_staging
I run this before every production deployment for migrations that touch existing data.
Migration Locking and Concurrent Deployment Safety
When multiple deployment instances try to run migrations simultaneously, you get race conditions. Two processes might both see an unapplied migration and try to run it at the same time.
Advisory Locks
node-pg-migrate uses PostgreSQL advisory locks by default. This prevents concurrent migration runs.
-- What node-pg-migrate does internally:
SELECT pg_advisory_lock(7241865325823964);
-- Run migrations...
SELECT pg_advisory_unlock(7241865325823964);
Advisory locks are session-level: if your migration process crashes, the lock is released automatically when its connection closes, so a dead deploy cannot block future migration runs.
Custom Locking for Knex
Knex does not use advisory locks by default. You can add them yourself.
// scripts/migrate-with-lock.js
var knex = require("knex")(require("../knexfile"));
var LOCK_ID = 123456789;
function acquireLock() {
return knex.raw("SELECT pg_try_advisory_lock(?)", [LOCK_ID])
.then(function(result) {
return result.rows[0].pg_try_advisory_lock;
});
}
function releaseLock() {
return knex.raw("SELECT pg_advisory_unlock(?)", [LOCK_ID]);
}
function runWithLock() {
return acquireLock().then(function(acquired) {
if (!acquired) {
console.log("Another migration is running. Waiting...");
return new Promise(function(resolve) {
setTimeout(function() { resolve(runWithLock()); }, 2000);
});
}
console.log("Lock acquired. Running migrations...");
return knex.migrate.latest()
.then(function(result) {
console.log("Applied:", result);
return releaseLock();
})
.catch(function(err) {
return releaseLock().then(function() { throw err; });
});
});
}
runWithLock()
.then(function() { process.exit(0); })
.catch(function(err) {
console.error("Migration failed:", err);
process.exit(1);
});
Avoiding Table-Level Lock Contention
Some DDL statements acquire ACCESS EXCLUSIVE locks that block all reads and writes. On busy tables, this can cause outages.
The worst offenders:
- ALTER TABLE ... ADD COLUMN ... DEFAULT ... (before PostgreSQL 11)
- CREATE INDEX (blocks writes)
- ALTER TABLE ... ALTER COLUMN TYPE ... (rewrites the table)
Use CREATE INDEX CONCURRENTLY to avoid blocking writes during index creation.
exports.up = function(pgm) {
pgm.noTransaction(); // CREATE INDEX CONCURRENTLY cannot run inside a transaction
pgm.createIndex("orders", "customer_id", {
method: "btree",
concurrently: true
});
};
Note that CREATE INDEX CONCURRENTLY cannot run inside a transaction, and node-pg-migrate wraps each migration in one by default. A migration that builds an index concurrently must opt out of that wrapper, which is what pgm.noTransaction() does above.
Complete Working Example
Here is a complete migration setup for a Node.js project using node-pg-migrate. This example implements a user management system with a safe column rename using the expand-contract pattern.
Project Setup
mkdir migration-demo && cd migration-demo
npm init -y
npm install node-pg-migrate pg
mkdir migrations
package.json Scripts
{
"scripts": {
"migrate:create": "node-pg-migrate create",
"migrate:up": "node-pg-migrate up",
"migrate:down": "node-pg-migrate down",
"migrate:status": "node scripts/migration-status.js",
"start": "node app.js"
}
}
Migration Status Script
// scripts/migration-status.js
var pg = require("pg");
var fs = require("fs");
var path = require("path");
var pool = new pg.Pool({
connectionString: process.env.DATABASE_URL
});
function getStatus() {
var migrationsDir = path.resolve(__dirname, "..", "migrations");
var files = fs.readdirSync(migrationsDir).filter(function(f) {
return f.endsWith(".js");
}).sort();
return pool.query("SELECT name FROM pgmigrations ORDER BY run_on")
.then(function(result) {
var applied = result.rows.map(function(r) { return r.name; });
console.log("\nMigration Status:");
console.log("=================\n");
files.forEach(function(file) {
var name = file.replace(".js", "");
var status = applied.indexOf(name) !== -1 ? "APPLIED" : "PENDING";
var marker = status === "APPLIED" ? "[x]" : "[ ]";
console.log(" " + marker + " " + name);
});
console.log("\n Applied: " + applied.length + " / " + files.length);
})
.catch(function(err) {
if (err.message.indexOf("does not exist") !== -1) {
console.log("No migrations table found. Run migrate:up first.");
} else {
throw err;
}
})
.then(function() { pool.end(); });
}
getStatus();
Migration 1: Create Users Table
// migrations/1706900000000_create-users.js
exports.up = function(pgm) {
pgm.createTable("users", {
id: "id",
email: { type: "varchar(255)", notNull: true, unique: true },
name: { type: "varchar(100)", notNull: true },
password_hash: { type: "varchar(255)", notNull: true },
created_at: {
type: "timestamp",
notNull: true,
default: pgm.func("current_timestamp")
}
});
pgm.createIndex("users", "email");
};
exports.down = function(pgm) {
pgm.dropTable("users");
};
Migration 2: Add Sessions Table
// migrations/1706900100000_create-sessions.js
exports.up = function(pgm) {
pgm.createTable("sessions", {
id: "id",
user_id: {
type: "integer",
notNull: true,
references: '"users"',
onDelete: "CASCADE"
},
token: { type: "varchar(255)", notNull: true, unique: true },
expires_at: { type: "timestamp", notNull: true },
created_at: {
type: "timestamp",
notNull: true,
default: pgm.func("current_timestamp")
}
});
pgm.createIndex("sessions", "token");
pgm.createIndex("sessions", "user_id");
pgm.createIndex("sessions", "expires_at");
};
exports.down = function(pgm) {
pgm.dropTable("sessions");
};
Migration 3: Expand -- Rename name to display_name
// migrations/1706900200000_expand-rename-name.js
exports.up = function(pgm) {
pgm.addColumn("users", {
display_name: { type: "varchar(100)" }
});
pgm.sql("UPDATE users SET display_name = name");
pgm.alterColumn("users", "display_name", { notNull: true });
pgm.sql(
"CREATE OR REPLACE FUNCTION sync_user_name() RETURNS TRIGGER AS $$ BEGIN " +
"IF TG_OP = 'INSERT' THEN " +
" IF NEW.display_name IS NULL THEN NEW.display_name = NEW.name; END IF; " +
" IF NEW.name IS NULL THEN NEW.name = NEW.display_name; END IF; " +
"ELSIF TG_OP = 'UPDATE' THEN " +
" IF NEW.name IS DISTINCT FROM OLD.name THEN NEW.display_name = NEW.name; END IF; " +
" IF NEW.display_name IS DISTINCT FROM OLD.display_name THEN NEW.name = NEW.display_name; END IF; " +
"END IF; " +
"RETURN NEW; END; $$ LANGUAGE plpgsql"
);
pgm.sql(
"CREATE TRIGGER sync_user_name_trigger BEFORE INSERT OR UPDATE ON users " +
"FOR EACH ROW EXECUTE FUNCTION sync_user_name()"
);
};
exports.down = function(pgm) {
pgm.sql("DROP TRIGGER IF EXISTS sync_user_name_trigger ON users");
pgm.sql("DROP FUNCTION IF EXISTS sync_user_name()");
pgm.dropColumn("users", "display_name");
};
Migration 4: Contract -- Drop Old name Column
// migrations/1706900300000_contract-drop-name.js
exports.up = function(pgm) {
pgm.sql("DROP TRIGGER IF EXISTS sync_user_name_trigger ON users");
pgm.sql("DROP FUNCTION IF EXISTS sync_user_name()");
pgm.dropColumn("users", "name");
};
exports.down = function(pgm) {
pgm.addColumn("users", {
name: { type: "varchar(100)" }
});
pgm.sql("UPDATE users SET name = display_name");
pgm.alterColumn("users", "name", { notNull: true });
};
Application Integration
// app.js
var express = require("express");
var pg = require("pg");
var runner = require("node-pg-migrate").default;
var path = require("path");
var app = express();
var pool = new pg.Pool({
connectionString: process.env.DATABASE_URL
});
function startApp() {
return runner({
databaseUrl: process.env.DATABASE_URL,
dir: path.resolve(__dirname, "migrations"),
direction: "up",
migrationsTable: "pgmigrations",
log: function(msg) { console.log("[migration] " + msg); }
}).then(function(migrations) {
if (migrations.length > 0) {
console.log("Applied " + migrations.length + " migration(s)");
} else {
console.log("Database is up to date");
}
return startServer();
}).catch(function(err) {
console.error("Migration failed. Server not started.", err);
process.exit(1);
});
}
function startServer() {
app.get("/users", function(req, res) {
pool.query("SELECT id, email, display_name, created_at FROM users ORDER BY created_at DESC")
.then(function(result) { res.json(result.rows); })
.catch(function(err) { res.status(500).json({ error: err.message }); });
});
app.get("/health", function(req, res) {
pool.query("SELECT 1")
.then(function() { res.json({ status: "healthy", database: "connected" }); })
.catch(function() { res.status(503).json({ status: "unhealthy", database: "disconnected" }); });
});
var port = process.env.PORT || 3000;
app.listen(port, function() {
console.log("Server running on port " + port);
});
}
startApp();
Running the Example
# Create the database
createdb migration_demo
# Set the connection string
export DATABASE_URL=postgres://localhost:5432/migration_demo
# Run all migrations
npm run migrate:up
# Output:
# > node-pg-migrate up
# > Migrating files:
# > 1706900000000_create-users
# > 1706900100000_create-sessions
# > 1706900200000_expand-rename-name
# > 1706900300000_contract-drop-name
# > Migrations complete
# Check status
npm run migrate:status
# Output:
# Migration Status:
# =================
#
# [x] 1706900000000_create-users
# [x] 1706900100000_create-sessions
# [x] 1706900200000_expand-rename-name
# [x] 1706900300000_contract-drop-name
#
# Applied: 4 / 4
# Roll back the last migration
npm run migrate:down
# Start the application
npm start
Common Issues and Troubleshooting
1. Migration Lock Not Released
Error:
Error: Unable to acquire advisory lock for migrations
Cause: A previous migration process crashed without releasing its advisory lock, or another migration is still running.
Fix: Check for active locks and terminate the holding connection if needed.
-- Find the lock holder
SELECT pid, state, query, query_start
FROM pg_stat_activity
WHERE pid IN (
SELECT pid FROM pg_locks
WHERE locktype = 'advisory'
);
-- Terminate the stale connection (last resort)
SELECT pg_terminate_backend(12345);
2. Migration Timeout on Large Tables
Error:
Error: canceling statement due to statement timeout
Cause: ALTER TABLE on a large table takes too long and hits the statement timeout.
Fix: Increase the timeout for the migration connection, or restructure the migration to avoid table-level locks.
exports.up = function(pgm) {
// SET LOCAL scopes the higher timeout to this migration's transaction,
// so it reverts automatically on commit or rollback
pgm.sql("SET LOCAL statement_timeout = '300s'");
pgm.addColumn("large_table", {
new_column: { type: "varchar(255)" }
});
};
3. Column Already Exists
Error:
error: column "display_name" of relation "users" already exists
Cause: Someone ran part of a migration manually, or a migration was partially applied before crashing.
Fix: Make migrations idempotent where possible.
exports.up = function(pgm) {
pgm.sql(
"ALTER TABLE users ADD COLUMN IF NOT EXISTS display_name varchar(100)"
);
};
Alternatively, check the migration table to see which migrations have actually been recorded as applied.
SELECT * FROM pgmigrations ORDER BY run_on;
4. Foreign Key Violation During Migration
Error:
error: insert or update on table "sessions" violates foreign key constraint "sessions_user_id_fkey"
Cause: Trying to insert data or add a foreign key constraint when referencing data does not exist.
Fix: Clean up orphaned data before adding the constraint, or use NOT VALID to skip validation of existing rows.
exports.up = function(pgm) {
// Remove orphans first
pgm.sql(
"DELETE FROM sessions WHERE user_id NOT IN (SELECT id FROM users)"
);
// Then add the constraint
pgm.addConstraint("sessions", "sessions_user_id_fkey", {
foreignKeys: {
columns: "user_id",
references: "users(id)",
onDelete: "CASCADE"
}
});
};
5. Enum Type Conflict
Error:
error: type "user_role" already exists
Cause: PostgreSQL enum types live in a separate namespace. If you create them in migrations, dropping the table does not drop the enum.
Fix: Use IF NOT EXISTS for enum creation and CASCADE for dropping.
exports.up = function(pgm) {
pgm.sql(
"DO $$ BEGIN " +
"CREATE TYPE user_role AS ENUM ('admin', 'editor', 'reader'); " +
"EXCEPTION WHEN duplicate_object THEN NULL; " +
"END $$"
);
pgm.addColumn("users", {
role: { type: "user_role", default: "reader" }
});
};
exports.down = function(pgm) {
pgm.dropColumn("users", "role");
pgm.sql("DROP TYPE IF EXISTS user_role CASCADE");
};
Best Practices
Run migrations before deploying code, not after. Your new schema must be backward-compatible with the running code. Deploy migrations first, then deploy the application. If the migration fails, the old code is still running against the old schema -- no harm done.
One migration file per concern. Do not combine table creation with data seeding. Do not alter two unrelated tables in the same file. Small, focused migrations are easier to debug, review, and roll back.
Never edit a migration that has been applied. If you made a mistake, create a new migration that fixes it. Editing applied migrations breaks the contract between your migration history and your database state.
Use transactions for schema migrations. node-pg-migrate wraps each migration in a transaction by default. This means a migration either fully succeeds or fully rolls back. If your migration tool does not do this automatically, wrap it yourself.
Set a statement timeout. A migration that accidentally locks a table for 30 minutes will bring down your application. Set statement_timeout to a reasonable value (30-60 seconds for most migrations) and increase it explicitly for migrations that need more time.
Test migrations against a production-like database. A migration that works on an empty test database may fail on production data. Clone your production database periodically and run migrations against it before deploying.
Log every migration run. Include the migration name, direction, duration, and outcome. When something goes wrong at 2 AM, these logs are your first line of investigation.
Keep migration files in the same repository as application code. The migration and the code that depends on it should be reviewed, tested, and deployed together. Do not store migrations in a separate repository.
Plan for rollback. Every deployment plan should include a rollback plan. Know which migrations need to be reversed, in what order, and what data might be lost. If a migration cannot be cleanly reversed (dropping a column, for instance), document what manual steps are needed.
References
- node-pg-migrate Documentation -- Official docs for node-pg-migrate
- Knex.js Migrations Guide -- Knex migration API reference
- PostgreSQL ALTER TABLE -- PostgreSQL DDL reference
- PostgreSQL Advisory Locks -- Advisory lock documentation
- Prisma Migrate -- Prisma migration documentation
- Zero-Downtime Deployments with PostgreSQL -- Braintree's guide to safe PostgreSQL operations
- Expand and Contract Pattern -- Martin Fowler on parallel change