DigitalOcean Managed Databases: Setup and Optimization
Complete guide to provisioning, configuring, and optimizing DigitalOcean managed databases for Node.js applications, covering PostgreSQL, MySQL, Redis, connection pooling, and performance tuning.
DigitalOcean Managed Databases take the operational burden of database administration off your plate -- automated backups, failover, patching, and scaling are handled for you so you can focus on building your application. Whether you are running a side project or a production SaaS platform, managed databases eliminate the 3 AM pager alerts that come with self-hosted instances. This guide covers everything from initial provisioning to production-grade optimization for Node.js applications.
Prerequisites
Before diving in, make sure you have the following:
- A DigitalOcean account with billing enabled
- doctl CLI installed and authenticated (doctl auth init)
- Node.js v18+ installed locally
- Basic familiarity with SQL and at least one database engine
- A DigitalOcean project or Droplet/App Platform deployment to connect from
Install doctl if you have not already:
# macOS
brew install doctl
# Windows (via scoop)
scoop install doctl
# Linux (snap)
sudo snap install doctl
# Authenticate
doctl auth init
# Paste your API token when prompted
Verify your setup:
doctl account get
Email Droplet Limit Email Verified UUID Status
you@example.com    25    true    a1b2c3d4-e5f6-7890-abcd-ef1234567890    active
Choosing the Right Database Engine
DigitalOcean offers four managed database engines. Picking the right one is the most consequential decision you will make, because migrating between engines later is painful.
PostgreSQL
PostgreSQL is the default choice for most applications and the one I reach for first. It handles relational data, JSON documents, full-text search, geospatial queries, and time-series data competently. If you are unsure which database to use, use PostgreSQL.
Best for: General-purpose applications, complex queries, data integrity requirements, applications that need JSONB for semi-structured data alongside relational tables.
DigitalOcean specifics: Versions 13 through 16 are available. Connection pooling via PgBouncer is built in. Logical replication is supported for read replicas.
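To illustrate the JSONB point, a single PostgreSQL table can mix strictly relational columns with a semi-structured payload. The events table and query below are a minimal sketch, assuming the pg.Pool module from the connection-pooling section later in this guide is exported as ./db.
var pool = require("./db"); // pg.Pool configured later in this guide

// Hypothetical table mixing relational columns with a JSONB payload:
//   CREATE TABLE events (
//     id bigserial PRIMARY KEY,
//     user_id integer NOT NULL,
//     payload jsonb NOT NULL DEFAULT '{}'::jsonb
//   );

// Containment query: find events whose payload includes {"plan": "pro"}
pool.query(
  "SELECT id, user_id, payload FROM events WHERE payload @> $1::jsonb",
  [JSON.stringify({ plan: "pro" })],
  function(err, result) {
    if (err) return console.error("JSONB query failed:", err.message);
    console.log("Matching events:", result.rows.length);
  }
);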
MySQL
MySQL is the workhorse of the LAMP stack and remains a solid choice if your team already has deep MySQL expertise or you are migrating an existing MySQL application. DigitalOcean offers MySQL 8.x.
Best for: WordPress backends, legacy application migrations, teams with strong MySQL operational knowledge.
Caveat: If you are starting a greenfield project, PostgreSQL gives you more flexibility. MySQL's JSON support and window functions have improved, but PostgreSQL is still ahead.
Redis
Redis is not a replacement for a relational database -- it is a complement. Use it for caching, session storage, rate limiting, pub/sub messaging, and job queues. DigitalOcean manages Redis clusters with automatic failover between primary and standby nodes.
Best for: Session stores, caching layers, real-time leaderboards, rate limiting, message brokers.
Important limitation: DigitalOcean managed Redis enforces maxmemory-policy allkeys-lru by default. If you need persistent Redis with RDB/AOF durability, verify the eviction policy matches your use case.
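To make the caching use case concrete, here is a minimal cache-aside sketch built on the ioredis client configured later in this guide; the key scheme, the 300-second TTL, and the loadUserFromDb callback are illustrative assumptions.
var redis = require("./redis"); // ioredis client configured later in this guide

// Cache-aside: try Redis first, fall back to the database, then populate the
// cache with a short TTL so evictions under allkeys-lru are harmless.
function getUser(userId, loadUserFromDb, callback) {
  var key = "user:" + userId; // illustrative key naming scheme
  redis.get(key, function(err, cached) {
    if (!err && cached) {
      return callback(null, JSON.parse(cached));
    }
    loadUserFromDb(userId, function(dbErr, user) {
      if (dbErr) return callback(dbErr);
      // 300-second TTL is an arbitrary example value
      redis.set(key, JSON.stringify(user), "EX", 300, function() {
        callback(null, user);
      });
    });
  });
}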
MongoDB
MongoDB is available as a managed offering if you genuinely need a document database. I would caution against choosing it simply because "we don't want to write schemas." In practice, you always have a schema -- the question is whether it is enforced by the database or scattered across your application code.
Best for: Content management systems with highly variable document structures, event logging, IoT data ingestion where schema flexibility is genuinely needed.
Provisioning via doctl and API
Using doctl
Provision a PostgreSQL cluster from the command line:
# List available database engines and versions
doctl databases options engines
# List available regions
doctl databases options regions
# List available sizes
doctl databases options slugs --engine pg
# Create a PostgreSQL 16 cluster
doctl databases create my-app-db \
--engine pg \
--version 16 \
--size db-s-1vcpu-1gb \
--region nyc1 \
--num-nodes 1
ID Name Engine Version Status Size Region Num Nodes
a1b2c3d4-e5f6-7890-abcd-ef1234567890 my-app-db pg 16 creating db-s-1vcpu-1gb nyc1 1
Provisioning typically takes 3-5 minutes. Check status:
doctl databases get a1b2c3d4-e5f6-7890-abcd-ef1234567890
Using the API Directly
For infrastructure-as-code workflows, the DigitalOcean API is straightforward:
curl -X POST "https://api.digitalocean.com/v2/databases" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "my-app-db",
"engine": "pg",
"version": "16",
"size": "db-s-1vcpu-1gb",
"region": "nyc1",
"num_nodes": 1,
"tags": ["production", "my-app"]
}'
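In a provisioning script you usually want to block until the cluster reports an online status before reading its connection details. This sketch polls the same GET /v2/databases/{id} endpoint that backs doctl databases get; the retry interval is arbitrary.
var https = require("https");

// Poll the cluster status every 15 seconds until it reports "online"
function waitForOnline(databaseId, token, callback) {
  var options = {
    hostname: "api.digitalocean.com",
    path: "/v2/databases/" + databaseId,
    method: "GET",
    headers: { "Authorization": "Bearer " + token }
  };
  var req = https.request(options, function(res) {
    var body = "";
    res.on("data", function(chunk) { body += chunk; });
    res.on("end", function() {
      var status;
      try {
        status = JSON.parse(body).database.status;
      } catch (e) {
        return callback(e);
      }
      if (status === "online") return callback(null, status);
      console.log("Cluster status:", status, "- retrying in 15s");
      setTimeout(function() {
        waitForOnline(databaseId, token, callback);
      }, 15000);
    });
  });
  req.on("error", callback);
  req.end();
}

waitForOnline(process.env.DB_CLUSTER_ID, process.env.DIGITALOCEAN_TOKEN, function(err) {
  if (err) return console.error("Provisioning check failed:", err.message);
  console.log("Cluster is online");
});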
Retrieving Connection Details
Once the cluster is online, get the connection info:
doctl databases connection a1b2c3d4-e5f6-7890-abcd-ef1234567890
URI postgresql://doadmin:password123@my-app-db-do-user-12345-0.b.db.ondigitalocean.com:25060/defaultdb?sslmode=require
Host my-app-db-do-user-12345-0.b.db.ondigitalocean.com
Port 25060
User doadmin
Password password123
Database defaultdb
SSL Mode require
Notice the port is 25060, not the standard 5432. This catches people off guard. DigitalOcean uses non-standard ports for managed databases.
Connection Pooling Configuration
Connection pooling is critical for Node.js applications. Without it, every database query opens a new TCP connection, performs a TLS handshake, authenticates, and then tears everything down. That overhead adds 50-100ms per query in a managed database scenario where the database is on a separate host.
Built-in PgBouncer (PostgreSQL)
DigitalOcean includes PgBouncer for PostgreSQL clusters. Configure it through the control panel or API:
# Create a connection pool
curl -X POST "https://api.digitalocean.com/v2/databases/DB_ID/pools" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "my-app-pool",
"mode": "transaction",
"size": 10,
"db": "defaultdb",
"user": "doadmin"
}'
Pool modes matter:
- Transaction mode (recommended for most apps): Connections are returned to the pool after each transaction. This gives you the best connection reuse but disables session-level features like prepared statements and LISTEN/NOTIFY (see the sketch after this list for one way around that).
- Session mode: Connections are held for the entire client session. Use this if you need prepared statements.
- Statement mode: Connections are returned after each statement. Only useful for simple queries with no multi-statement transactions.
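One practical consequence: if you rely on LISTEN/NOTIFY, keep ordinary queries on the pool endpoint and open a single dedicated direct connection for notifications. The sketch below assumes the pool port 25061 and direct port 25060 discussed later in this guide, a connection pool named my-app-pool, and a hypothetical order_events channel.
var pg = require("pg");
var fs = require("fs");

var ssl = {
  rejectUnauthorized: true,
  ca: fs.readFileSync(process.env.DB_CA_PATH, "utf8")
};

// Ordinary queries go through the PgBouncer pool endpoint (transaction mode)
var pool = new pg.Pool({
  host: process.env.DB_HOST,
  port: 25061, // pool port -- see the troubleshooting section below
  database: process.env.DB_POOL_NAME || "my-app-pool",
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  ssl: ssl
});

pool.query("SELECT now()", function(err, result) {
  if (!err) console.log("Pooled query ok:", result.rows[0].now);
});

// LISTEN/NOTIFY needs a long-lived session, so it bypasses PgBouncer entirely
var listener = new pg.Client({
  host: process.env.DB_HOST,
  port: 25060, // direct port
  database: process.env.DB_NAME || "defaultdb",
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  ssl: ssl
});

listener.connect(function(err) {
  if (err) return console.error("Listener connection failed:", err.message);
  listener.query("LISTEN order_events"); // hypothetical channel name
  listener.on("notification", function(msg) {
    console.log("Notification on", msg.channel + ":", msg.payload);
  });
});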
Application-Level Pooling with node-postgres
Even with PgBouncer, you should use application-level pooling via the pg module's Pool class:
var pg = require("pg");
var pool = new pg.Pool({
host: process.env.DB_HOST,
port: parseInt(process.env.DB_PORT, 10) || 25060,
database: process.env.DB_NAME || "defaultdb",
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
ssl: {
rejectUnauthorized: true,
ca: process.env.DB_CA_CERT
},
max: 20,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 5000
});
pool.on("error", function(err) {
console.error("Unexpected pool error:", err.message);
});
pool.on("connect", function(client) {
console.log("New client connected to pool");
});
module.exports = pool;
The max value here controls how many connections your Node.js process opens to PgBouncer (or directly to PostgreSQL). A good rule of thumb: set this to the number of concurrent requests your app handles divided by the number of app instances. For a single instance handling 100 concurrent requests, max: 20 is usually sufficient because most queries complete in under 50ms.
SSL/TLS Connection Setup from Node.js
All DigitalOcean managed databases require SSL. The cluster's CA certificate is available from the control panel or API.
Downloading the CA Certificate
# Show the connection URI via doctl (the CA certificate itself comes from the control panel or the API call below)
doctl databases connection a1b2c3d4-e5f6-7890-abcd-ef1234567890
# Download the CA cert via API
curl -X GET "https://api.digitalocean.com/v2/databases/DB_ID/ca" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
| jq -r '.ca.certificate' > ca-certificate.crt
PostgreSQL SSL Connection
var fs = require("fs");
var pg = require("pg");
var caCert = fs.readFileSync("/path/to/ca-certificate.crt", "utf8");
var pool = new pg.Pool({
connectionString: process.env.DATABASE_URL,
ssl: {
rejectUnauthorized: true,
ca: caCert
}
});
// Verify SSL is active
pool.query("SHOW ssl", function(err, result) {
if (err) {
console.error("SSL check failed:", err.message);
return;
}
console.log("SSL enabled:", result.rows[0].ssl); // "on"
});
MySQL SSL Connection
var mysql = require("mysql2");
var fs = require("fs");
var pool = mysql.createPool({
host: process.env.MYSQL_HOST,
port: parseInt(process.env.MYSQL_PORT, 10) || 25060,
user: process.env.MYSQL_USER,
password: process.env.MYSQL_PASSWORD,
database: process.env.MYSQL_DATABASE,
ssl: {
ca: fs.readFileSync("/path/to/ca-certificate.crt"),
rejectUnauthorized: true
},
waitForConnections: true,
connectionLimit: 10,
queueLimit: 0
});
var promisePool = pool.promise();
module.exports = promisePool;
Redis SSL Connection
var Redis = require("ioredis");
var redis = new Redis({
host: process.env.REDIS_HOST,
port: parseInt(process.env.REDIS_PORT, 10) || 25061,
password: process.env.REDIS_PASSWORD,
tls: {
ca: require("fs").readFileSync("/path/to/ca-certificate.crt"),
rejectUnauthorized: true
},
retryStrategy: function(times) {
var delay = Math.min(times * 50, 2000);
return delay;
}
});
redis.on("connect", function() {
console.log("Connected to Redis");
});
redis.on("error", function(err) {
console.error("Redis error:", err.message);
});
module.exports = redis;
Using the Connection String Directly
DigitalOcean provides a full connection URI. For PostgreSQL, you can pass it directly to pg.Pool and append the CA cert:
var pg = require("pg");
var fs = require("fs");
// The connection string already includes sslmode=require
var pool = new pg.Pool({
connectionString: process.env.DATABASE_URL,
ssl: {
rejectUnauthorized: true,
ca: fs.readFileSync(process.env.DB_CA_PATH, "utf8")
}
});
Read Replicas and High Availability
Adding Read Replicas
Read replicas are essential once your application is read-heavy. In most web applications, reads outnumber writes 10:1 or more.
# Add a read replica to an existing cluster
doctl databases replica create a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--size db-s-1vcpu-1gb \
--region nyc1 \
my-app-replica-1
# List replicas
doctl databases replica list a1b2c3d4-e5f6-7890-abcd-ef1234567890
ID Name Status Size
b2c3d4e5-f6a7-8901-bcde-f12345678901 my-app-replica-1 online db-s-1vcpu-1gb
Routing Reads and Writes in Node.js
Create a database module that routes queries to the appropriate endpoint:
var pg = require("pg");
var fs = require("fs");
var caCert = fs.readFileSync(process.env.DB_CA_PATH, "utf8");
var sslConfig = {
rejectUnauthorized: true,
ca: caCert
};
// Primary pool for writes
var primaryPool = new pg.Pool({
host: process.env.DB_PRIMARY_HOST,
port: parseInt(process.env.DB_PORT, 10) || 25060,
database: process.env.DB_NAME,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
ssl: sslConfig,
max: 10
});
// Replica pool for reads
var replicaPool = new pg.Pool({
host: process.env.DB_REPLICA_HOST,
port: parseInt(process.env.DB_PORT, 10) || 25060,
database: process.env.DB_NAME,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
ssl: sslConfig,
max: 20
});
// Route queries based on type
function query(text, params, options) {
var useReplica = options && options.readOnly;
var pool = useReplica ? replicaPool : primaryPool;
return pool.query(text, params);
}
// Usage in route handlers:
// Read from replica
// query("SELECT * FROM users WHERE id = $1", [userId], { readOnly: true })
//
// Write to primary
// query("INSERT INTO users (name, email) VALUES ($1, $2)", [name, email])
module.exports = {
query: query,
primaryPool: primaryPool,
replicaPool: replicaPool
};
High Availability Clusters
For production workloads, provision a high-availability cluster with standby nodes:
doctl databases create my-app-db-ha \
--engine pg \
--version 16 \
--size db-s-2vcpu-4gb \
--region nyc1 \
--num-nodes 3
With --num-nodes 3, you get one primary and two standby nodes. Automatic failover happens within 15-30 seconds if the primary becomes unavailable. Your application does not need to change connection strings because DigitalOcean updates the DNS entry for the cluster hostname to point at the new primary.
Backup and Restore Strategies
Automated Backups
DigitalOcean performs daily backups automatically with 7-day retention. You cannot change the backup schedule, but for a pre-migration or pre-deployment safety net you can fork the cluster through the API or take a manual export, both of which are covered below.
Point-in-Time Recovery
PostgreSQL clusters support point-in-time recovery (PITR) using WAL archiving. To restore to a specific point:
# Fork a database cluster to a point in time
curl -X POST "https://api.digitalocean.com/v2/databases" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "my-app-db-restored",
"engine": "pg",
"version": "16",
"size": "db-s-1vcpu-1gb",
"region": "nyc1",
"num_nodes": 1,
"backup_restore": {
"database_name": "my-app-db",
"created_at": "2026-02-07T14:30:00Z"
}
}'
This creates a new cluster restored to the specified timestamp. It does not overwrite your existing cluster, which is the right behavior -- you can verify the restored data before swapping.
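Before cutting traffic over, it is worth running a quick sanity query against the fork. The RESTORED_DATABASE_URL variable and the orders table in this sketch are illustrative.
var pg = require("pg");
var fs = require("fs");

// Connect to the restored (forked) cluster, not the original one
var restored = new pg.Pool({
  connectionString: process.env.RESTORED_DATABASE_URL,
  ssl: {
    rejectUnauthorized: true,
    ca: fs.readFileSync(process.env.DB_CA_PATH, "utf8")
  }
});

// Spot-check that recent data made it into the restore before swapping
restored.query(
  "SELECT count(*) AS orders_last_day FROM orders WHERE created_at > now() - interval '1 day'",
  function(err, result) {
    if (err) return console.error("Restore check failed:", err.message);
    console.log("Orders in the last 24h on the fork:", result.rows[0].orders_last_day);
    restored.end();
  }
);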
Manual Exports
For additional safety, schedule pg_dump exports to DigitalOcean Spaces:
#!/bin/bash
# backup.sh - Run via cron on a Droplet or CI/CD pipeline
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="backup_${TIMESTAMP}.sql.gz"
PGPASSWORD=$DB_PASSWORD PGSSLMODE=require PGSSLROOTCERT=/path/to/ca-certificate.crt pg_dump \
-h $DB_HOST \
-p $DB_PORT \
-U $DB_USER \
-d $DB_NAME \
--no-owner \
--no-privileges \
--format=plain \
| gzip > /tmp/$BACKUP_FILE
# Upload to Spaces
s3cmd put /tmp/$BACKUP_FILE s3://my-backups/database/$BACKUP_FILE
# Clean up
rm /tmp/$BACKUP_FILE
echo "Backup completed: $BACKUP_FILE"
Migration from Self-Managed to Managed
If you are running PostgreSQL on a Droplet and want to move to managed databases, here is the process:
Step 1: Create the Managed Database
doctl databases create my-app-db-managed \
--engine pg \
--version 16 \
--size db-s-2vcpu-4gb \
--region nyc1 \
--num-nodes 1
Step 2: Dump Your Existing Database
pg_dump -h localhost -U postgres -d myapp \
--no-owner --no-privileges --format=plain \
> myapp_dump.sql
Step 3: Restore to Managed Database
PGPASSWORD=your_managed_pw psql \
"host=your-db-host.db.ondigitalocean.com port=25060 dbname=defaultdb user=doadmin sslmode=require" \
< myapp_dump.sql
Step 4: Restrict Access
Add your application's Droplet or App Platform component as a trusted source:
doctl databases firewalls append a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--rule app:your-app-uuid
# Or restrict to specific Droplet
doctl databases firewalls append a1b2c3d4-e5f6-7890-abcd-ef1234567890 \
--rule droplet:your-droplet-id
Step 5: Update Application Configuration
Update your environment variables to point at the managed database and deploy. Monitor for connection errors for the first 24 hours.
Performance Tuning Managed Instances
You do not have access to postgresql.conf on a managed instance, but DigitalOcean exposes key configuration parameters through the API.
Adjusting PostgreSQL Parameters
# View current configuration
doctl databases configuration get a1b2c3d4-e5f6-7890-abcd-ef1234567890
# Update configuration
curl -X PATCH "https://api.digitalocean.com/v2/databases/DB_ID/config" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"config": {
"work_mem": 16,
"shared_buffers_percentage": 30,
"effective_cache_size_percentage": 70,
"max_parallel_workers_per_gather": 2,
"log_min_duration_statement": 500,
"pg_stat_statements.track": "all",
"idle_in_transaction_session_timeout": 60000
}
}'
Key Parameters to Tune
| Parameter | Default | Recommendation | Why |
|---|---|---|---|
| `work_mem` | 4MB | 8-32MB | Larger sort/hash operations in memory |
| `shared_buffers_percentage` | 25% | 25-35% | Buffer cache for frequently accessed data |
| `effective_cache_size_percentage` | 50% | 60-75% | Helps query planner choose index scans |
| `log_min_duration_statement` | -1 (off) | 500ms | Log slow queries for optimization |
| `idle_in_transaction_session_timeout` | 0 (off) | 60000ms | Kill idle transactions holding locks |
Query Optimization
Use pg_stat_statements to find your slowest queries:
-- Enable pg_stat_statements (already available on DO managed)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Top 10 slowest queries by total time
SELECT
substring(query, 1, 80) AS query_preview,
calls,
round(total_exec_time::numeric, 2) AS total_ms,
round(mean_exec_time::numeric, 2) AS mean_ms,
round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS pct
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
query_preview | calls | total_ms | mean_ms | pct
-------------------------------+-------+-----------+---------+------
SELECT * FROM orders WHERE use | 45230 | 128450.23 | 2.84 | 32.10
SELECT p.*, c.name FROM produ | 12840 | 67230.11 | 5.24 | 16.80
INSERT INTO audit_log (event, | 8920 | 44120.55 | 4.95 | 11.02
UPDATE users SET last_login = | 34100 | 38900.12 | 1.14 | 9.72
Index Analysis
-- Find missing indexes (tables with sequential scans on large tables)
SELECT
schemaname,
relname AS table_name,
seq_scan,
seq_tup_read,
idx_scan,
n_live_tup AS row_count,
round(seq_tup_read::numeric / GREATEST(seq_scan, 1), 0) AS avg_rows_per_scan
FROM pg_stat_user_tables
WHERE seq_scan > 100
AND n_live_tup > 10000
AND (idx_scan IS NULL OR idx_scan < seq_scan)
ORDER BY seq_tup_read DESC
LIMIT 10;
Monitoring Database Metrics
Built-in Metrics
DigitalOcean provides metrics through the control panel and API. To pull metrics programmatically, first retrieve the credentials for the cluster metrics endpoints:
# Get credentials for the database metrics endpoints
curl -s "https://api.digitalocean.com/v2/databases/metrics/credentials" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN"
Application-Level Monitoring
Build health checks and monitoring directly into your Node.js application:
var pool = require("./db");
var dbMetrics = {
totalQueries: 0,
failedQueries: 0,
avgResponseTime: 0,
activeConnections: 0
};
function trackQuery(startTime, err) {
var duration = Date.now() - startTime;
dbMetrics.totalQueries++;
if (err) {
dbMetrics.failedQueries++;
}
// Exponential moving average
dbMetrics.avgResponseTime =
dbMetrics.avgResponseTime * 0.9 + duration * 0.1;
dbMetrics.activeConnections = pool.totalCount - pool.idleCount;
}
function monitoredQuery(text, params, callback) {
var startTime = Date.now();
pool.query(text, params, function(err, result) {
trackQuery(startTime, err);
callback(err, result);
});
}
// Health check endpoint
function healthCheck(req, res) {
var startTime = Date.now();
pool.query("SELECT 1 AS healthy", function(err, result) {
var responseTime = Date.now() - startTime;
if (err) {
return res.status(503).json({
status: "unhealthy",
error: err.message,
responseTime: responseTime
});
}
res.json({
status: "healthy",
responseTime: responseTime,
pool: {
total: pool.totalCount,
idle: pool.idleCount,
waiting: pool.waitingCount
},
metrics: dbMetrics
});
});
}
module.exports = {
query: monitoredQuery,
healthCheck: healthCheck,
metrics: dbMetrics
};
Register the health check in your Express app:
var dbMonitor = require("./db-monitor");
app.get("/health/db", dbMonitor.healthCheck);
A healthy response looks like:
{
"status": "healthy",
"responseTime": 12,
"pool": {
"total": 5,
"idle": 3,
"waiting": 0
},
"metrics": {
"totalQueries": 45230,
"failedQueries": 3,
"avgResponseTime": 4.2,
"activeConnections": 2
}
}
Cost Optimization and Right-Sizing
Sizing Guidelines
| Workload | Size | Monthly Cost | Use Case |
|---|---|---|---|
| Development | db-s-1vcpu-1gb | ~$15 | Local dev, staging |
| Small Production | db-s-1vcpu-2gb | ~$30 | < 1000 daily users |
| Medium Production | db-s-2vcpu-4gb | ~$60 | 1000-10000 daily users |
| Large Production | db-s-4vcpu-8gb | ~$120 | 10000+ daily users |
| High Performance | db-s-8vcpu-16gb | ~$240 | Heavy analytics, large datasets |
Cost Saving Strategies
Use connection pooling aggressively. A $15/month node with PgBouncer in transaction mode can handle far more concurrent users than you think. I have run applications serving 5000 daily users on the smallest managed database node because the queries were efficient and the connection pool was properly configured.
Schedule read replicas. If you only need read replicas during business hours for reporting, you can create and destroy them via the API on a schedule:
var https = require("https");
function createReplica(clusterId, token, callback) {
var data = JSON.stringify({
name: "business-hours-replica",
size: "db-s-1vcpu-2gb"
});
var options = {
hostname: "api.digitalocean.com",
path: "/v2/databases/" + clusterId + "/replicas",
method: "POST",
headers: {
"Authorization": "Bearer " + token,
"Content-Type": "application/json",
"Content-Length": Buffer.byteLength(data)
}
};
var req = https.request(options, function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() {
callback(null, JSON.parse(body));
});
});
req.on("error", callback);
req.write(data);
req.end();
}
function destroyReplica(clusterId, replicaName, token, callback) {
var options = {
hostname: "api.digitalocean.com",
path: "/v2/databases/" + clusterId + "/replicas/" + replicaName,
method: "DELETE",
headers: {
"Authorization": "Bearer " + token
}
};
var req = https.request(options, function(res) {
callback(null, res.statusCode);
});
req.on("error", callback);
req.end();
}
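A minimal way to wire these helpers into a schedule is to run one small script from cron twice a day; the CREATE_REPLICA flag and environment variable names here are illustrative.
// replica-schedule.js - run from cron with CREATE_REPLICA=true in the morning
// and CREATE_REPLICA=false in the evening
var clusterId = process.env.DB_CLUSTER_ID;
var token = process.env.DIGITALOCEAN_TOKEN;

if (process.env.CREATE_REPLICA === "true") {
  createReplica(clusterId, token, function(err, result) {
    if (err) return console.error("Replica creation failed:", err.message);
    console.log("Replica requested:", result.replica && result.replica.name);
  });
} else {
  destroyReplica(clusterId, "business-hours-replica", token, function(err, status) {
    if (err) return console.error("Replica deletion failed:", err.message);
    console.log("Replica delete returned HTTP", status);
  });
}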
Monitor and right-size quarterly. Check your CPU and memory utilization monthly. If your database consistently uses less than 30% of its resources, downsize. If it spikes above 80%, upsize before performance degrades.
# Check current database size and usage
doctl databases get DB_ID --format Name,Size,Status,NumNodes
Complete Working Example
Here is a production-ready Node.js module for connecting to a DigitalOcean managed PostgreSQL database with connection pooling, SSL, health checks, and failover handling:
// db.js - Production database module for DigitalOcean Managed PostgreSQL
var pg = require("pg");
var fs = require("fs");
var EventEmitter = require("events");
var dbEvents = new EventEmitter();
// Load CA certificate
var caCert;
if (process.env.DB_CA_PATH) {
caCert = fs.readFileSync(process.env.DB_CA_PATH, "utf8");
} else if (process.env.DB_CA_CERT) {
caCert = process.env.DB_CA_CERT;
}
var poolConfig = {
host: process.env.DB_HOST,
port: parseInt(process.env.DB_PORT, 10) || 25060,
database: process.env.DB_NAME || "defaultdb",
user: process.env.DB_USER || "doadmin",
password: process.env.DB_PASSWORD,
// Fall back to unverified TLS only when no CA cert is configured (local dev);
// production should always supply DB_CA_PATH or DB_CA_CERT
ssl: caCert ? { rejectUnauthorized: true, ca: caCert } : { rejectUnauthorized: false },
max: parseInt(process.env.DB_POOL_MAX, 10) || 20,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 10000,
statement_timeout: 30000
};
var pool = new pg.Pool(poolConfig);
// Connection lifecycle logging
pool.on("connect", function(client) {
console.log("[DB] New client connected. Pool size:", pool.totalCount);
});
pool.on("acquire", function() {
dbEvents.emit("acquire");
});
pool.on("remove", function() {
console.log("[DB] Client removed from pool. Pool size:", pool.totalCount);
});
pool.on("error", function(err, client) {
console.error("[DB] Unexpected pool error:", err.message);
// Handle specific DigitalOcean failover scenarios
if (err.code === "ECONNREFUSED" || err.code === "ENOTFOUND") {
console.error("[DB] Connection refused - possible failover in progress");
dbEvents.emit("failover");
}
});
// Metrics tracking
var metrics = {
queries: 0,
errors: 0,
totalDuration: 0,
slowQueries: 0,
lastError: null,
startedAt: new Date().toISOString()
};
// Query wrapper with monitoring and retry
function query(text, params, callback) {
var startTime = Date.now();
var attempt = 0;
var maxRetries = 2;
function executeQuery() {
attempt++;
pool.query(text, params, function(err, result) {
var duration = Date.now() - startTime;
metrics.queries++;
metrics.totalDuration += duration;
if (duration > 1000) {
metrics.slowQueries++;
console.warn("[DB] Slow query (" + duration + "ms):", text.substring(0, 100));
}
if (err) {
metrics.errors++;
metrics.lastError = {
message: err.message,
code: err.code,
timestamp: new Date().toISOString()
};
// Retry on connection errors (failover scenario)
var retryableCodes = [
"ECONNREFUSED",
"ECONNRESET",
"EPIPE",
"57P01", // admin_shutdown
"57P02", // crash_shutdown
"57P03" // cannot_connect_now
];
if (retryableCodes.indexOf(err.code) !== -1 && attempt <= maxRetries) {
var delay = attempt * 1000;
console.warn("[DB] Retrying query in " + delay + "ms (attempt " + attempt + "/" + maxRetries + ")");
setTimeout(executeQuery, delay);
return;
}
return callback(err);
}
callback(null, result);
});
}
executeQuery();
}
// Transaction helper
function transaction(queries, callback) {
pool.connect(function(err, client, release) {
if (err) {
return callback(err);
}
client.query("BEGIN", function(err) {
if (err) {
release();
return callback(err);
}
var results = [];
var index = 0;
function runNext() {
if (index >= queries.length) {
client.query("COMMIT", function(err) {
release();
if (err) return callback(err);
callback(null, results);
});
return;
}
var q = queries[index];
client.query(q.text, q.params, function(err, result) {
if (err) {
client.query("ROLLBACK", function() {
release();
callback(err);
});
return;
}
results.push(result);
index++;
runNext();
});
}
runNext();
});
});
}
// Health check
function healthCheck(callback) {
var startTime = Date.now();
pool.query("SELECT 1 AS ok, now() AS server_time", function(err, result) {
var responseTime = Date.now() - startTime;
if (err) {
return callback(null, {
status: "unhealthy",
error: err.message,
responseTime: responseTime,
pool: {
total: pool.totalCount,
idle: pool.idleCount,
waiting: pool.waitingCount
}
});
}
callback(null, {
status: "healthy",
responseTime: responseTime,
serverTime: result.rows[0].server_time,
pool: {
total: pool.totalCount,
idle: pool.idleCount,
waiting: pool.waitingCount
},
metrics: {
queries: metrics.queries,
errors: metrics.errors,
avgDuration: metrics.queries > 0
? Math.round(metrics.totalDuration / metrics.queries)
: 0,
slowQueries: metrics.slowQueries,
lastError: metrics.lastError,
uptime: metrics.startedAt
}
});
});
}
// Graceful shutdown
function shutdown(callback) {
console.log("[DB] Shutting down connection pool...");
pool.end(function(err) {
if (err) {
console.error("[DB] Error during pool shutdown:", err.message);
} else {
console.log("[DB] Pool shut down cleanly.");
}
if (callback) callback(err);
});
}
module.exports = {
query: query,
transaction: transaction,
healthCheck: healthCheck,
shutdown: shutdown,
pool: pool,
events: dbEvents,
metrics: metrics
};
Usage in an Express application:
// app.js
var express = require("express");
var db = require("./db");
var app = express();
app.use(express.json());
// Health check endpoint
app.get("/health/db", function(req, res) {
db.healthCheck(function(err, health) {
var statusCode = health.status === "healthy" ? 200 : 503;
res.status(statusCode).json(health);
});
});
// Example API route using the database
app.get("/api/users/:id", function(req, res) {
db.query(
"SELECT id, name, email, created_at FROM users WHERE id = $1",
[req.params.id],
function(err, result) {
if (err) {
console.error("Query error:", err.message);
return res.status(500).json({ error: "Database error" });
}
if (result.rows.length === 0) {
return res.status(404).json({ error: "User not found" });
}
res.json(result.rows[0]);
}
);
});
// Example transaction: the helper runs its queries sequentially inside one
// BEGIN/COMMIT, but it cannot feed the result of one query into the next, so
// the order and its items are inserted in a single statement using a CTE.
app.post("/api/orders", function(req, res) {
  var order = req.body;
  db.transaction([
    {
      text:
        "WITH new_order AS (INSERT INTO orders (user_id, total) VALUES ($1, $2) RETURNING id) " +
        "INSERT INTO order_items (order_id, product_id, quantity, price) " +
        "SELECT new_order.id, i.product_id, i.quantity, i.price " +
        "FROM new_order, unnest($3::int[], $4::int[], $5::numeric[]) AS i(product_id, quantity, price) " +
        "RETURNING order_id",
      params: [order.userId, order.total, order.productIds, order.quantities, order.prices]
    },
    {
      text: "UPDATE inventory SET stock = stock - 1 WHERE product_id = ANY($1)",
      params: [order.productIds]
    }
  ], function(err, results) {
    if (err) {
      console.error("Transaction error:", err.message);
      return res.status(500).json({ error: "Order creation failed" });
    }
    res.status(201).json({ orderId: results[0].rows[0].order_id });
  });
});
// Graceful shutdown
process.on("SIGTERM", function() {
console.log("SIGTERM received. Shutting down gracefully...");
db.shutdown(function() {
process.exit(0);
});
});
var port = process.env.PORT || 8080;
app.listen(port, function() {
console.log("Server running on port " + port);
});
Common Issues and Troubleshooting
1. SSL Certificate Verification Failures
Error: self signed certificate in certificate chain
at TLSSocket.onConnectSecure (node:tls:1538:34)
This happens when you set rejectUnauthorized: true but do not provide the CA certificate. DigitalOcean managed databases use certificates signed by their own CA. You must download and provide the CA certificate:
// Wrong - will fail with self-signed cert error
var pool = new pg.Pool({
connectionString: process.env.DATABASE_URL,
ssl: { rejectUnauthorized: true }
});
// Right - provide the CA cert
var pool = new pg.Pool({
connectionString: process.env.DATABASE_URL,
ssl: {
rejectUnauthorized: true,
ca: fs.readFileSync("/path/to/ca-certificate.crt", "utf8")
}
});
Do not "fix" this by setting rejectUnauthorized: false in production. That disables certificate validation entirely and makes you vulnerable to man-in-the-middle attacks.
2. Connection Refused After Failover
Error: connect ECONNREFUSED 10.133.45.67:25060
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1595:16)
During automatic failover (primary node replacement), there is a 15-30 second window where connections will fail. The DNS record updates to point to the new primary, but your application may have cached the old IP address.
Fix: Enable DNS rotation and implement retry logic:
var dns = require("dns");
dns.setDefaultResultOrder("verbatim");
// Use the hostname, not an IP address, in your connection config
// The pg module will re-resolve DNS on new connections
var pool = new pg.Pool({
host: "your-db-host.db.ondigitalocean.com", // hostname, not IP
// ... rest of config
});
3. Too Many Connections
FATAL: too many connections for role "doadmin"
at Parser.parseErrorMessage (/app/node_modules/pg-protocol/dist/parser.js:287:98)
The smallest managed database nodes allow 25 connections. With PgBouncer, that limit applies to PgBouncer-to-PostgreSQL connections, not client-to-PgBouncer connections. If you are connecting directly (bypassing the pool port), each Node.js process with max: 20 eats most of your connection budget.
Fix: Either use the PgBouncer connection pool endpoint (port 25061 for the pool, versus 25060 for direct) or reduce your max pool size:
// Connect through PgBouncer (different port!)
var pool = new pg.Pool({
host: process.env.DB_HOST,
port: 25061, // PgBouncer pool port
// ... rest of config
max: 5 // Can be lower because PgBouncer multiplexes
});
4. Prepared Statement Errors with PgBouncer Transaction Mode
ERROR: prepared statement "lrupsc_1_0" does not exist
This is a classic PgBouncer issue. In transaction mode, a prepared statement created on one backend connection is not available on the next, because each transaction may be handed a different backend. The pg module creates a named prepared statement whenever a query is given a name property, and many ORMs and query builders add names automatically.
Fix: Avoid named prepared statements when using PgBouncer in transaction mode:
// Option 1: pass queries without a "name" property -- the pg module only
// creates a server-side prepared statement when you name the query
pool.query({ text: "SELECT * FROM users WHERE id = $1", values: [1] }, callback);
// Option 2: if your ORM or query builder names its statements, disable that
// feature in its configuration
// Option 3: switch the PgBouncer pool to session mode, which pins a backend
// connection to each client session and keeps prepared statements valid
5. Idle Connection Timeouts
Connection terminated unexpectedly
at Connection.<anonymous> (/app/node_modules/pg/lib/client.js:132:73)
DigitalOcean's load balancer drops idle TCP connections after a timeout (typically 60 seconds). If your pool holds idle connections longer than this, they become stale.
Fix: Set idleTimeoutMillis lower than the load balancer timeout and enable TCP keepalives:
var pool = new pg.Pool({
// ... connection config
idleTimeoutMillis: 30000, // Close idle clients after 30s
keepAlive: true, // Enable TCP keepalive
keepAliveInitialDelayMillis: 10000 // Start keepalive after 10s idle
});
Best Practices
- **Always use SSL with certificate verification.** Set `rejectUnauthorized: true` and provide the CA certificate. Never disable SSL verification in production, no matter how tempting it is when you are debugging at midnight.
- **Use connection pooling at both layers.** Configure PgBouncer on the managed database side in transaction mode and use `pg.Pool` on the application side. This gives you two levels of connection multiplexing and dramatically reduces connection overhead.
- **Implement retry logic for transient failures.** Managed databases undergo maintenance, failovers, and version upgrades. Your application must handle temporary connection loss gracefully with exponential backoff, not crash and require a manual restart.
- **Restrict access with database firewalls.** Never leave your managed database open to all IP addresses. Use `doctl databases firewalls` to whitelist only your application's Droplets, App Platform apps, or Kubernetes clusters. This is not optional -- it is the first thing you should configure after provisioning.
- **Monitor pool saturation, not just query performance.** If `pool.waitingCount` is consistently greater than zero, your application is starved for connections. Either increase `max`, optimize slow queries to release connections faster, or scale to a larger database node.
- **Set `statement_timeout` and `idle_in_transaction_session_timeout`.** Runaway queries and forgotten transactions can lock tables and exhaust connections. A 30-second statement timeout and 60-second idle transaction timeout prevent most cascading failures (see the sketch after this list).
- **Use read replicas for reporting and analytics queries.** Never run expensive analytical queries against your primary database. Create a read replica and route `SELECT`-only reporting queries there. Your transactional workload will thank you.
- **Test failover before you need it.** DigitalOcean does not provide a "simulate failover" button, but you can test your retry logic by temporarily misconfiguring the connection string, observing the error handling, and then restoring it. Better to discover that your reconnection logic is broken during a Tuesday afternoon test than during a Saturday night incident.
- **Gracefully shut down connection pools on process exit.** Call `pool.end()` on `SIGTERM` to close connections cleanly instead of leaving orphaned connections that count against your connection limit until the server times them out.
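A minimal sketch for the timeout recommendation above, applying both settings whenever the pool opens a new connection. The 30s/60s values mirror the list; if you connect through PgBouncer in transaction mode, prefer setting these at the cluster level via the configuration API shown earlier.
var pg = require("pg");

var pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: true, ca: process.env.DB_CA_CERT }
});

// Apply session-level timeouts on every new pooled connection so runaway
// queries and forgotten transactions cannot hold locks indefinitely
pool.on("connect", function(client) {
  client.query("SET statement_timeout = '30s'");
  client.query("SET idle_in_transaction_session_timeout = '60s'");
});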
