DigitalOcean Managed Databases: Setup and Optimization
A practical guide to DigitalOcean Managed Databases covering PostgreSQL and MongoDB setup, connection pooling, performance tuning, backups, and Node.js integration.
Managing a database server is operational overhead most application developers should avoid. Patching security vulnerabilities, configuring replication, managing backups, monitoring disk space, tuning query performance — each task requires database-specific expertise that distracts from building your application.
DigitalOcean Managed Databases handle the operational work: automated backups, failover, security patches, and monitoring. You get a connection string and a dashboard. This guide covers setting up managed PostgreSQL and MongoDB for Node.js applications, optimizing connections, and tuning for production workloads.
Prerequisites
- A DigitalOcean account
- Node.js installed (v16+)
- An application that uses PostgreSQL or MongoDB
Creating a Managed Database
Via Dashboard
- Navigate to Databases in the DigitalOcean dashboard
- Choose your database engine (PostgreSQL, MySQL, MongoDB, Redis, Kafka)
- Select a plan:
- Basic ($15/month) — 1 vCPU, 1GB RAM, 10GB storage
- General Purpose ($60/month) — 2 vCPU, 4GB RAM, 20GB storage
- Memory-Optimized — for large datasets
- Choose a datacenter region (same region as your application)
- Name your database cluster
- Click Create Database Cluster
Provisioning takes a few minutes. DigitalOcean sets up the primary node, configures SSL, and generates connection credentials.
Via CLI
# Create PostgreSQL cluster
doctl databases create my-db \
--engine pg \
--version 15 \
--size db-s-1vcpu-1gb \
--region nyc3 \
--num-nodes 1
# Create MongoDB cluster
doctl databases create my-mongo \
--engine mongodb \
--version 6 \
--size db-s-1vcpu-1gb \
--region nyc3 \
--num-nodes 1
Connecting from Node.js
PostgreSQL Connection
npm install pg
// db.js
var { Pool } = require("pg");
var pool = new Pool({
connectionString: process.env.DATABASE_URL,
ssl: {
rejectUnauthorized: true,
// DigitalOcean clusters use a private CA; supply the CA certificate
// from the dashboard, or verification fails with rejectUnauthorized: true
ca: process.env.DATABASE_CA_CERT || undefined
},
max: 20,
idleTimeoutMillis: 30000,
connectionTimeoutMillis: 5000
});
pool.on("error", function(err) {
console.error("Unexpected database error:", err);
});
module.exports = {
query: function(text, params) {
return pool.query(text, params);
},
getClient: function() {
return pool.connect();
},
end: function() {
return pool.end();
}
};
The connection string from DigitalOcean looks like:
postgresql://doadmin:[email protected]:25060/defaultdb?sslmode=require
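The parts of that string can be pulled out with Node's built-in WHATWG URL parser, which is handy for logging the host and port without exposing the password. A small sketch (the string below is a placeholder, not real credentials):

```javascript
// Sketch: split a DigitalOcean-style connection string into its parts
// using Node's built-in URL parser. Placeholder values throughout.
var exampleUrl =
  "postgresql://doadmin:secret@db-host.example.com:25060/defaultdb?sslmode=require";

function parseDbUrl(str) {
  var u = new URL(str);
  return {
    user: u.username,
    host: u.hostname,
    port: Number(u.port),          // 25060 = direct, 25061 = PgBouncer pool
    database: u.pathname.slice(1), // drop the leading "/"
    sslmode: u.searchParams.get("sslmode")
  };
}

var parts = parseDbUrl(exampleUrl);
console.log(parts.host, parts.port, parts.database, parts.sslmode);
```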
MongoDB Connection
npm install mongodb
// mongo.js
var MongoClient = require("mongodb").MongoClient;
var client = null;
var db = null;
function connect() {
if (db) return Promise.resolve(db);
return MongoClient.connect(process.env.MONGO_URL, {
tls: true,
tlsCAFile: process.env.MONGO_CA_CERT || undefined,
maxPoolSize: 20,
minPoolSize: 5,
serverSelectionTimeoutMS: 5000,
socketTimeoutMS: 45000
}).then(function(c) {
client = c;
db = client.db(process.env.MONGO_DB || "myapp");
console.log("Connected to MongoDB");
return db;
});
}
function getDb() {
if (!db) throw new Error("Database not connected. Call connect() first.");
return db;
}
function close() {
if (client) return client.close();
return Promise.resolve();
}
module.exports = { connect: connect, getDb: getDb, close: close };
Connection Pooling
Why Connection Pooling Matters
Each database connection uses server resources (memory, file descriptors, process slots). Without pooling, every request opens a new connection — slow and wasteful. With pooling, connections are reused across requests.
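The effect can be illustrated without a real server. The toy below counts how many connections get opened for 100 sequential requests with and without reuse (this is not the pg API, just an illustration of the reuse idea):

```javascript
// Toy illustration: count "connections" opened for 100 sequential
// requests, with and without reuse. Not the pg API.
function makeFactory() {
  var opened = 0;
  return {
    open: function () { opened++; return { id: opened }; },
    openedCount: function () { return opened; }
  };
}

function runWithoutPool(factory, requests) {
  for (var i = 0; i < requests; i++) {
    factory.open(); // a fresh connection per request
  }
}

function runWithPool(factory, requests) {
  var idle = [];
  for (var i = 0; i < requests; i++) {
    var conn = idle.pop() || factory.open(); // reuse an idle connection
    // ... run the query here ...
    idle.push(conn); // return it to the pool when done
  }
}

var direct = makeFactory();
runWithoutPool(direct, 100);
var pooled = makeFactory();
runWithPool(pooled, 100);
console.log(direct.openedCount(), pooled.openedCount()); // 100 1
```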
PostgreSQL Connection Pool Settings
var pool = new Pool({
connectionString: process.env.DATABASE_URL,
ssl: { rejectUnauthorized: true },
// Pool configuration
max: 20, // Maximum connections in the pool
min: 2, // Minimum idle connections
idleTimeoutMillis: 30000, // Close idle connections after 30 seconds
connectionTimeoutMillis: 5000, // Fail if connection takes > 5 seconds
allowExitOnIdle: true // Allow process to exit when pool is idle
});
Sizing the pool:
- A basic managed database supports ~25 connections
- Reserve 5 connections for admin tools and monitoring
- Set max to 20 for a single application instance
- For multiple instances (cluster mode), divide: 20 total / 4 instances = max: 5 per instance
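That arithmetic fits in a small helper. The figures are the ones used above (a ~25-connection limit with 5 reserved for admin tools); treat them as inputs, not constants:

```javascript
// Compute a per-instance pool size from the cluster's connection limit.
// Figures mirror the guide: ~25 connections, 5 reserved for admin tools.
function poolSizePerInstance(connectionLimit, reserved, instances) {
  var usable = connectionLimit - reserved;
  return Math.max(1, Math.floor(usable / instances));
}

console.log(poolSizePerInstance(25, 5, 1)); // 20
console.log(poolSizePerInstance(25, 5, 4)); // 5
```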
DigitalOcean Connection Pools (PgBouncer)
DigitalOcean offers built-in connection pooling via PgBouncer for PostgreSQL:
- Navigate to your database cluster > Connection Pools
- Create a pool:
- Name: myapp-pool
- Database: defaultdb
- User: doadmin
- Mode: Transaction (recommended for web applications)
- Size: 22
Use the pool's connection string instead of the direct connection:
postgresql://doadmin:[email protected]:25061/myapp-pool?sslmode=require
Note the different port (25061 vs 25060). The pool port connects through PgBouncer.
Pool modes:
- Transaction — connections are returned to the pool after each transaction. Best for web applications.
- Session — connections are held for the entire client session. Use only if your application uses session-level features (prepared statements, LISTEN/NOTIFY).
- Statement — connections are returned after each statement. Most restrictive but highest concurrency.
Performance Optimization
PostgreSQL Performance
// Analyze slow queries
db.query("EXPLAIN ANALYZE SELECT * FROM users WHERE email = $1", ["[email protected]"])
.then(function(result) {
result.rows.forEach(function(row) {
console.log(row["QUERY PLAN"]);
});
});
Creating Indexes
-- Identify missing indexes
SELECT schemaname, tablename, attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'users'
ORDER BY n_distinct DESC;
-- Add indexes for frequent queries
CREATE INDEX idx_users_email ON users (email);
CREATE INDEX idx_orders_user_id ON orders (user_id);
CREATE INDEX idx_orders_created_at ON orders (created_at DESC);
CREATE INDEX idx_articles_status_created ON articles (status, created_at DESC);
Query Optimization
// BAD — N+1 query problem
function getUsersWithOrders() {
return db.query("SELECT * FROM users").then(function(users) {
return Promise.all(users.rows.map(function(user) {
return db.query("SELECT * FROM orders WHERE user_id = $1", [user.id])
.then(function(orders) {
user.orders = orders.rows;
return user;
});
}));
});
}
// GOOD — single query with JOIN
function getUsersWithOrders() {
return db.query(
"SELECT u.*, json_agg(o.*) as orders " +
"FROM users u " +
"LEFT JOIN orders o ON u.id = o.user_id " +
"GROUP BY u.id " +
"ORDER BY u.created_at DESC"
).then(function(result) {
return result.rows;
});
}
Monitoring Query Performance
// Log slow queries
var originalQuery = pool.query.bind(pool);
pool.query = function(text, params) {
var start = Date.now();
return originalQuery(text, params).then(function(result) {
var duration = Date.now() - start;
if (duration > 1000) {
console.warn("Slow query (" + duration + "ms):", text);
}
return result;
});
};
Security Configuration
Trusted Sources
Restrict database access to specific IP addresses or VPC networks:
# Add a trusted source (your Droplet or App Platform app)
doctl databases firewalls append YOUR_DB_ID \
--rule droplet:YOUR_DROPLET_ID
# Or allow a specific IP
doctl databases firewalls append YOUR_DB_ID \
--rule ip_addr:203.0.113.50
SSL Connections
Always use SSL for managed database connections:
var pool = new Pool({
connectionString: process.env.DATABASE_URL,
ssl: {
rejectUnauthorized: true,
// Download CA cert from database dashboard
ca: require("fs").readFileSync("ca-certificate.crt").toString()
}
});
Database Users
Create application-specific users with limited permissions:
-- Create a read-write user for the application
CREATE USER myapp_user WITH PASSWORD 'secure-password';
GRANT CONNECT ON DATABASE defaultdb TO myapp_user;
GRANT USAGE ON SCHEMA public TO myapp_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO myapp_user;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO myapp_user;
-- ON ALL TABLES covers only existing tables; also grant on tables
-- the current role creates later
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO myapp_user;
-- Create a read-only user for reporting
CREATE USER myapp_reader WITH PASSWORD 'reader-password';
GRANT CONNECT ON DATABASE defaultdb TO myapp_reader;
GRANT USAGE ON SCHEMA public TO myapp_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO myapp_reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
GRANT SELECT ON TABLES TO myapp_reader;
Backups and Recovery
Automatic Backups
DigitalOcean takes daily backups automatically. Configure the backup window in the dashboard.
Manual Backups
# Export via pg_dump (use the connection string from the dashboard)
pg_dump "postgresql://doadmin:password@host:25060/defaultdb?sslmode=require" > backup.sql
# Compressed backup
pg_dump "postgresql://doadmin:password@host:25060/defaultdb?sslmode=require" | gzip > backup.sql.gz
Point-in-Time Recovery
DigitalOcean supports restoring to any point within the backup retention window:
- Navigate to your database cluster > Backups
- Click Restore and select a point in time
- This creates a new database cluster with data from that moment
Automated Backup Script
// backup.js
var { exec } = require("child_process");
var path = require("path");
var BACKUP_DIR = "/var/backups/database";
var DB_URL = process.env.DATABASE_URL;
var date = new Date().toISOString().replace(/[:.]/g, "-");
var filename = "backup-" + date + ".sql.gz";
var filepath = path.join(BACKUP_DIR, filename);
var command = 'pg_dump "' + DB_URL + '" | gzip > ' + filepath;
exec(command, function(err) {
if (err) {
console.error("Backup failed:", err.message);
process.exit(1);
}
console.log("Backup saved:", filepath);
// Clean up backups older than 30 days
exec('find ' + BACKUP_DIR + ' -name "backup-*.sql.gz" -mtime +30 -delete', function(err) {
if (err) console.warn("Cleanup warning:", err.message);
});
});
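The cleanup step's age check can also be done in JavaScript if you prefer not to shell out to find. A sketch that selects stale files from their timestamped names (the filename pattern matches the script above; the cutoff logic is illustrative):

```javascript
// Select backup files older than maxAgeDays from their timestamped
// names (backup-YYYY-MM-DDT...), mirroring the `find -mtime +30` step.
function staleBackups(filenames, nowMs, maxAgeDays) {
  var cutoff = nowMs - maxAgeDays * 24 * 60 * 60 * 1000;
  return filenames.filter(function (name) {
    var m = /^backup-(\d{4}-\d{2}-\d{2})T/.exec(name);
    return m !== null && Date.parse(m[1]) < cutoff;
  });
}

var now = Date.parse("2024-02-20T00:00:00Z"); // example "current" time
console.log(staleBackups(
  ["backup-2024-01-01T00-00-00-000Z.sql.gz",
   "backup-2024-02-15T00-00-00-000Z.sql.gz",
   "notes.txt"],
  now, 30
)); // [ 'backup-2024-01-01T00-00-00-000Z.sql.gz' ]
```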
Read Replicas
For read-heavy workloads, add read replicas to distribute query load.
Creating a Read Replica
doctl databases replica create YOUR_DB_ID \
--name read-replica-1 \
--size db-s-1vcpu-1gb \
--region nyc3
Using Read Replicas in Node.js
// db.js — separate pools for reads and writes
var { Pool } = require("pg");
var writePool = new Pool({
connectionString: process.env.DATABASE_URL,
ssl: { rejectUnauthorized: true },
max: 10
});
var readPool = new Pool({
connectionString: process.env.DATABASE_REPLICA_URL,
ssl: { rejectUnauthorized: true },
max: 20
});
module.exports = {
// Use for INSERT, UPDATE, DELETE
write: function(text, params) {
return writePool.query(text, params);
},
// Use for SELECT queries
read: function(text, params) {
return readPool.query(text, params);
}
};
// Usage
var db = require("./db");
// Reads go to the replica
app.get("/api/articles", function(req, res) {
db.read("SELECT * FROM articles WHERE status = 'published' ORDER BY created_at DESC LIMIT 20")
.then(function(result) {
res.json(result.rows);
});
});
// Writes go to the primary
app.post("/api/articles", function(req, res) {
db.write("INSERT INTO articles (title, content) VALUES ($1, $2) RETURNING *",
[req.body.title, req.body.content])
.then(function(result) {
res.status(201).json(result.rows[0]);
});
});
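If you would rather not pick a pool at every call site, a crude router can dispatch on the statement verb. This is a sketch, not a safe general rule: a writable CTE beginning with WITH would be misrouted, so explicit read()/write() calls remain the safer default:

```javascript
// Sketch: route by statement verb. SELECTs go to the replica pool,
// everything else to the primary. A writable CTE ("WITH ... INSERT")
// would be misrouted, so prefer explicit read()/write() when in doubt.
function routeFor(sql) {
  return /^\s*select\b/i.test(sql) ? "read" : "write";
}

console.log(routeFor("SELECT * FROM articles"));                    // "read"
console.log(routeFor("INSERT INTO articles (title) VALUES ($1)")); // "write"
```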
Monitoring and Alerts
Database Metrics Dashboard
DigitalOcean provides metrics in the database dashboard:
- CPU utilization — sustained high CPU indicates query optimization needed
- Memory usage — consistently high usage suggests the working set is larger than RAM
- Disk I/O — high I/O means queries are reading from disk, not cache
- Connection count — approaching the limit means pooling is misconfigured
- Replication lag — for read replicas, shows how far behind the primary
Application-Level Monitoring
// Monitor pool statistics
setInterval(function() {
console.log("DB Pool:", {
total: pool.totalCount,
idle: pool.idleCount,
waiting: pool.waitingCount
});
}, 60000);
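Those counters can feed a simple alert check. In this sketch, the stats shape matches the pg pool counters above, and the 80% threshold mirrors the sizing rule in Best Practices:

```javascript
// Flag pool pressure from the pg pool counters: any waiting client
// means the pool is exhausted; utilization >= threshold means close to it.
function poolPressure(stats, max, threshold) {
  if (stats.waiting > 0) return "exhausted";
  if (stats.total / max >= threshold) return "near capacity";
  return null;
}

console.log(poolPressure({ total: 20, idle: 0, waiting: 3 }, 20, 0.8)); // "exhausted"
console.log(poolPressure({ total: 17, idle: 2, waiting: 0 }, 20, 0.8)); // "near capacity"
console.log(poolPressure({ total: 5, idle: 4, waiting: 0 }, 20, 0.8));  // null
```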
Common Issues and Troubleshooting
"too many connections" error
The application opened more connections than the database allows:
Fix: Use connection pooling (both in application and PgBouncer). Reduce max pool size across all instances. Check for connection leaks — ensure every acquired client is released. A basic managed database allows ~25 connections.
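Leaks usually come from a checked-out client that is never released on an error path. A wrapper that releases in both the success and failure branches closes that hole; this sketch assumes a pg-style pool whose connect() resolves to a client with a release() method:

```javascript
// Always release a checked-out client, even when the work fails.
// Assumes a pg-style pool: connect() resolves to a client with release().
function withClient(pool, work) {
  return pool.connect().then(function (client) {
    return Promise.resolve()
      .then(function () { return work(client); })
      .then(
        function (result) { client.release(); return result; },
        function (err) { client.release(); throw err; }
      );
  });
}
```

Usage: withClient(pool, function (client) { return client.query("SELECT 1"); }) — the client is returned to the pool whether the query resolves or rejects.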
Queries are slow after initial setup
The database has not gathered statistics yet:
Fix: Run ANALYZE on your tables after initial data load. Create indexes for columns used in WHERE, JOIN, and ORDER BY clauses. Check EXPLAIN ANALYZE output for sequential scans on large tables.
Connection timeout errors
The application cannot reach the database:
Fix: Verify the database firewall allows connections from your application's IP or Droplet. Check that SSL configuration matches (sslmode=require). Ensure you are using the correct port (25060 for direct, 25061 for pooled connections).
High replication lag on read replicas
Write-heavy workloads can cause replicas to fall behind:
Fix: Monitor replication lag in the dashboard. For time-sensitive reads, use the primary. Consider a larger replica instance for write-heavy workloads. Accept eventual consistency for non-critical reads.
Best Practices
- Choose the same datacenter region as your application. Network latency between regions adds 50-200ms to every query. Same-region latency is typically under 1ms.
- Use connection pooling at every level. Application-level pooling (pg Pool) plus PgBouncer together handle high concurrency efficiently.
- Create application-specific database users. Never use the doadmin superuser for application connections. Create users with only the permissions they need.
- Enable SSL and restrict trusted sources. Never expose your database to the public internet. Use firewall rules to allow only your application servers.
- Monitor connection counts. A sudden spike in connections usually indicates a connection leak. Track pool statistics and set alerts for approaching connection limits.
- Use read replicas for read-heavy workloads. Replicas offload SELECT queries from the primary, improving write performance and read throughput.
- Test migrations on a fork first. DigitalOcean allows forking your database cluster. Fork it, run the migration, and verify before applying to production.
- Set appropriate pool sizes. Total connections across all application instances should not exceed 80% of the database's connection limit.