DigitalOcean Droplet Optimization for Node.js

A hands-on guide to optimizing DigitalOcean Droplets for Node.js applications, covering PM2 cluster mode, Nginx reverse proxy, SSL, monitoring, security hardening, and deployment automation.

Overview

Running Node.js on a DigitalOcean Droplet gives you full control over your server environment -- something managed platforms trade away for convenience. This guide walks through every step of turning a bare Ubuntu Droplet into a hardened, optimized production environment for Node.js applications, covering PM2 process management, Nginx reverse proxying, SSL, monitoring, security, and deployment automation. I have been running production Node.js workloads on Droplets for years and these are the patterns that have held up.

Prerequisites

  • A DigitalOcean account with billing configured
  • An SSH key pair generated on your local machine (ssh-keygen -t ed25519)
  • Basic Linux command line familiarity
  • A domain name pointed at DigitalOcean nameservers (for SSL)
  • Node.js and npm installed locally for development
  • A working Express.js application ready to deploy

Choosing the Right Droplet Size

The first decision is what Droplet plan to run. DigitalOcean offers several categories and the wrong choice will either waste money or leave your application starved for resources.

General Purpose Droplets are the default choice for production. They offer a balanced ratio of CPU to RAM and are appropriate for most Node.js web applications and APIs. A single Express.js app serving moderate traffic (tens of thousands of requests per day) runs comfortably on a 2 vCPU / 4 GB RAM General Purpose Droplet.

CPU-Optimized Droplets have a higher ratio of CPU to RAM. Use these when your Node.js application does heavy computation -- image processing, PDF generation, complex data transformations, or CPU-bound API endpoints. They cost more per GB of RAM but give you dedicated vCPUs with guaranteed performance.

Memory-Optimized Droplets are useful when your application holds large datasets in memory -- caching layers, in-process search indexes, or applications that aggregate large result sets before responding.

My recommendation for most Node.js apps: start with the $24/month General Purpose Droplet (2 vCPU / 4 GB). A Node.js process is single-threaded by default, so you get one core for your app and one for the OS, Nginx, and background tasks. With PM2 cluster mode (covered below), those two vCPUs let you run two worker processes, roughly doubling your throughput for CPU-bound work.

Plan                   vCPUs    RAM     Price/mo   Best For
──────────────────────────────────────────────────────────────
Basic (shared CPU)     1        1 GB    $6         Dev/staging
Basic (shared CPU)     2        2 GB    $18        Low-traffic sites
General Purpose        2        4 GB    $24        Most production apps
CPU-Optimized          4        8 GB    $84        CPU-heavy workloads
Memory-Optimized       2        16 GB   $84        Large in-memory datasets

Initial Server Setup

Every Droplet starts as a bare Ubuntu installation with a single root account. The first thing to do is lock that down.

Create a Non-Root User

# SSH into the Droplet as root
ssh root@your_droplet_ip

# Create a deploy user
adduser deploy

# Add to sudo group
usermod -aG sudo deploy

# Copy your SSH key to the new user
rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy

Configure SSH

Edit the SSH daemon configuration to disable root login and password authentication:

sudo nano /etc/ssh/sshd_config

Set these values:

PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes

Restart SSH:

sudo systemctl restart sshd

From this point forward, always SSH in as deploy:

ssh deploy@your_droplet_ip

Configure the Firewall

Ubuntu ships with ufw (Uncomplicated Firewall). Enable it and only open the ports you need:

# Allow SSH
sudo ufw allow OpenSSH

# Allow HTTP and HTTPS (for Nginx)
sudo ufw allow 'Nginx Full'

# Enable the firewall
sudo ufw enable

# Verify
sudo ufw status

Expected output:

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)

Notice that port 3000 (or whatever your Node.js app listens on) is not open. That is intentional. Traffic hits Nginx on ports 80/443 and Nginx proxies to Node.js internally. Your app is never directly exposed to the internet.
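
As a second layer of defense, you can also bind the app to the loopback interface so it is unreachable from outside even if a firewall rule slips. A minimal Express sketch of the pattern:

// app.js -- listen on loopback only; Nginx proxies to it from ports 80/443
var express = require("express");
var app = express();

app.get("/", function (req, res) {
  res.send("Hello from behind Nginx");
});

var PORT = process.env.PORT || 3000;
// Binding to 127.0.0.1 means only local processes (such as Nginx) can connect
app.listen(PORT, "127.0.0.1", function () {
  console.log("Listening on 127.0.0.1:" + PORT);
});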


Installing Node.js with nvm

Do not install Node.js from the Ubuntu package manager. Those packages are outdated and difficult to manage across versions. Use nvm (Node Version Manager) instead.

# Install nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

# Reload shell
source ~/.bashrc

# Install Node.js LTS
nvm install --lts

# Verify
node -v
npm -v

Output:

v20.17.0
10.8.2

The advantage of nvm is safe, side-by-side Node.js upgrades. Install the new version, test your app, then switch the default:

nvm install 22
nvm alias default 22
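
One caveat: global npm packages such as pm2 are installed per Node.js version under nvm. The --reinstall-packages-from flag carries them over in one step:

# Install 22 and reinstall global packages (pm2 included) from the 20.x install
nvm install 22 --reinstall-packages-from=20
nvm alias default 22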

Running Node.js with PM2

Never run Node.js with node app.js in production. One unhandled exception and your application is dead until someone manually restarts it. PM2 is a production process manager that handles automatic restarts, clustering, log management, and startup scripts.

Install PM2

npm install -g pm2

Start Your Application in Cluster Mode

Cluster mode forks your application across all available CPU cores. Each fork handles requests independently, which multiplies your throughput on multi-core Droplets.

# Start with cluster mode, using all available CPUs
pm2 start app.js --name "my-app" -i max

# Or specify the exact number of instances
pm2 start app.js --name "my-app" -i 2

Output:

┌─────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id  │ name     │ namespace   │ version │ mode    │ pid      │ uptime │ ↺    │ status    │ cpu      │ mem      │ user     │ watching │
├─────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0   │ my-app   │ default     │ 1.0.0   │ cluster │ 12847    │ 0s     │ 0    │ online    │ 0%       │ 52.1mb   │ deploy   │ disabled │
│ 1   │ my-app   │ default     │ 1.0.0   │ cluster │ 12854    │ 0s     │ 0    │ online    │ 0%       │ 48.3mb   │ deploy   │ disabled │
└─────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘

Ecosystem File

For anything beyond trivial setups, use a PM2 ecosystem file. This is a configuration file that lives in your project root:

// ecosystem.config.js
module.exports = {
  apps: [{
    name: "my-app",
    script: "app.js",
    instances: "max",
    exec_mode: "cluster",
    env: {
      NODE_ENV: "production",
      PORT: 3000
    },
    max_memory_restart: "512M",
    log_date_format: "YYYY-MM-DD HH:mm:ss Z",
    error_file: "/home/deploy/logs/app-error.log",
    out_file: "/home/deploy/logs/app-out.log",
    merge_logs: true
  }]
};

Start with the ecosystem file:

pm2 start ecosystem.config.js

PM2 Startup Script

PM2 needs to restart your application if the Droplet reboots. Generate a startup script:

pm2 startup systemd

# PM2 will output a command -- run it
sudo env PATH=$PATH:/home/deploy/.nvm/versions/node/v20.17.0/bin /home/deploy/.nvm/versions/node/v20.17.0/lib/node_modules/pm2/bin/pm2 startup systemd -u deploy --hp /home/deploy

# Save the current process list
pm2 save

Now your application survives reboots automatically.

PM2 Log Management

PM2 writes logs to ~/.pm2/logs/ by default. Without rotation, these files grow until your disk fills up. Install the log rotation module:

pm2 install pm2-logrotate

# Configure rotation
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 7
pm2 set pm2-logrotate:compress true

Nginx as a Reverse Proxy

Nginx sits in front of Node.js and handles SSL termination, static file serving, gzip compression, and connection buffering. Node.js should never face the internet directly.

Install Nginx

sudo apt update
sudo apt install nginx -y

Configuration

Create a site configuration:

sudo nano /etc/nginx/sites-available/my-app

# Map the Upgrade header: WebSocket requests get "Connection: upgrade",
# everything else gets an empty Connection header so the keepalive
# connections to the upstream stay open.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      "";
}

upstream nodejs {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    # Redirect all HTTP to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name yourdomain.com www.yourdomain.com;

    # SSL certificates (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    # SSL hardening
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml text/javascript image/svg+xml;
    gzip_min_length 1000;
    gzip_comp_level 6;
    gzip_vary on;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Static files -- served directly by Nginx, bypassing Node.js
    location /static/ {
        alias /home/deploy/my-app/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Proxy to Node.js
    location / {
        proxy_pass http://nodejs;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90s;
    }
}

Enable the site and test the configuration. Note that nginx -t will fail while the certificate files referenced in the 443 block do not exist yet; if you have not run Certbot (next section), comment out that server block until the certificates are issued:

# Enable the site
sudo ln -s /etc/nginx/sites-available/my-app /etc/nginx/sites-enabled/

# Remove the default site
sudo rm /etc/nginx/sites-enabled/default

# Test configuration
sudo nginx -t

# Reload
sudo systemctl reload nginx
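
Once Nginx reloads cleanly, a quick smoke test from the Droplet itself confirms the proxy path. Expect a 301 redirect to HTTPS from the first server block:

# Request through Nginx locally, setting Host so the right server block matches
curl -I -H "Host: yourdomain.com" http://127.0.0.1/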

SSL with Let's Encrypt

Install Certbot and obtain a free SSL certificate:

sudo apt install certbot python3-certbot-nginx -y

# Obtain certificate
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

# Test automatic renewal
sudo certbot renew --dry-run

Certbot automatically configures a systemd timer for renewal. Certificates renew before they expire with zero intervention.
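
With the apt packages above, the timer unit is typically named certbot.timer. You can confirm it is scheduled:

# Confirm the renewal timer is active and see when it fires next
systemctl list-timers certbot.timer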

Caching Static Assets

The Nginx configuration above already sets expires 30d on static files. For cache-busting, append a version query string to your asset URLs in your templates:

// In your Express app
var crypto = require("crypto");
var fs = require("fs");

function assetHash(filePath) {
  var content = fs.readFileSync(filePath);
  return crypto.createHash("md5").update(content).digest("hex").substring(0, 8);
}

app.locals.cssVersion = assetHash("./static/css/styles.css");

Then in your template (Pug syntax shown here):

link(rel="stylesheet" href="/static/css/styles.css?v=" + cssVersion)
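
Which renders to markup along these lines (the hash value is illustrative):

<link rel="stylesheet" href="/static/css/styles.css?v=5d41402a">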

Monitoring Resource Usage

You cannot optimize what you do not measure. Set up monitoring on every production Droplet.

htop

htop is an interactive process viewer. Install it and use it to see CPU, memory, and process usage in real time:

sudo apt install htop -y
htop

Key things to look for: Are your Node.js processes pinned at 100% CPU? Is memory usage creeping up over time (possible leak)? Is swap being used heavily (time to upgrade the Droplet)?

PM2 Monitoring

PM2 has built-in monitoring:

# Real-time monitoring dashboard
pm2 monit

# Quick status check
pm2 status

# Detailed process info
pm2 describe my-app

The pm2 monit command shows a live dashboard with CPU, memory, loop delay, and logs for each process. I keep a terminal tab open with this running during deployments.

DigitalOcean Metrics Agent

DigitalOcean provides a metrics agent that feeds CPU, memory, disk, and bandwidth data into the control panel graphs and enables alert policies:

# Install the metrics agent
curl -sSL https://repos.insights.digitalocean.com/install.sh | sudo bash

Once installed, you get detailed graphs in the DigitalOcean console and can set up alert policies -- for example, get an email when CPU exceeds 80% for 5 minutes or disk usage exceeds 90%.


Swap Configuration

On memory-constrained Droplets (1-2 GB RAM), configuring swap prevents your application from being killed by the OOM (Out of Memory) killer during traffic spikes.

# Create a 2GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make it permanent
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Tune swappiness (lower = use swap less aggressively)
sudo sysctl vm.swappiness=10
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf

# Verify
free -h

Output:

               total        used        free      shared  buff/cache   available
Mem:           1.9Gi       512Mi       256Mi        12Mi       1.2Gi       1.3Gi
Swap:          2.0Gi          0B       2.0Gi

Swap is not a replacement for sufficient RAM. If your application consistently uses swap, you need to upgrade the Droplet. Swap is a safety net for occasional spikes.
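
To tell occasional spikes apart from chronic swapping, watch the si (swap-in) and so (swap-out) columns; sustained non-zero values mean the Droplet is actively thrashing and it is time to resize:

# Print memory and swap activity every 5 seconds
vmstat 5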


Node.js Process Memory Limits

By default, V8 caps the Node.js heap at a size derived from available system memory -- historically around 1.5 GB on 64-bit systems, typically more on current releases. On Droplets with 4 GB or more of RAM, set the limit explicitly so each process gets a predictable memory budget and more headroom before garbage collection pressure kicks in.

Set the limit in your PM2 ecosystem file:

// ecosystem.config.js
module.exports = {
  apps: [{
    name: "my-app",
    script: "app.js",
    instances: 2,
    exec_mode: "cluster",
    node_args: "--max-old-space-size=1024",
    max_memory_restart: "1200M",
    env: {
      NODE_ENV: "production",
      PORT: 3000
    }
  }]
};

The --max-old-space-size=1024 flag sets the V8 heap limit to 1024 MB per process. The max_memory_restart setting tells PM2 to restart a process if it exceeds 1200 MB, which acts as a safety valve against memory leaks.

Sizing rule: Take your total Droplet RAM, subtract 512 MB for the OS and Nginx, then divide by the number of PM2 instances. That is your per-process budget.

4 GB Droplet, 2 instances:
(4096 - 512) / 2 = 1792 MB per process
Set --max-old-space-size=1536, max_memory_restart=1700M
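
You can confirm the limit a given flag actually produces by asking V8 directly; the reported value is the heap ceiling in megabytes (roughly the flag value plus a small overhead):

# Print the effective V8 heap limit for a given flag value
node --max-old-space-size=1536 -e \
  "console.log(Math.round(require('v8').getHeapStatistics().heap_size_limit / 1024 / 1024) + ' MB')"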

Log Rotation with logrotate

Beyond PM2's log rotation, your system logs and Nginx logs also need rotation. Ubuntu ships with logrotate -- you just need to configure it for your application logs if they live outside PM2's default path.

sudo nano /etc/logrotate.d/my-app

/home/deploy/logs/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 deploy deploy
    sharedscripts
    postrotate
        pm2 reloadLogs
    endscript
}

This rotates logs daily, keeps 14 days of history, compresses old logs, and tells PM2 to reopen its log file handles after rotation.

Test it:

sudo logrotate -d /etc/logrotate.d/my-app

Automated Deployments

You do not need a full CI/CD pipeline to deploy reliably. Two simple strategies work well for Droplet deployments.

Strategy 1: Git Pull

The simplest approach. Your Droplet has a clone of your repository. On deploy, you pull the latest code and restart.

#!/bin/bash
# deploy.sh -- run on the Droplet or via SSH
set -e

APP_DIR="/home/deploy/my-app"

cd $APP_DIR

echo "Pulling latest code..."
git pull origin master

echo "Installing dependencies..."
npm ci --production

echo "Reloading application..."
pm2 reload my-app

echo "Deployment complete."

Run it remotely from your local machine:

ssh deploy@your_droplet_ip 'bash -s' < deploy.sh

Strategy 2: rsync

rsync gives you more control. It only transfers changed files and does not require git on the server:

#!/bin/bash
# deploy-rsync.sh -- run from your local machine
set -e

SERVER="deploy@your_droplet_ip"
APP_DIR="/home/deploy/my-app"

echo "Syncing files..."
rsync -avz --delete \
    --exclude 'node_modules' \
    --exclude '.git' \
    --exclude '.env' \
    ./ $SERVER:$APP_DIR/

echo "Installing dependencies..."
ssh $SERVER "cd $APP_DIR && npm ci --production"

echo "Reloading application..."
ssh $SERVER "pm2 reload my-app"

echo "Deployment complete."

The --delete flag removes files on the server that no longer exist locally. The --exclude flags prevent syncing things that should not be transferred.

I prefer rsync because it does not require git to be installed on the production server, it does not leave the .git directory on the server (reducing the attack surface), and it works with any build pipeline since it just transfers files.


Backup Strategies

Droplet Snapshots

Snapshots are point-in-time images of your entire Droplet. They capture everything: OS, configuration, application code, data. Take a snapshot before major changes:

# Using doctl CLI
doctl compute droplet-action snapshot <droplet-id> --snapshot-name "pre-deploy-2026-02-08"

Snapshots cost $0.06/GB/month. A 25 GB Droplet snapshot costs about $1.50/month to store.

Automated Backups

DigitalOcean offers automated weekly backups for 20% of the Droplet cost. For a $24/month Droplet, that is $4.80/month for weekly backups with 4-week retention. Enable them in the Droplet settings.

Application-Level Backups

Do not rely solely on Droplet backups for your data. If your app uses a local database (SQLite, files on disk), set up a cron job to back up the data separately:

# Crontab entry: back up data daily at 2 AM
0 2 * * * tar -czf /home/deploy/backups/data-$(date +\%Y\%m\%d).tar.gz /home/deploy/my-app/data/
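
Pair it with a cleanup job so old archives do not fill the disk (this assumes the backup path used above):

# Crontab entry: delete backup archives older than 14 days, daily at 3 AM
0 3 * * * find /home/deploy/backups -name "data-*.tar.gz" -mtime +14 -delete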

Snapshots for Dev Environments

A cost-saving trick: create a snapshot of your development/staging Droplet, destroy the Droplet, and recreate it from the snapshot when you need it. You only pay for snapshot storage ($0.06/GB/month) instead of the Droplet cost when it is not in use.


Security Hardening

A production server facing the internet needs more than a firewall and SSH keys.

fail2ban

fail2ban monitors log files and bans IP addresses that show malicious behavior (brute-force SSH attempts, repeated 404s, etc.):

sudo apt install fail2ban -y

# Create local configuration
sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
sudo nano /etc/fail2ban/jail.local

Key settings in jail.local:

[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3

[nginx-http-auth]
enabled = true

[nginx-botsearch]
enabled = true

Enable and start the service:

sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# Check banned IPs
sudo fail2ban-client status sshd

Output:

Status for the jail: sshd
|- Filter
|  |- Currently failed: 2
|  |- Total failed:     47
|  `- File list:        /var/log/auth.log
`- Actions
   |- Currently banned: 3
   |- Total banned:     12
   `- Banned IP list:   203.0.113.5 198.51.100.12 192.0.2.88

Automatic Security Updates

Configure unattended upgrades to automatically install security patches:

sudo apt install unattended-upgrades -y
sudo dpkg-reconfigure --priority=low unattended-upgrades

Edit /etc/apt/apt.conf.d/50unattended-upgrades to configure email notifications:

Unattended-Upgrade::Mail "[email protected]";
Unattended-Upgrade::MailReport "on-change";
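
To confirm the setup without waiting for the timer, run a simulation:

# Simulate an unattended upgrade run and print what it would do
sudo unattended-upgrade --dry-run --debug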

Remove Unnecessary Services

A minimal server surface means fewer attack vectors:

# List running services
sudo systemctl list-units --type=service --state=running

# Disable services you do not need
sudo systemctl disable --now snapd
sudo systemctl disable --now cups
sudo systemctl disable --now avahi-daemon

Rate Limiting in Nginx

Protect your Node.js app from being overwhelmed:

# Add to the http block in /etc/nginx/nginx.conf
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

# Add to the location block in your site config
location /api/ {
    limit_req zone=api burst=20 nodelay;
    proxy_pass http://nodejs;
    # ... other proxy settings
}

This allows 10 requests per second per IP with a burst of 20. Requests beyond that get a 503 response.
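
A quick way to watch the limiter kick in from your local machine (the /api/health endpoint here is hypothetical -- substitute any real route):

# Fire 50 rapid requests; once the 20-request burst is exhausted, expect 503s
for i in $(seq 1 50); do
  curl -s -o /dev/null -w "%{http_code}\n" https://yourdomain.com/api/health
done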


Cost Optimization

Right-Sizing

Monitor your resource usage for at least a week before deciding on a plan. If your Droplet is consistently using less than 50% of its CPU and RAM, downsize. DigitalOcean makes it easy to resize Droplets, but disk size can only ever grow -- to move to a smaller disk, you have to create a new Droplet from a snapshot.

Reserved IPs

If you need a static IP that persists across Droplet rebuilds, use a Reserved IP. They are free when attached to a Droplet and cost $5/month when unattached. Attach one to your production Droplet so you can rebuild or replace the Droplet without changing DNS records.

# Assign a reserved IP
doctl compute reserved-ip-action assign <reserved-ip> <droplet-id>

Destroy and Recreate Development Environments

As mentioned in the backup section, snapshot your dev/staging Droplets and destroy them when not in use. A developer who only works 8 hours a day can save 67% by destroying the Droplet outside working hours. Even better, script the creation and destruction:

#!/bin/bash
# spin-up-dev.sh
doctl compute droplet create dev-server \
    --image <snapshot-id> \
    --size s-2vcpu-4gb \
    --region nyc3 \
    --ssh-keys <key-fingerprint> \
    --wait

echo "Dev server is ready."
#!/bin/bash
# tear-down-dev.sh
DROPLET_ID=$(doctl compute droplet list --format ID,Name --no-header | grep dev-server | awk '{print $1}')

# Snapshot before destroying
doctl compute droplet-action snapshot $DROPLET_ID --snapshot-name "dev-$(date +%Y%m%d-%H%M)" --wait

# Destroy
doctl compute droplet delete $DROPLET_ID --force

echo "Dev server destroyed. Snapshot saved."

Complete Working Example: Full Deployment Script

This script takes a fresh Ubuntu 22.04 Droplet and sets up a production-ready Node.js environment. Run it as the root user on a brand new Droplet.

#!/bin/bash
# setup-production.sh
# Usage: ssh root@your_droplet_ip 'bash -s' < setup-production.sh
set -euo pipefail

DOMAIN="yourdomain.com"
APP_NAME="my-app"
APP_PORT=3000
DEPLOY_USER="deploy"
APP_DIR="/home/$DEPLOY_USER/$APP_NAME"
NODE_VERSION="20"

echo "===== Step 1: System Update ====="
apt update && apt upgrade -y

echo "===== Step 2: Create Deploy User ====="
adduser --disabled-password --gecos "" $DEPLOY_USER
usermod -aG sudo $DEPLOY_USER
echo "$DEPLOY_USER ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers.d/$DEPLOY_USER

# Copy SSH keys
rsync --archive --chown=$DEPLOY_USER:$DEPLOY_USER ~/.ssh /home/$DEPLOY_USER

echo "===== Step 3: Harden SSH ====="
sed -i 's/^#*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart sshd

echo "===== Step 4: Configure Firewall ====="
ufw allow OpenSSH
ufw allow 'Nginx Full'
ufw --force enable

echo "===== Step 5: Install Node.js via nvm ====="
su - $DEPLOY_USER -c "
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    export NVM_DIR=\"\$HOME/.nvm\"
    [ -s \"\$NVM_DIR/nvm.sh\" ] && . \"\$NVM_DIR/nvm.sh\"
    nvm install $NODE_VERSION
    nvm alias default $NODE_VERSION
    npm install -g pm2
"

echo "===== Step 6: Install Nginx ====="
apt install nginx -y

cat > /etc/nginx/sites-available/$APP_NAME << 'NGINX'
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      "";
}

upstream nodejs {
    server 127.0.0.1:APP_PORT_PLACEHOLDER;
    keepalive 64;
}

server {
    listen 80;
    server_name DOMAIN_PLACEHOLDER www.DOMAIN_PLACEHOLDER;

    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    location / {
        proxy_pass http://nodejs;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
NGINX

# Replace placeholders
sed -i "s/APP_PORT_PLACEHOLDER/$APP_PORT/g" /etc/nginx/sites-available/$APP_NAME
sed -i "s/DOMAIN_PLACEHOLDER/$DOMAIN/g" /etc/nginx/sites-available/$APP_NAME

ln -sf /etc/nginx/sites-available/$APP_NAME /etc/nginx/sites-enabled/
rm -f /etc/nginx/sites-enabled/default
nginx -t && systemctl reload nginx

echo "===== Step 7: SSL with Let's Encrypt ====="
apt install certbot python3-certbot-nginx -y
certbot --nginx -d $DOMAIN -d www.$DOMAIN --non-interactive --agree-tos -m admin@$DOMAIN

echo "===== Step 8: Configure Swap ====="
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab
sysctl vm.swappiness=10
echo 'vm.swappiness=10' >> /etc/sysctl.conf

echo "===== Step 9: Install Security Tools ====="
apt install fail2ban unattended-upgrades -y

cat > /etc/fail2ban/jail.local << 'F2B'
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true
maxretry = 3
F2B

systemctl enable fail2ban
systemctl start fail2ban

echo "===== Step 10: Install Monitoring Agent ====="
curl -sSL https://repos.insights.digitalocean.com/install.sh | bash

echo "===== Step 11: Create Application Directory ====="
su - $DEPLOY_USER -c "
    mkdir -p $APP_DIR
    mkdir -p /home/$DEPLOY_USER/logs
    mkdir -p /home/$DEPLOY_USER/backups
"

echo "===== Step 12: Configure Log Rotation ====="
cat > /etc/logrotate.d/$APP_NAME << 'LOGROTATE'
/home/deploy/logs/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 deploy deploy
    sharedscripts
    postrotate
        su - deploy -c "pm2 reloadLogs" > /dev/null 2>&1
    endscript
}
LOGROTATE

echo "===== Step 13: Create PM2 Ecosystem File ====="
su - $DEPLOY_USER -c "
cat > $APP_DIR/ecosystem.config.js << 'PM2CONFIG'
module.exports = {
  apps: [{
    name: \"$APP_NAME\",
    script: \"app.js\",
    instances: \"max\",
    exec_mode: \"cluster\",
    node_args: \"--max-old-space-size=1024\",
    max_memory_restart: \"1200M\",
    env: {
      NODE_ENV: \"production\",
      PORT: $APP_PORT
    },
    log_date_format: \"YYYY-MM-DD HH:mm:ss Z\",
    error_file: \"/home/$DEPLOY_USER/logs/app-error.log\",
    out_file: \"/home/$DEPLOY_USER/logs/app-out.log\",
    merge_logs: true
  }]
};
PM2CONFIG
"

echo ""
echo "========================================"
echo "  Production setup complete!"
echo "========================================"
echo ""
echo "Next steps:"
echo "  1. Deploy your app to $APP_DIR"
echo "  2. Run: ssh $DEPLOY_USER@$DOMAIN"
echo "  3. cd $APP_DIR && npm ci --production"
echo "  4. pm2 start ecosystem.config.js"
echo "  5. pm2 startup systemd && pm2 save"
echo ""

Save this script and run it against a fresh Droplet. In about 5 minutes you have a production-ready environment with SSL, firewall, monitoring, security hardening, log rotation, and PM2 process management.


Common Issues & Troubleshooting

1. EACCES Permission Denied on Port 80

Error: listen EACCES: permission denied 0.0.0.0:80

Cause: You are trying to bind Node.js directly to port 80, which requires root privileges.

Fix: Never bind Node.js to ports below 1024. Use Nginx as a reverse proxy on port 80/443 and have Node.js listen on a high port (3000, 8080, etc.). This is the entire reason we set up Nginx in this guide.

2. PM2 Cluster Mode Causes Shared State Issues

Error: Session store loses sessions between requests
TypeError: Cannot read property 'userId' of undefined

Cause: In cluster mode, each worker is a separate process with its own memory space. If you store sessions in memory (the default express-session store), each worker has different session data. Requests hitting different workers will appear to have no session.

Fix: Use an external session store that all workers can access:

var session = require("express-session");
var MongoStore = require("connect-mongo");

app.use(session({
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  store: MongoStore.create({
    mongoUrl: process.env.MONGODB_URI
  })
}));

3. Nginx 502 Bad Gateway

502 Bad Gateway
nginx/1.18.0 (Ubuntu)

Check the Nginx error log:

sudo tail -20 /var/log/nginx/error.log
connect() failed (111: Connection refused) while connecting to upstream

Cause: Nginx cannot reach Node.js. Either the app is not running, it is listening on a different port, or PM2 has not started yet.

Fix:

# Check if the app is running
pm2 status

# Check what port it is listening on
ss -tlnp | grep node

# Restart if needed
pm2 restart my-app

4. Out of Memory: Node.js Process Killed

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

# or in dmesg:
Out of memory: Killed process 12847 (node) total-vm:1856432kB

Cause: Your Node.js process exceeded the available memory. The Linux OOM killer terminated it.

Fix: A multi-pronged approach:

  1. Add swap space (see Swap Configuration section above)
  2. Set --max-old-space-size appropriately for your Droplet size
  3. Set max_memory_restart in PM2 to catch leaks before OOM
  4. Profile your application for memory leaks using node --inspect and Chrome DevTools (a lighter-weight first pass is sketched below)
  5. Consider upgrading to a larger Droplet if legitimate memory usage exceeds capacity
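
For that lighter-weight first pass, logging heap usage on an interval is often enough to confirm whether memory climbs steadily under load before you reach for the full profiler. A minimal sketch (the file name is illustrative):

// memory-monitor.js -- require("./memory-monitor") once at startup
setInterval(function () {
  var mem = process.memoryUsage();
  var mb = function (bytes) { return Math.round(bytes / 1024 / 1024); };
  // heapUsed climbing steadily across hours of traffic suggests a leak
  console.log("[mem] rss=" + mb(mem.rss) + "MB heapUsed=" + mb(mem.heapUsed) +
    "MB heapTotal=" + mb(mem.heapTotal) + "MB");
}, 30000);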

5. Let's Encrypt Certificate Renewal Fails

Certbot failed to authenticate some domains (authenticator: nginx).
The following errors were reported by the server:
Domain: yourdomain.com
Type:   unauthorized
Detail: Invalid response from http://yourdomain.com/.well-known/acme-challenge/...

Cause: The ACME challenge cannot reach your server. Common reasons: DNS not pointed correctly, firewall blocking port 80, Nginx not serving the .well-known directory.

Fix:

# Verify DNS
dig yourdomain.com +short

# Verify port 80 is open
sudo ufw status | grep 80

# Verify Nginx is serving the challenge directory
curl -I http://yourdomain.com/.well-known/acme-challenge/test

# Try renewal with verbose output
sudo certbot renew --dry-run -v

Best Practices

  • Never run Node.js as root. Create a dedicated deploy user with limited sudo access. Root compromise of your Node.js process means root compromise of your entire server.

  • Always use PM2 cluster mode on multi-core Droplets. A single Node.js process can only use one CPU core. On a 2-vCPU Droplet, a single process wastes 50% of your compute capacity. Cluster mode fixes this with zero code changes.

  • Let Nginx serve static files. Nginx is dramatically faster at serving static assets than Node.js. Every request for a CSS file, image, or JavaScript bundle that hits Node.js is wasted application capacity. Offload all of /static/ to Nginx with long cache headers.

  • Set max_memory_restart in PM2 slightly above your expected peak. This acts as a circuit breaker for memory leaks. A slow leak that would eventually crash your server and require manual intervention instead triggers an automatic restart with minimal downtime.

  • Monitor before you optimize. Install the DigitalOcean metrics agent on day one. Let it collect data for at least a week before making sizing decisions. I have seen engineers throw money at 8-vCPU Droplets for applications that never exceed 15% CPU on a 2-vCPU plan.

  • Use Reserved IPs for production. When you need to rebuild or replace your Droplet, a Reserved IP lets you do so without touching DNS records. The IP moves to the new Droplet and traffic flows immediately.

  • Enable automated security updates. An unpatched server is a compromised server. Unattended upgrades handle security patches automatically. The rare case where a patch breaks something is far less costly than a security breach.

  • Test your deployment script on a throwaway Droplet. Before running your setup script against production, spin up a $6 Droplet, run the script, verify everything works, then destroy it. This costs less than a dollar and saves you from debugging a broken production server.

  • Use npm ci instead of npm install in production. npm ci installs exact versions from package-lock.json and deletes node_modules first, ensuring a clean, reproducible install. npm install can silently upgrade minor versions and mutate your lock file.

