DigitalOcean Load Balancers for High Availability

A practical guide to DigitalOcean Load Balancers covering setup, health checks, SSL termination, sticky sessions, and high-availability patterns for Node.js applications.

A single server is a single point of failure. If it crashes, runs out of memory, or needs a security patch, your application goes offline. Load balancers solve this by distributing traffic across multiple servers. If one server fails, the load balancer routes traffic to the remaining healthy servers while the failed one recovers.

DigitalOcean Load Balancers sit between the internet and your Droplets. They accept incoming connections, check which backend servers are healthy, and forward requests to them. They handle SSL termination, support sticky sessions, and integrate with DigitalOcean's firewall and VPC networking. This guide covers setting up load balancers for Node.js applications and configuring them for production reliability.

Prerequisites

  • A DigitalOcean account
  • Two or more Droplets running your Node.js application
  • A domain name (for SSL)
  • doctl CLI installed (optional)

How Load Balancers Work

A load balancer receives all incoming traffic on a public IP address. It maintains a list of backend Droplets and distributes requests among them using one of several algorithms. Before forwarding a request, it checks whether the target Droplet is healthy by running periodic health checks.

The flow:

  1. Client sends a request to the load balancer's IP (or domain)
  2. Load balancer selects a healthy backend Droplet
  3. Load balancer forwards the request to that Droplet
  4. Droplet processes the request and responds to the load balancer
  5. Load balancer forwards the response to the client

The client never communicates directly with your Droplets. This provides security (Droplet IPs are not exposed), flexibility (add or remove Droplets without DNS changes), and reliability (automatic failover).
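The selection step can be illustrated with a toy round-robin picker. This is a sketch of the concept only — the real selection happens inside the load balancer — but it shows the key behavior: unhealthy backends are skipped entirely.

```javascript
// Toy round-robin backend picker: cycles through healthy Droplets in order.
// Illustrative only -- the real selection happens inside the load balancer.
function createRoundRobin(backends) {
  var index = 0;
  return function pick() {
    var healthy = backends.filter(function (b) { return b.healthy; });
    if (healthy.length === 0) return null; // no healthy backend -> 502
    var choice = healthy[index % healthy.length];
    index++;
    return choice;
  };
}

var pick = createRoundRobin([
  { name: "web-1", healthy: true },
  { name: "web-2", healthy: true },
  { name: "web-3", healthy: false } // failing health checks -> skipped
]);

console.log(pick().name); // web-1
console.log(pick().name); // web-2
console.log(pick().name); // web-1
```

When every backend is unhealthy, there is nothing to pick — which is exactly the situation that produces the 502 Bad Gateway responses discussed in the troubleshooting section.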

Creating a Load Balancer

Via Dashboard

  1. Navigate to Networking > Load Balancers
  2. Click Create Load Balancer
  3. Choose a datacenter region (same region as your Droplets)
  4. Select a size:
    • Small ($12/month) — 10,000 simultaneous connections
    • Medium ($48/month) — 50,000 simultaneous connections
    • Large ($96/month) — 100,000+ simultaneous connections
  5. Configure forwarding rules (HTTP port 80 to port 3000)
  6. Add your Droplets by name or tag
  7. Click Create Load Balancer

Via CLI

# Create a load balancer
doctl compute load-balancer create \
  --name my-lb \
  --region nyc3 \
  --size lb-small \
  --forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:3000" \
  --health-check "protocol:http,port:3000,path:/health,check_interval_seconds:10,response_timeout_seconds:5,unhealthy_threshold:3,healthy_threshold:5"

Adding Droplets

# Add Droplets by ID
doctl compute load-balancer add-droplets YOUR_LB_ID \
  --droplet-ids 123456,789012

# Or use tags — any Droplet with this tag is automatically added
doctl compute load-balancer update YOUR_LB_ID \
  --tag-name web-server

Using tags is the recommended approach. When you create a new Droplet and tag it web-server, the load balancer automatically includes it. When you destroy a Droplet, it is automatically removed.

Preparing Your Node.js Application

Health Check Endpoint

The load balancer needs an endpoint to verify your application is running:

// app.js
var express = require("express");
var app = express();

app.get("/health", function(req, res) {
  res.status(200).json({
    status: "healthy",
    uptime: process.uptime(),
    timestamp: Date.now()
  });
});

A simple 200 response is sufficient. The load balancer only checks the HTTP status code. For more thorough checks, verify database connectivity:

var db = require("./db");

app.get("/health", function(req, res) {
  db.query("SELECT 1")
    .then(function() {
      res.status(200).json({ status: "healthy" });
    })
    .catch(function(err) {
      console.error("Health check failed:", err.message);
      res.status(503).json({ status: "unhealthy", error: err.message });
    });
});

If the database is down, the health check returns 503, and the load balancer stops sending traffic to that Droplet.

Handling Proxy Headers

When requests pass through a load balancer, the client's real IP address arrives in the X-Forwarded-For header; by default, req.ip returns the load balancer's address instead. Configure Express to trust the proxy:

// app.js
var app = express();

// Trust the load balancer proxy
app.set("trust proxy", true);

// Now req.ip returns the client's real IP
app.use(function(req, res, next) {
  console.log("Client IP:", req.ip);
  console.log("Protocol:", req.protocol); // "https" when X-Forwarded-Proto is set
  next();
});

The trust proxy setting tells Express to use X-Forwarded-For for req.ip and X-Forwarded-Proto for req.protocol. Note that true trusts every proxy in the chain, which lets clients spoof X-Forwarded-For if they can reach a Droplet directly; when the load balancer is the only hop in front of your app, the hop-count form app.set("trust proxy", 1) is the safer choice.

Stateless Application Design

With multiple backend servers, any server might handle any request. Your application must be stateless: it must not store data in memory that other instances need.

// BAD — session data stored in memory
var sessions = {};
app.use(function(req, res, next) {
  var sessionId = req.cookies.sid;
  req.session = sessions[sessionId] || {};
  next();
});

// GOOD — session data stored in a shared database or Redis
var session = require("express-session");
var RedisStore = require("connect-redis").default;
var redis = require("redis");

var redisClient = redis.createClient({
  url: process.env.REDIS_URL
});
redisClient.connect().catch(console.error);

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: { secure: true }
}));

Shared state — sessions, file uploads, caches — must live in a database, Redis, or object storage accessible to all instances.

Forwarding Rules

Forwarding rules define how the load balancer routes traffic.

HTTP Only

doctl compute load-balancer update YOUR_LB_ID \
  --forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:3000"

HTTP to HTTPS Redirect + SSL Termination

doctl compute load-balancer update YOUR_LB_ID \
  --forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:3000 entry_protocol:https,entry_port:443,target_protocol:http,target_port:3000,certificate_id:YOUR_CERT_ID" \
  --redirect-http-to-https

With SSL termination, the load balancer handles HTTPS and forwards plain HTTP to your Droplets. Your Node.js application does not need SSL configuration.
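If you also want the application itself to enforce HTTPS — for example, as a safety net when the --redirect-http-to-https flag is not set — a small middleware can check X-Forwarded-Proto. This is a sketch, assuming the load balancer sets that header and that health checks should never be redirected:

```javascript
// Redirect plain-HTTP requests to HTTPS based on the X-Forwarded-Proto
// header set by the load balancer. Works as ordinary Express middleware.
function forceHttps(req, res, next) {
  if (req.url === "/health") return next(); // never redirect health checks

  var proto = req.headers["x-forwarded-proto"];
  if (proto && proto !== "https") {
    res.writeHead(301, { Location: "https://" + req.headers.host + req.url });
    res.end();
    return;
  }
  next(); // already HTTPS, or no proxy header present
}

// app.use(forceHttps);
```

Letting the load balancer handle the redirect is simpler; this app-level check only matters if traffic can reach the Droplets by some other path.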

Adding SSL Certificates

# Let DigitalOcean manage the certificate (recommended)
doctl compute certificate create \
  --name my-cert \
  --type lets_encrypt \
  --dns-names myapp.example.com,www.myapp.example.com

# Or upload your own certificate
doctl compute certificate create \
  --name my-cert \
  --type custom \
  --private-key-path ./privkey.pem \
  --leaf-certificate-path ./cert.pem \
  --certificate-chain-path ./chain.pem

DigitalOcean automatically renews Let's Encrypt certificates. This is the simplest SSL setup — no Certbot or Nginx configuration on your Droplets.

Health Checks

Configuration

doctl compute load-balancer update YOUR_LB_ID \
  --health-check "protocol:http,port:3000,path:/health,check_interval_seconds:10,response_timeout_seconds:5,unhealthy_threshold:3,healthy_threshold:5"

Parameters:

  • protocol — HTTP or TCP
  • port — the port your application listens on
  • path — the endpoint to check (HTTP only)
  • check_interval_seconds — how often to check (default 10)
  • response_timeout_seconds — how long to wait for a response (default 5)
  • unhealthy_threshold — consecutive failures before marking unhealthy (default 3)
  • healthy_threshold — consecutive successes before marking healthy again (default 5)

How Failover Works

With the default settings:

  1. Load balancer checks /health every 10 seconds
  2. If a Droplet fails to respond 3 times in a row (30 seconds), it is marked unhealthy
  3. Traffic stops flowing to the unhealthy Droplet
  4. When the Droplet passes 5 consecutive checks (50 seconds), it is marked healthy again
  5. Traffic resumes to the recovered Droplet

During failover, existing connections to the unhealthy Droplet may be interrupted. New connections are immediately routed to healthy Droplets.
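You can use this behavior deliberately when taking a Droplet out of rotation: flip the health endpoint to 503 first, wait for the load balancer to mark the Droplet unhealthy, then stop the process. A sketch of a drain-aware handler (plain handler functions, usable with the Express app from earlier):

```javascript
// Drain-aware health check: returns 503 once draining starts, so the
// load balancer stops sending new traffic before the process exits.
var draining = false;

function healthHandler(req, res) {
  if (draining) {
    res.statusCode = 503;
    res.end(JSON.stringify({ status: "draining" }));
    return;
  }
  res.statusCode = 200;
  res.end(JSON.stringify({ status: "healthy" }));
}

function startDraining() {
  draining = true;
  // With the defaults above (10s interval, 3 failures), the load balancer
  // marks this Droplet unhealthy within roughly 30 seconds.
}

// app.get("/health", healthHandler);
// process.on("SIGTERM", startDraining);
```

Delaying process exit until after the drain window keeps in-flight requests from being cut off mid-response.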

Graceful Shutdown

When deploying updates, shut down gracefully so the load balancer has time to stop routing traffic:

// server.js
var http = require("http");
var app = require("./app");

var server = http.createServer(app);
var port = process.env.PORT || 3000;

server.listen(port, function() {
  console.log("Server listening on port " + port);
});

// Track active connections
var connections = {};
var nextId = 0;

server.on("connection", function(conn) {
  var id = nextId++;
  connections[id] = conn;
  conn.on("close", function() {
    delete connections[id];
  });
});

function shutdown() {
  console.log("Shutting down gracefully...");

  // Stop accepting new connections
  server.close(function() {
    console.log("Server closed");
    process.exit(0);
  });

  // Close idle keep-alive connections
  Object.keys(connections).forEach(function(id) {
    connections[id].end();
  });

  // Force close after 30 seconds
  setTimeout(function() {
    console.error("Forcing shutdown after timeout");
    process.exit(1);
  }, 30000);
}

process.on("SIGTERM", shutdown);
process.on("SIGINT", shutdown);

When PM2 sends SIGTERM during a reload, this code stops accepting new connections, finishes in-progress requests, and shuts down cleanly. The load balancer detects the failed health check and routes new traffic elsewhere.

Load Balancing Algorithms

Round Robin (Default)

Distributes requests evenly across all healthy Droplets in order. Simple and effective when all Droplets have similar resources.

Least Connections

Routes each request to the Droplet with the fewest active connections. Better for workloads where request processing time varies.

doctl compute load-balancer update YOUR_LB_ID \
  --algorithm least_connections

Choose least connections when:

  • Some requests take much longer than others (file uploads, report generation)
  • Droplets have different resource allocations
  • WebSocket connections keep connections open

Choose round robin when:

  • Requests have similar processing times
  • All Droplets are the same size
  • Stateless API endpoints

Sticky Sessions

Sticky sessions route all requests from the same client to the same backend Droplet. Use them when your application stores temporary state in memory that must persist across requests.

Enabling Sticky Sessions

doctl compute load-balancer update YOUR_LB_ID \
  --sticky-sessions "type:cookies,cookie_name:DO-LB-COOKIE,cookie_ttl_seconds:300"

The load balancer sets a cookie (DO-LB-COOKIE) containing the target Droplet identifier. Subsequent requests with this cookie are routed to the same Droplet.

When to Use Sticky Sessions

  • WebSocket connections — the initial HTTP upgrade and subsequent frames must reach the same server
  • Multi-step forms — when form state is stored in server memory between steps
  • File upload processing — when uploaded files are processed in stages on the same server

When to Avoid Sticky Sessions

  • Stateless APIs — no benefit; round robin is more balanced
  • Session data in Redis/database — any server can handle any request
  • Uneven load distribution — sticky sessions can cause hotspots where one Droplet handles more traffic than others

High Availability Patterns

Multi-Droplet Setup

The minimum high-availability setup uses two Droplets:

# Create two identical Droplets
doctl compute droplet create web-1 web-2 \
  --image ubuntu-22-04-x64 \
  --size s-1vcpu-2gb \
  --region nyc3 \
  --ssh-keys YOUR_KEY_ID \
  --tag-name web-server

# Create load balancer targeting the tag
doctl compute load-balancer create \
  --name web-lb \
  --region nyc3 \
  --size lb-small \
  --tag-name web-server \
  --forwarding-rules "entry_protocol:http,entry_port:80,target_protocol:http,target_port:3000" \
  --health-check "protocol:http,port:3000,path:/health,check_interval_seconds:10,response_timeout_seconds:5,unhealthy_threshold:3,healthy_threshold:5"

Rolling Deployments

Deploy updates to one Droplet at a time so the other continues serving traffic:

#!/bin/bash
# rolling-deploy.sh

DROPLETS="web-1 web-2"

for DROPLET in $DROPLETS; do
  echo "Deploying to $DROPLET..."

  # Deploy to this Droplet
  ssh deploy@$DROPLET "cd /var/www/myapp && git pull origin main && npm install --production && pm2 reload myapp"

  echo "Waiting for $DROPLET to pass health checks..."
  sleep 60  # Wait for health check to confirm healthy

  echo "$DROPLET deployed successfully"
done

echo "Rolling deployment complete"

With two Droplets:

  1. Deploy to web-1 — PM2 restarts the app
  2. Load balancer detects web-1 health check failure, routes all traffic to web-2
  3. Web-1 passes health checks, traffic is balanced again
  4. Deploy to web-2 — same process
  5. Both Droplets running the new version
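The fixed sleep 60 in the script above can be replaced with an active health poll. A sketch in Node (assuming Node 18+ for the global fetch, and that the deploy machine can reach each Droplet's /health endpoint — for example over the VPC, since a production firewall typically blocks public access to the application port):

```javascript
// Poll a health endpoint until it returns 200, or give up after a timeout.
// The fetch function is injectable so the logic is easy to test.
function waitForHealthy(url, opts, fetchFn) {
  var interval = (opts && opts.intervalMs) || 10000;
  var deadline = Date.now() + ((opts && opts.timeoutMs) || 120000);
  var doFetch = fetchFn || fetch; // global fetch on Node 18+

  function attempt(resolve, reject) {
    doFetch(url)
      .then(function (res) {
        if (res.status === 200) return resolve(url + " is healthy");
        throw new Error("status " + res.status);
      })
      .catch(function (err) {
        if (Date.now() >= deadline) {
          return reject(new Error(url + " not healthy: " + err.message));
        }
        setTimeout(function () { attempt(resolve, reject); }, interval);
      });
  }

  return new Promise(attempt);
}

// waitForHealthy("http://10.10.10.2:3000/health").then(console.log, console.error);
```

Polling removes the guesswork: the script proceeds as soon as the Droplet is actually healthy, and fails loudly if it never recovers.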

Blue-Green Deployment

For zero-risk deployments, run two complete environments:

# Blue environment (current production)
doctl compute droplet create blue-1 blue-2 \
  --tag-name blue-env \
  --image ubuntu-22-04-x64 \
  --size s-1vcpu-2gb \
  --region nyc3

# Green environment (new version)
doctl compute droplet create green-1 green-2 \
  --tag-name green-env \
  --image ubuntu-22-04-x64 \
  --size s-1vcpu-2gb \
  --region nyc3

# Deploy and test on green environment
# Then switch the load balancer
doctl compute load-balancer update YOUR_LB_ID \
  --tag-name green-env

# If something goes wrong, switch back
doctl compute load-balancer update YOUR_LB_ID \
  --tag-name blue-env

Switching the tag takes effect immediately. All new connections go to the green environment. Rollback is instant.

VPC and Network Security

Private Networking

Place your Droplets and load balancer in a VPC (Virtual Private Cloud):

# Create a VPC
doctl vpcs create \
  --name my-vpc \
  --region nyc3 \
  --ip-range 10.10.10.0/24

# Create Droplets in the VPC
doctl compute droplet create web-1 \
  --image ubuntu-22-04-x64 \
  --size s-1vcpu-2gb \
  --region nyc3 \
  --vpc-uuid YOUR_VPC_ID

Firewall Rules

Restrict Droplet access so only the load balancer can reach them:

# Create a firewall that only allows traffic from the load balancer
doctl compute firewall create \
  --name web-firewall \
  --inbound-rules "protocol:tcp,ports:3000,load_balancer_uids:YOUR_LB_ID protocol:tcp,ports:22,address:YOUR_IP/32" \
  --outbound-rules "protocol:tcp,ports:all,address:0.0.0.0/0 protocol:udp,ports:all,address:0.0.0.0/0" \
  --droplet-ids YOUR_DROPLET_IDS

This configuration:

  • Allows the load balancer to reach port 3000 on your Droplets
  • Allows SSH from your IP only
  • Blocks all other inbound traffic
  • Allows all outbound traffic (for npm install, API calls, etc.)

Monitoring Load Balancer Performance

Dashboard Metrics

DigitalOcean provides load balancer metrics in the dashboard:

  • Request rate — requests per second across all backends
  • Bandwidth — inbound and outbound data transfer
  • HTTP response codes — 2xx, 3xx, 4xx, 5xx distribution
  • Backend health — number of healthy vs unhealthy Droplets
  • Connection count — active connections to the load balancer

Application-Level Metrics

Track which backend is handling each request:

var os = require("os");
var hostname = os.hostname();

app.use(function(req, res, next) {
  var start = Date.now();

  res.on("finish", function() {
    console.log(JSON.stringify({
      server: hostname,
      method: req.method,
      path: req.path,
      status: res.statusCode,
      duration: Date.now() - start,
      clientIp: req.ip,
      timestamp: new Date().toISOString()
    }));
  });

  next();
});

This logs the hostname of the Droplet that handled each request, making it easy to verify traffic distribution and identify slow instances.

Common Issues and Troubleshooting

Load balancer returns 502 Bad Gateway

No healthy backend Droplets are available:

Fix: Check that your application is running on all Droplets. Verify the health check endpoint returns 200. Ensure the health check port matches the port your application listens on. Check firewall rules allow the load balancer to reach your Droplets.

Uneven traffic distribution

One Droplet handles significantly more requests than others:

Fix: If using sticky sessions, some clients generate more requests than others. Switch to round robin without sticky sessions if possible. Check that all Droplets are passing health checks — a flapping Droplet (alternating healthy/unhealthy) can cause uneven distribution.

SSL certificate errors

The load balancer cannot serve HTTPS:

Fix: Verify the certificate is attached to the load balancer's forwarding rules. For Let's Encrypt certificates, ensure DNS records point to the load balancer IP, not individual Droplets. Check certificate expiration — DigitalOcean auto-renews, but DNS changes can break renewal.

WebSocket connections drop

WebSocket upgrades fail through the load balancer:

Fix: Use sticky sessions so the WebSocket handshake and subsequent frames reach the same backend. Increase the idle timeout on the load balancer. Implement reconnection logic in the client.
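For the client-side reconnection logic, exponential backoff with a cap is the usual pattern. A sketch — the delay calculation is generic; the connect wiring assumes a browser-style WebSocket global (available in browsers and recent Node versions) and is illustrative:

```javascript
// Exponential backoff with a cap: 1s, 2s, 4s, ... up to 30s by default.
function backoffDelay(attempt, baseMs, maxMs) {
  var delay = (baseMs || 1000) * Math.pow(2, attempt);
  return Math.min(delay, maxMs || 30000);
}

// Illustrative reconnect wiring (assumes a WebSocket global):
function connectWithRetry(url, attempt) {
  attempt = attempt || 0;
  var ws = new WebSocket(url);
  ws.onopen = function () { attempt = 0; }; // reset backoff on success
  ws.onclose = function () {
    setTimeout(function () {
      connectWithRetry(url, attempt + 1);
    }, backoffDelay(attempt));
  };
  return ws;
}
```

The cap matters: without it, a few hours of downtime would push retry intervals out so far that clients effectively never reconnect.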

Health check passes but application is broken

The /health endpoint returns 200 but the application is not functioning:

Fix: Make the health check more thorough — verify database connectivity, check critical dependencies, test a representative operation. Return 503 when any critical dependency is unavailable.
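A more thorough health handler can probe several dependencies in parallel and report which one failed. A sketch using Promise.allSettled — the db and redisClient names in the usage comment are placeholders for whatever your application actually depends on:

```javascript
// Run all dependency probes in parallel; healthy only if every probe resolves.
function checkDependencies(checks) {
  var names = Object.keys(checks);
  var probes = names.map(function (name) { return checks[name](); });

  return Promise.allSettled(probes).then(function (results) {
    var failures = {};
    results.forEach(function (result, i) {
      if (result.status === "rejected") {
        failures[names[i]] = String((result.reason && result.reason.message) || result.reason);
      }
    });
    return {
      healthy: Object.keys(failures).length === 0,
      failures: failures
    };
  });
}

// Usage with Express (db and redisClient are hypothetical placeholders):
// app.get("/health", function (req, res) {
//   checkDependencies({
//     database: function () { return db.query("SELECT 1"); },
//     redis: function () { return redisClient.ping(); }
//   }).then(function (result) {
//     res.status(result.healthy ? 200 : 503).json(result);
//   });
// });
```

Naming the failed dependency in the 503 body makes the load balancer's "backend unhealthy" signal immediately actionable when you check the logs.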

Best Practices

  • Use tags instead of Droplet IDs. Tags make scaling automatic. New Droplets with the correct tag are added to the load balancer immediately.
  • Place the load balancer in the same region as your Droplets. Cross-region load balancing adds latency. Each region should have its own load balancer.
  • Terminate SSL at the load balancer. This simplifies Droplet configuration — no certificate management or renewal on individual servers.
  • Make your application stateless. Store sessions, caches, and uploads in shared storage. Any Droplet should be able to handle any request.
  • Use health checks that test real functionality. A simple HTTP 200 confirms the process is running, but checking database connectivity confirms the application can actually serve requests.
  • Deploy with rolling updates. Never update all Droplets simultaneously. Deploy to one at a time and verify health before proceeding.
  • Restrict Droplet access via firewall. Only the load balancer should reach your application port. Direct access to Droplets bypasses SSL and load balancing.
  • Monitor response code distribution. A spike in 5xx responses from a specific backend indicates a problem on that Droplet, not a load balancer issue.
