Production Deployment Checklist for Node.js

A comprehensive production deployment checklist for Node.js applications covering process management, reverse proxy, logging, monitoring, security, and zero-downtime deployments.

Overview

Shipping a Node.js application to production is fundamentally different from running it on your laptop. Production demands process supervision, proper logging, reverse proxying, SSL termination, graceful shutdown handling, health checks, monitoring, and a clear rollback plan. This article is the checklist I wish I had ten years ago — every item earned through outages, late-night debugging sessions, and hard lessons about what breaks when real traffic hits your server.

Prerequisites

  • Node.js v18+ installed on your target server
  • Basic familiarity with Express.js
  • A Linux server (Ubuntu 22.04 or similar) with SSH access
  • A registered domain name with DNS pointed to your server
  • Familiarity with the command line and basic Nginx configuration

1. Environment Configuration

Set NODE_ENV to Production

This single variable changes how Express, Pug, and dozens of npm packages behave. Express enables view caching, disables verbose error output, and optimizes template compilation. Many libraries reduce logging verbosity and enable internal caches.

export NODE_ENV=production

Never rely on setting this inside your code. Set it in your process manager configuration or your deployment environment. If NODE_ENV is not set, Express defaults to development mode, which leaks stack traces to clients and skips template caching.
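
If you want a belt-and-braces check, a small startup guard can flag a misconfigured environment without setting the variable itself. This is only a sketch; whether to warn or exit here is a judgment call:

// Optional startup guard (sketch): warn loudly if NODE_ENV was not set to production.
// It does not set the variable; that stays the job of the process manager.
if (process.env.NODE_ENV !== "production") {
  console.warn(
    "WARNING: NODE_ENV is '" + (process.env.NODE_ENV || "undefined") + "'. " +
    "Express will run in development mode: verbose errors, no view caching."
  );
}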

Environment Variable Management

Store configuration in environment variables, not in code. Use a .env file for local development and inject real values through your deployment platform or process manager.

# .env.example — commit this, never commit .env
PORT=3000
NODE_ENV=production
DB_MONGO=mongodb://localhost:27017/myapp
SESSION_SECRET=change-me-in-production
REDIS_URL=redis://localhost:6379
SENTRY_DSN=https://your-public-key@your-org.ingest.sentry.io/0

Load environment variables early, before any other module initializes:

require("dotenv").config();

var express = require("express");
var app = express();
var port = process.env.PORT || 3000;

Validate required variables at startup. Do not let your app start if critical configuration is missing:

var requiredVars = ["DB_MONGO", "SESSION_SECRET", "SENTRY_DSN"];

requiredVars.forEach(function(varName) {
  if (!process.env[varName]) {
    console.error("FATAL: Missing required environment variable: " + varName);
    process.exit(1);
  }
});

2. Process Management

Node.js runs as a single process by default. If it crashes, your app goes down with no automatic recovery. You need a process manager.

PM2

PM2 is the most widely used process manager for Node.js in production. It handles automatic restarts, cluster mode, log management, and zero-downtime reloads.

npm install -g pm2

Create an ecosystem file for repeatable configuration:

// ecosystem.config.js
module.exports = {
  apps: [{
    name: "myapp",
    script: "./app.js",
    instances: "max",
    exec_mode: "cluster",
    env_production: {
      NODE_ENV: "production",
      PORT: 3000
    },
    max_memory_restart: "512M",
    log_date_format: "YYYY-MM-DD HH:mm:ss Z",
    error_file: "/var/log/myapp/error.log",
    out_file: "/var/log/myapp/out.log",
    merge_logs: true,
    kill_timeout: 5000,
    listen_timeout: 10000,
    wait_ready: true
  }]
};

Start your application:

pm2 start ecosystem.config.js --env production
pm2 save
pm2 startup

The pm2 startup command generates a systemd service so PM2 itself restarts after a server reboot. The pm2 save command serializes the current process list so PM2 can restore it on startup.

Systemd Alternative

If you prefer not to use PM2, systemd works directly:

# /etc/systemd/system/myapp.service
[Unit]
Description=My Node.js Application
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/node app.js
Restart=on-failure
RestartSec=5
Environment=NODE_ENV=production
Environment=PORT=3000
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target

Enable and start the service:

sudo systemctl enable myapp
sudo systemctl start myapp
sudo systemctl status myapp

3. Reverse Proxy with Nginx

Never expose Node.js directly to the internet. Nginx handles SSL termination, static file serving, request buffering, connection limiting, and load balancing far more efficiently than Node.js can.

# /etc/nginx/sites-available/myapp
upstream nodejs_backend {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 1d;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    # Static files — serve directly from Nginx
    location /static/ {
        alias /opt/myapp/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }

    # Proxy to Node.js
    location / {
        proxy_pass http://nodejs_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90s;
        proxy_send_timeout 90s;
    }

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml text/javascript image/svg+xml;
}

Test the configuration and reload Nginx:

sudo nginx -t
sudo systemctl reload nginx

4. SSL/TLS Configuration

Use Let's Encrypt with Certbot for free, automated SSL certificates:

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com

Certbot installs a cron job for automatic renewal. Verify it works:

sudo certbot renew --dry-run

If you are behind a load balancer or CDN that terminates SSL, tell Express to trust the proxy so req.protocol and req.secure work correctly:

app.set("trust proxy", 1);

5. Structured Logging

console.log is not production logging. You need structured, leveled, rotatable logs. Winston and Pino are the two dominant choices. Pino is faster; Winston is more flexible. I use Pino for high-throughput services and Winston when I need custom transports.

var pino = require("pino");

var logger = pino({
  level: process.env.LOG_LEVEL || "info",
  timestamp: pino.stdTimeFunctions.isoTime,
  formatters: {
    level: function(label) {
      return { level: label };
    }
  },
  redact: {
    paths: ["req.headers.authorization", "req.headers.cookie", "body.password"],
    censor: "[REDACTED]"
  }
});

module.exports = logger;

Add request logging middleware with pino-http:

var pinoHttp = require("pino-http");

app.use(pinoHttp({
  logger: logger,
  autoLogging: {
    ignore: function(req) {
      return req.url === "/health";
    }
  },
  customLogLevel: function(req, res, err) {
    if (res.statusCode >= 500 || err) return "error";
    if (res.statusCode >= 400) return "warn";
    return "info";
  }
}));

Log Rotation

If you write logs to files, rotate them to prevent disk exhaustion. Use logrotate on Linux:

# /etc/logrotate.d/myapp
/var/log/myapp/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}

Alternatively, pipe PM2 logs through pm2-logrotate:

pm2 install pm2-logrotate
pm2 set pm2-logrotate:max_size 50M
pm2 set pm2-logrotate:retain 14
pm2 set pm2-logrotate:compress true

6. Health Check Endpoints

Every production service needs a health check endpoint. Load balancers, container orchestrators, and monitoring systems poll this endpoint to determine if your service is alive and ready to accept traffic.

var mongoose = require("mongoose");

app.get("/health", function(req, res) {
  var healthcheck = {
    status: "ok",
    uptime: process.uptime(),
    timestamp: new Date().toISOString(),
    checks: {}
  };

  // Check database connectivity
  try {
    var dbState = mongoose.connection.readyState;
    healthcheck.checks.database = dbState === 1 ? "connected" : "disconnected";
    if (dbState !== 1) {
      healthcheck.status = "degraded";
    }
  } catch (err) {
    healthcheck.status = "error";
    healthcheck.checks.database = "error: " + err.message;
  }

  // Check memory usage
  var memUsage = process.memoryUsage();
  healthcheck.checks.memory = {
    rss: Math.round(memUsage.rss / 1024 / 1024) + "MB",
    heapUsed: Math.round(memUsage.heapUsed / 1024 / 1024) + "MB",
    heapTotal: Math.round(memUsage.heapTotal / 1024 / 1024) + "MB"
  };

  var statusCode = healthcheck.status === "ok" ? 200 : 503;
  res.status(statusCode).json(healthcheck);
});

Sample output:

{
  "status": "ok",
  "uptime": 86423.119,
  "timestamp": "2026-02-13T14:30:00.000Z",
  "checks": {
    "database": "connected",
    "memory": {
      "rss": "87MB",
      "heapUsed": "54MB",
      "heapTotal": "72MB"
    }
  }
}

7. Graceful Shutdown

When your process receives a termination signal (during deployments, scaling events, or server maintenance), it must stop accepting new connections and finish processing in-flight requests before exiting. Failing to do this results in dropped requests and broken client connections.

var server = app.listen(port, function() {
  logger.info("Server listening on port " + port);
  // Signal PM2 that the app is ready (if using wait_ready)
  if (process.send) {
    process.send("ready");
  }
});

function gracefulShutdown(signal) {
  logger.info("Received " + signal + ". Starting graceful shutdown...");

  // Stop accepting new connections
  server.close(function() {
    logger.info("HTTP server closed. Cleaning up resources...");

    // Close database connections
    mongoose.connection.close(false, function() {
      logger.info("MongoDB connection closed.");
      process.exit(0);
    });
  });

  // Force exit after timeout if graceful shutdown stalls
  setTimeout(function() {
    logger.error("Graceful shutdown timed out. Forcing exit.");
    process.exit(1);
  }, 10000);
}

process.on("SIGTERM", function() { gracefulShutdown("SIGTERM"); });
process.on("SIGINT", function() { gracefulShutdown("SIGINT"); });

8. Memory and CPU Monitoring

Node.js applications can develop memory leaks that are invisible until the process crashes. Monitor heap usage over time.

var os = require("os");

// Periodic metrics logging
setInterval(function() {
  var mem = process.memoryUsage();
  var cpuUsage = process.cpuUsage();

  logger.info({
    type: "metrics",
    memory: {
      rss: Math.round(mem.rss / 1024 / 1024),
      heapUsed: Math.round(mem.heapUsed / 1024 / 1024),
      heapTotal: Math.round(mem.heapTotal / 1024 / 1024),
      external: Math.round(mem.external / 1024 / 1024)
    },
    cpu: {
      user: cpuUsage.user,
      system: cpuUsage.system
    },
    system: {
      loadAvg: os.loadavg(),
      freeMemMB: Math.round(os.freemem() / 1024 / 1024),
      totalMemMB: Math.round(os.totalmem() / 1024 / 1024)
    },
    uptime: process.uptime()
  }, "Application metrics snapshot");
}, 60000);
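
Memory and CPU numbers alone will not reveal a blocked event loop. Node's built-in perf_hooks module (Node 12+) exposes an event loop delay histogram you can log alongside the metrics above. A minimal sketch, reusing the logger from earlier:

var perfHooks = require("perf_hooks");

// Samples event loop delay; histogram values are reported in nanoseconds
var loopDelay = perfHooks.monitorEventLoopDelay({ resolution: 20 });
loopDelay.enable();

setInterval(function() {
  logger.info({
    type: "event-loop",
    meanMs: Math.round(loopDelay.mean / 1e6),
    maxMs: Math.round(loopDelay.max / 1e6),
    p99Ms: Math.round(loopDelay.percentile(99) / 1e6)
  }, "Event loop delay");
  loopDelay.reset();
}, 60000);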

PM2 also provides built-in monitoring:

pm2 monit
pm2 plus   # cloud-based monitoring dashboard

Set the max_memory_restart option in your PM2 config to automatically restart workers that exceed a memory threshold. This is a safety net, not a fix — investigate the underlying leak.


9. Error Tracking with Sentry

Logs tell you something went wrong. Error tracking services like Sentry tell you exactly what, where, how often, and which users are affected. The integration is minimal:

var Sentry = require("@sentry/node");

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: require("./package.json").version,
  tracesSampleRate: 0.1,
  integrations: [
    new Sentry.Integrations.Http({ tracing: true }),
    new Sentry.Integrations.Express({ app: app })
  ]
});

// Sentry request handler — must be first middleware
app.use(Sentry.Handlers.requestHandler());
app.use(Sentry.Handlers.tracingHandler());

// ... your routes go here ...

// Sentry error handler — must be before any other error handlers
app.use(Sentry.Handlers.errorHandler());

// Your fallback error handler
app.use(function(err, req, res, next) {
  logger.error({ err: err, url: req.url }, "Unhandled error");
  res.status(500).json({ error: "Internal server error" });
});

Also catch unhandled rejections and uncaught exceptions:

process.on("unhandledRejection", function(reason, promise) {
  logger.error({ reason: reason }, "Unhandled promise rejection");
  Sentry.captureException(reason);
});

process.on("uncaughtException", function(err) {
  logger.fatal({ err: err }, "Uncaught exception — shutting down");
  Sentry.captureException(err);
  // Give Sentry time to flush
  setTimeout(function() {
    process.exit(1);
  }, 2000);
});

10. Database Connection Pooling

Opening a new database connection for every request is a performance killer. Use connection pooling.

MongoDB with Mongoose

Mongoose pools connections by default. Tune the pool size based on your workload:

var mongoose = require("mongoose");

mongoose.connect(process.env.DB_MONGO, {
  maxPoolSize: 20,
  minPoolSize: 5,
  serverSelectionTimeoutMS: 5000,
  socketTimeoutMS: 45000,
  family: 4
});

mongoose.connection.on("error", function(err) {
  logger.error({ err: err }, "MongoDB connection error");
});

mongoose.connection.on("disconnected", function() {
  logger.warn("MongoDB disconnected. Attempting reconnect...");
});

PostgreSQL with pg

var Pool = require("pg").Pool;

var pool = new Pool({
  connectionString: process.env.POSTGRES_CONNECTION_STRING,
  max: 20,
  min: 5,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000
});

pool.on("error", function(err) {
  logger.error({ err: err }, "Unexpected PostgreSQL pool error");
});

function query(text, params) {
  return pool.query(text, params);
}

module.exports = { query: query, pool: pool };

11. Static Asset Serving and Caching

Let Nginx serve static files directly — it is dramatically faster than Node.js at serving static content. The Nginx config above handles this with the /static/ location block.

For assets served by Express (when Nginx is not in front), set proper caching headers:

var path = require("path");

app.use("/static", express.static(path.join(__dirname, "static"), {
  maxAge: "30d",
  etag: true,
  lastModified: true,
  immutable: true
}));

For cache-busting, append a version hash to asset URLs in your build process. This lets you set aggressive cache headers while ensuring users always get the latest version.
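
As a rough sketch of the idea (the helper name and the query-string approach are mine, not a standard API), you can derive a short content hash per file at startup and expose it to your templates; a real build pipeline would usually rename files rather than append a query string:

// assetVersion.js (hypothetical helper): appends a content hash for cache busting
var crypto = require("crypto");
var fs = require("fs");
var path = require("path");

var cache = {};

function versionedUrl(relativePath) {
  if (!cache[relativePath]) {
    var filePath = path.join(__dirname, "static", relativePath);
    var hash = crypto.createHash("md5").update(fs.readFileSync(filePath)).digest("hex");
    cache[relativePath] = "/static/" + relativePath + "?v=" + hash.slice(0, 8);
  }
  return cache[relativePath];
}

module.exports = versionedUrl;

// Usage sketch: app.locals.asset = require("./assetVersion");
// then reference asset("app.css") wherever your templates build static URLs.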


12. Security Headers and Hardening

Use the helmet middleware. It sets a collection of HTTP security headers with sensible defaults:

var helmet = require("helmet");

app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'", "https://cdn.jsdelivr.net"],
      styleSrc: ["'self'", "'unsafe-inline'", "https://fonts.googleapis.com"],
      imgSrc: ["'self'", "data:", "https:"],
      fontSrc: ["'self'", "https://fonts.gstatic.com"],
      connectSrc: ["'self'"]
    }
  },
  crossOriginEmbedderPolicy: false
}));

Additional hardening:

// Rate limiting
var rateLimit = require("express-rate-limit");

var limiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  standardHeaders: true,
  legacyHeaders: false,
  message: { error: "Too many requests. Please try again later." }
});

app.use("/api/", limiter);

// Disable X-Powered-By (helmet does this, but explicit is good)
app.disable("x-powered-by");

// Limit request body size
app.use(express.json({ limit: "1mb" }));
app.use(express.urlencoded({ extended: true, limit: "1mb" }));

13. Performance Baselines and Benchmarks

Before you deploy, establish performance baselines. Use autocannon for HTTP benchmarking:

npm install -g autocannon
autocannon -c 100 -d 30 http://localhost:3000/

Sample output:

Running 30s test @ http://localhost:3000/
100 connections

┌─────────┬──────┬──────┬───────┬──────┬─────────┬─────────┬───────┐
│ Stat    │ 2.5% │ 50%  │ 97.5% │ 99%  │ Avg     │ Stdev   │ Max   │
├─────────┼──────┼──────┼───────┼──────┼─────────┼─────────┼───────┤
│ Latency │ 2 ms │ 5 ms │ 18 ms │ 24ms │ 6.12 ms │ 4.89 ms │ 89 ms │
└─────────┴──────┴──────┴───────┴──────┴─────────┴─────────┴───────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬──────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg      │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼──────────┼─────────┼─────────┤
│ Req/Sec   │ 12,415  │ 13,207  │ 16,891  │ 18,303  │ 16,422   │ 1,489   │ 12,410  │
└───────────┴─────────┴─────────┴─────────┴─────────┴──────────┴─────────┴─────────┘

492k requests in 30s, 198 MB read

Record these numbers. After every deployment, run the same benchmark and compare. A sudden drop in throughput or spike in latency signals a regression.
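
One way to make the comparison mechanical is a small script around autocannon's programmatic API. This is a sketch; baseline.json and the 20% thresholds are assumptions to adjust for your service:

// benchmark.js (sketch): compare a fresh run against a stored baseline
var autocannon = require("autocannon");
var fs = require("fs");

autocannon({ url: "http://localhost:3000/", connections: 100, duration: 30 }, function(err, result) {
  if (err) throw err;

  var current = {
    avgReqPerSec: result.requests.average,
    p99LatencyMs: result.latency.p99
  };

  if (!fs.existsSync("baseline.json")) {
    fs.writeFileSync("baseline.json", JSON.stringify(current, null, 2));
    console.log("Baseline recorded:", current);
    return;
  }

  var baseline = JSON.parse(fs.readFileSync("baseline.json", "utf8"));

  // Flag a regression if throughput drops or p99 latency rises by more than 20%
  if (current.avgReqPerSec < baseline.avgReqPerSec * 0.8 ||
      current.p99LatencyMs > baseline.p99LatencyMs * 1.2) {
    console.error("Possible regression:", { baseline: baseline, current: current });
    process.exit(1);
  }

  console.log("Within baseline:", current);
});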


14. Backup Strategies

Database Backups

Automate daily backups and test restores regularly. A backup you have never restored is not a backup.

#!/bin/bash
# /opt/scripts/backup-mongo.sh
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/opt/backups/mongodb"
RETENTION_DAYS=30

mkdir -p "$BACKUP_DIR"

mongodump --uri="$DB_MONGO" --gzip --archive="$BACKUP_DIR/backup_$TIMESTAMP.gz"

# Remove backups older than retention period
find "$BACKUP_DIR" -name "*.gz" -mtime +$RETENTION_DAYS -delete

echo "Backup completed: backup_$TIMESTAMP.gz"

Schedule it with cron (crontab -e) to run daily at 3 AM:

0 3 * * * /opt/scripts/backup-mongo.sh >> /var/log/backup-mongo.log 2>&1

Application Backups

Store your deployment artifacts (or at minimum, track exact Git SHAs). If your release process builds assets, archive the built output alongside the commit hash:

git rev-parse HEAD > /opt/myapp/RELEASE

15. Rollback Procedures

You need the ability to roll back to the previous version in under two minutes. Symlink-based deployments make this straightforward:

#!/bin/bash
# deploy.sh
APP_DIR="/opt/myapp"
RELEASES_DIR="/opt/releases"
RELEASE_NAME="release_$(date +%Y%m%d_%H%M%S)"
CURRENT_LINK="$APP_DIR"

# Clone and build
git clone --depth 1 --branch "$1" git@github.com:org/myapp.git "$RELEASES_DIR/$RELEASE_NAME"
cd "$RELEASES_DIR/$RELEASE_NAME"
npm ci --production

# Swap symlink atomically
ln -sfn "$RELEASES_DIR/$RELEASE_NAME" "$CURRENT_LINK"

# Reload PM2
pm2 reload ecosystem.config.js --env production

# Keep only last 5 releases
ls -dt "$RELEASES_DIR"/release_* | tail -n +6 | xargs rm -rf

echo "Deployed $RELEASE_NAME"

To roll back:

#!/bin/bash
# rollback.sh
RELEASES_DIR="/opt/releases"
APP_DIR="/opt/myapp"

PREVIOUS=$(ls -dt "$RELEASES_DIR"/release_* | sed -n '2p')

if [ -z "$PREVIOUS" ]; then
  echo "No previous release found."
  exit 1
fi

ln -sfn "$PREVIOUS" "$APP_DIR"
pm2 reload ecosystem.config.js --env production

echo "Rolled back to $(basename $PREVIOUS)"

16. Zero-Downtime Deployment

PM2 cluster mode enables zero-downtime reloads. When you run pm2 reload, PM2 restarts workers one at a time, waiting for each new worker to signal readiness before killing the old one.

Your application must signal readiness:

var server = app.listen(port, function() {
  logger.info("Worker " + process.pid + " listening on port " + port);
  if (process.send) {
    process.send("ready");
  }
});

And your PM2 config must include:

{
  wait_ready: true,
  listen_timeout: 10000,
  kill_timeout: 5000
}

The deployment sequence:

cd /opt/myapp
git pull origin master
npm ci --production
pm2 reload ecosystem.config.js --env production

For more sophisticated setups, use blue-green deployments. Run two identical environments behind a load balancer. Deploy to the inactive environment, verify it passes health checks, then switch traffic.
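
The "verify, then switch" step is easy to script. A minimal sketch, assuming the inactive environment listens on port 3001 and exposes the same /health endpoint; run it as a gate before flipping the load balancer or Nginx upstream:

// verify-green.js (sketch): poll the candidate environment before switching traffic
var http = require("http");

var CANDIDATE_URL = "http://127.0.0.1:3001/health"; // assumed port for the inactive environment
var ATTEMPTS = 10;

function check(remaining) {
  http.get(CANDIDATE_URL, function(res) {
    res.resume(); // drain the response so the socket is released
    if (res.statusCode === 200) {
      console.log("Candidate environment healthy. Safe to switch traffic.");
      process.exit(0);
    }
    retry(remaining);
  }).on("error", function() { retry(remaining); });
}

function retry(remaining) {
  if (remaining <= 0) {
    console.error("Candidate environment never became healthy. Aborting switch.");
    process.exit(1);
  }
  setTimeout(function() { check(remaining - 1); }, 3000);
}

check(ATTEMPTS);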


17. The Complete Checklist

Before First Deployment

  • NODE_ENV set to production
  • All secrets stored as environment variables, not in code
  • Required environment variables validated at startup
  • .env file excluded from version control

Process Management

  • PM2 or systemd configured for automatic restarts
  • Cluster mode enabled (utilizing all CPU cores)
  • Memory limit configured for automatic restart on leak
  • Process manager starts on server boot

Networking & SSL

  • Nginx reverse proxy configured
  • SSL certificate installed and auto-renewing
  • HTTP-to-HTTPS redirect in place
  • trust proxy set if behind a proxy/load balancer
  • WebSocket upgrade headers configured (if applicable)

Logging & Monitoring

  • Structured logging with Pino or Winston
  • Log levels set appropriately (info in production, not debug)
  • Sensitive data redacted from logs
  • Log rotation configured (logrotate or pm2-logrotate)
  • Error tracking service (Sentry) integrated
  • Unhandled rejection and uncaught exception handlers installed
  • Health check endpoint returning database and memory status

Security

  • Helmet middleware enabled with CSP configured
  • Rate limiting on API endpoints
  • Request body size limited
  • X-Powered-By header disabled
  • CORS configured correctly (not wildcard in production)
  • Dependencies audited (npm audit)

Performance

  • Database connection pooling configured
  • Static assets served by Nginx with cache headers
  • Gzip compression enabled at Nginx level
  • Performance baseline established with benchmarks
  • npm ci --production used (no devDependencies)

Reliability

  • Graceful shutdown handles SIGTERM and SIGINT
  • Database connections closed during shutdown
  • In-flight requests complete before process exits
  • Automated database backups with tested restore procedure
  • Rollback procedure documented and tested
  • Zero-downtime deployment verified

Complete Working Example

Here is a production-ready Express application incorporating the application-level checklist items discussed above:

// app.js — Production-ready Node.js application
require("dotenv").config();

var express = require("express");
var helmet = require("helmet");
var rateLimit = require("express-rate-limit");
var mongoose = require("mongoose");
var Sentry = require("@sentry/node");
var pino = require("pino");
var pinoHttp = require("pino-http");
var os = require("os");

// ─── Validate environment ─────────────────────────────────
var requiredVars = ["DB_MONGO", "SESSION_SECRET"];
requiredVars.forEach(function(name) {
  if (!process.env[name]) {
    console.error("FATAL: Missing required env var: " + name);
    process.exit(1);
  }
});

// ─── Logger ───────────────────────────────────────────────
var logger = pino({
  level: process.env.LOG_LEVEL || "info",
  timestamp: pino.stdTimeFunctions.isoTime,
  redact: {
    paths: ["req.headers.authorization", "req.headers.cookie"],
    censor: "[REDACTED]"
  }
});

// ─── Sentry ───────────────────────────────────────────────
if (process.env.SENTRY_DSN) {
  Sentry.init({
    dsn: process.env.SENTRY_DSN,
    environment: process.env.NODE_ENV || "development",
    tracesSampleRate: 0.1
  });
}

// ─── Express app ──────────────────────────────────────────
var app = express();
var port = process.env.PORT || 3000;

app.set("trust proxy", 1);
app.disable("x-powered-by");

// Sentry request handlers (must be first)
if (process.env.SENTRY_DSN) {
  app.use(Sentry.Handlers.requestHandler());
}

// Security middleware
app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'"],
      styleSrc: ["'self'", "'unsafe-inline'"]
    }
  }
}));

// Rate limiting
app.use("/api/", rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  standardHeaders: true,
  legacyHeaders: false
}));

// Body parsing with size limits
app.use(express.json({ limit: "1mb" }));
app.use(express.urlencoded({ extended: true, limit: "1mb" }));

// Request logging (skip health checks)
app.use(pinoHttp({
  logger: logger,
  autoLogging: {
    ignore: function(req) { return req.url === "/health"; }
  }
}));

// Static files with caching
var path = require("path");
app.use("/static", express.static(path.join(__dirname, "static"), {
  maxAge: "30d",
  etag: true,
  immutable: true
}));

// ─── Health check ─────────────────────────────────────────
app.get("/health", function(req, res) {
  var dbState = mongoose.connection.readyState;
  var mem = process.memoryUsage();
  var status = dbState === 1 ? "ok" : "degraded";

  res.status(status === "ok" ? 200 : 503).json({
    status: status,
    uptime: process.uptime(),
    timestamp: new Date().toISOString(),
    checks: {
      database: dbState === 1 ? "connected" : "disconnected",
      memory: {
        heapUsedMB: Math.round(mem.heapUsed / 1024 / 1024),
        rssMB: Math.round(mem.rss / 1024 / 1024)
      }
    }
  });
});

// ─── Application routes ───────────────────────────────────
app.get("/", function(req, res) {
  res.json({ message: "Application running", version: require("./package.json").version });
});

app.get("/api/example", function(req, res) {
  res.json({ data: "Hello from production" });
});

// ─── Error handling ───────────────────────────────────────
if (process.env.SENTRY_DSN) {
  app.use(Sentry.Handlers.errorHandler());
}

app.use(function(err, req, res, next) {
  logger.error({ err: err, url: req.url, method: req.method }, "Unhandled error");
  res.status(500).json({ error: "Internal server error" });
});

// ─── Database connection ──────────────────────────────────
mongoose.connect(process.env.DB_MONGO, {
  maxPoolSize: 20,
  minPoolSize: 5,
  serverSelectionTimeoutMS: 5000
});

mongoose.connection.on("connected", function() {
  logger.info("MongoDB connected");
});

mongoose.connection.on("error", function(err) {
  logger.error({ err: err }, "MongoDB connection error");
});

// ─── Start server ─────────────────────────────────────────
var server = app.listen(port, function() {
  logger.info("Server listening on port " + port + " (PID: " + process.pid + ")");
  if (process.send) {
    process.send("ready");
  }
});

// ─── Metrics logging ─────────────────────────────────────
var metricsInterval = setInterval(function() {
  var mem = process.memoryUsage();
  logger.info({
    type: "metrics",
    heapUsedMB: Math.round(mem.heapUsed / 1024 / 1024),
    rssMB: Math.round(mem.rss / 1024 / 1024),
    loadAvg: os.loadavg(),
    uptime: process.uptime()
  }, "Periodic metrics");
}, 60000);

// ─── Graceful shutdown ────────────────────────────────────
function gracefulShutdown(signal) {
  logger.info("Received " + signal + ". Shutting down gracefully...");

  clearInterval(metricsInterval);

  server.close(function() {
    logger.info("HTTP server closed");

    mongoose.connection.close(false, function() {
      logger.info("Database connections closed");
      process.exit(0);
    });
  });

  setTimeout(function() {
    logger.error("Shutdown timed out. Forcing exit.");
    process.exit(1);
  }, 10000);
}

process.on("SIGTERM", function() { gracefulShutdown("SIGTERM"); });
process.on("SIGINT", function() { gracefulShutdown("SIGINT"); });

process.on("unhandledRejection", function(reason) {
  logger.error({ reason: reason }, "Unhandled rejection");
  if (process.env.SENTRY_DSN) Sentry.captureException(reason);
});

process.on("uncaughtException", function(err) {
  logger.fatal({ err: err }, "Uncaught exception");
  if (process.env.SENTRY_DSN) Sentry.captureException(err);
  setTimeout(function() { process.exit(1); }, 2000);
});

module.exports = app;

Corresponding Nginx configuration for this application:

upstream node_app {
    server 127.0.0.1:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    add_header Strict-Transport-Security "max-age=63072000" always;

    location /static/ {
        alias /opt/myapp/static/;
        expires 30d;
        access_log off;
    }

    location /health {
        proxy_pass http://node_app;
        access_log off;
    }

    location / {
        proxy_pass http://node_app;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml;
}

Common Issues and Troubleshooting

1. EADDRINUSE — Port Already in Use

Error: listen EADDRINUSE: address already in use :::3000
    at Server.setupListenHandle [as _setupListenHandle] (node:net:1740:16)

Another process is already bound to the port. Find and kill it:

lsof -i :3000
kill -9 <PID>

In production, this typically means a previous instance did not shut down cleanly. Check that your graceful shutdown handler closes the server before exiting, and ensure PM2's kill_timeout is long enough for your shutdown logic to complete.

2. ENOMEM — JavaScript Heap Out of Memory

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: 0xb090e0 node::Abort() [node]
 2: 0xa1b70e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]

Your application has a memory leak or is processing data sets too large for the default heap. Increase the heap size as a stopgap, then find the leak:

node --max-old-space-size=2048 app.js

To diagnose, take heap snapshots in development using --inspect and Chrome DevTools, or use clinic.js:

npx clinic doctor -- node app.js
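
When attaching DevTools to a production process is not practical, one option (a sketch using Node's built-in v8 module, available since Node 11.13) is to dump a heap snapshot on demand and load it into DevTools later:

// Write a heap snapshot when the process receives SIGUSR2 (trigger with: kill -USR2 <pid>).
// Snapshots are roughly the size of the heap; make sure the disk can absorb them.
var v8 = require("v8");

process.on("SIGUSR2", function() {
  var file = v8.writeHeapSnapshot(); // returns the generated .heapsnapshot filename
  console.log("Heap snapshot written to " + file);
});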

3. ECONNREFUSED — Database Connection Failure

MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
    at Timeout._onTimeout (/opt/myapp/node_modules/mongodb/lib/sdam/topology.js:293:38)

The database server is unreachable. Verify the database is running, the connection string is correct, and firewall rules allow the connection. In cloud environments, check that security groups permit traffic on the database port. Also verify that serverSelectionTimeoutMS is set so the app fails fast rather than hanging indefinitely.

4. 502 Bad Gateway from Nginx

502 Bad Gateway
nginx/1.18.0

Nginx cannot reach your Node.js backend. Check that:

# Is the Node app running?
pm2 status

# Is it listening on the expected port?
curl -v http://127.0.0.1:3000/health

# Check Nginx error log for details
tail -50 /var/log/nginx/error.log

Common causes: the Node process crashed and PM2 is still restarting it, the port in your Nginx config does not match the port your app listens on, or the upstream is configured with the wrong address. The Nginx error log will contain a line like connect() failed (111: Connection refused) which confirms the backend is unreachable.

5. EMFILE — Too Many Open Files

Error: EMFILE, too many open files '/opt/myapp/static/image.png'

The OS file descriptor limit is too low for your traffic volume. Increase it:

# Check current limit
ulimit -n

# Increase temporarily
ulimit -n 65536

# Permanent: add to /etc/security/limits.conf
deploy soft nofile 65536
deploy hard nofile 65536

Best Practices

  • Never run Node.js as root. Create a dedicated deploy user with minimal privileges. If you need port 80/443, use Nginx as a reverse proxy rather than running Node as root.

  • Use npm ci instead of npm install in production. It installs from the lockfile deterministically, is faster, and fails if package-lock.json is out of sync with package.json. Always include --production to skip devDependencies.

  • Pin your Node.js version. Use .nvmrc or .node-version files and enforce the version in your CI pipeline. A minor version difference can introduce subtle behavior changes that are painful to debug in production.

  • Run npm audit in your CI pipeline and block deployments with critical vulnerabilities. Do not let known vulnerable dependencies reach production. Schedule weekly audits even outside deployments.

  • Set up alerting, not just monitoring. Dashboards that nobody watches are useless. Configure alerts for health check failures, error rate spikes, memory threshold breaches, and response time degradation. PagerDuty, Opsgenie, or even a Slack webhook connected to your monitoring system is enough to start.

  • Test your rollback procedure before you need it. Do a rollback drill at least once per quarter. If the first time you try to rollback is during an outage at 3 AM, you will discover all the steps you forgot to document.

  • Keep deployments small and frequent. Large releases with dozens of changes make it impossible to identify which change caused a regression. Deploy often, deploy small, and you will always know exactly what changed.

  • Separate build and runtime. Run npm ci --production and any build steps in a staging environment or CI pipeline. The production server should receive a pre-built artifact, not run build tools.

