Node.js Performance Optimization Techniques

A practical guide to Node.js performance optimization covering event loop management, memory optimization, caching strategies, database query tuning, and profiling tools.

Node.js is fast for I/O-bound workloads — handling HTTP requests, database queries, and file operations. It uses a single-threaded event loop that handles thousands of concurrent connections without the overhead of thread management. But that single thread is both a strength and a constraint. One CPU-intensive operation blocks all other requests. One memory leak gradually degrades the entire process.

Performance optimization in Node.js means keeping the event loop unblocked, using memory efficiently, reducing unnecessary work through caching, and understanding where your application spends its time. This guide covers the techniques that have the highest impact in production.

Understanding the Event Loop

How It Works

Node.js runs JavaScript on a single thread. When your code makes an asynchronous call (database query, file read, HTTP request), Node.js delegates the work to the operating system or a thread pool, then continues executing other code. When the async operation completes, its callback is placed on the event loop queue and executed when the thread is free.

// These two queries run concurrently, not sequentially
var startTime = Date.now();

db.query("SELECT * FROM users") // Starts immediately
  .then(function(users) {
    console.log("Users fetched in " + (Date.now() - startTime) + "ms");
  });

db.query("SELECT * FROM orders") // Also starts immediately
  .then(function(orders) {
    console.log("Orders fetched in " + (Date.now() - startTime) + "ms");
  });

Both queries are dispatched to the database driver simultaneously. The event loop is free to handle other requests while waiting.

Blocking the Event Loop

Synchronous, CPU-intensive operations block the event loop. While the thread computes, no other requests are processed:

// BAD — blocks the event loop for all users
app.get("/api/report", function(req, res) {
  var data = generateLargeReport(); // 500ms of CPU work
  res.json(data); // All other requests waited 500ms
});

// BETTER — break work into chunks
app.get("/api/report", function(req, res) {
  var items = getLargeDataset();
  var results = [];
  var index = 0;
  var chunkSize = 100;

  function processChunk() {
    var end = Math.min(index + chunkSize, items.length);

    for (var i = index; i < end; i++) {
      results.push(transformItem(items[i]));
    }

    index = end;

    if (index < items.length) {
      setImmediate(processChunk); // Yield to the event loop between chunks
    } else {
      res.json(results);
    }
  }

  processChunk();
});

setImmediate places the next chunk on the event loop queue, allowing other requests to be processed between chunks.

Event Loop Monitoring

// Monitor event loop lag
var lastCheck = Date.now();

setInterval(function() {
  var now = Date.now();
  var lag = now - lastCheck - 1000;
  lastCheck = now;

  if (lag > 100) {
    console.warn("Event loop lag: " + lag + "ms");
  }
}, 1000);

Consistent lag above 50-100ms indicates the event loop is being blocked. Check for synchronous operations, large JSON parsing, or CPU-intensive computations.

Memory Optimization

Understanding Memory Usage

function logMemory() {
  var usage = process.memoryUsage();
  console.log({
    rss: Math.round(usage.rss / 1024 / 1024) + " MB",       // Total allocated
    heapTotal: Math.round(usage.heapTotal / 1024 / 1024) + " MB", // V8 heap total
    heapUsed: Math.round(usage.heapUsed / 1024 / 1024) + " MB",   // V8 heap used
    external: Math.round(usage.external / 1024 / 1024) + " MB"    // C++ objects
  });
}

setInterval(logMemory, 30000);

  • rss — Resident Set Size. Total memory allocated to the process.
  • heapTotal — V8's heap size. Grows as objects are created.
  • heapUsed — Memory actively used by JavaScript objects.
  • external — Memory used by C++ objects bound to JavaScript (Buffers, etc.).

Avoiding Memory Leaks

Unbounded caches:

// BAD — cache grows without limit
var cache = {};

app.get("/api/users/:id", function(req, res) {
  var id = req.params.id;

  if (cache[id]) {
    return res.json(cache[id]);
  }

  db.query("SELECT * FROM users WHERE id = $1", [id])
    .then(function(result) {
      cache[id] = result.rows[0]; // Never evicted
      res.json(result.rows[0]);
    });
});

// GOOD — cache with size limit and TTL
function createCache(maxSize, ttlMs) {
  var entries = {};
  var keys = [];

  return {
    get: function(key) {
      var entry = entries[key];
      if (!entry) return null;
      if (Date.now() - entry.time > ttlMs) {
        delete entries[key];
        keys.splice(keys.indexOf(key), 1);
        return null;
      }
      return entry.value;
    },
    set: function(key, value) {
      if (entries[key]) {
        // Refresh in place so the key isn't duplicated in the eviction order
        entries[key] = { value: value, time: Date.now() };
        return;
      }
      if (keys.length >= maxSize) {
        var oldest = keys.shift();
        delete entries[oldest];
      }
      entries[key] = { value: value, time: Date.now() };
      keys.push(key);
    }
  };
}

var userCache = createCache(1000, 5 * 60 * 1000); // 1000 entries, 5 min TTL
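createCache evicts in insertion (FIFO) order, so a hot key can be evicted while cold keys survive. A Map preserves insertion order and can be used to build a true LRU cache in a few lines; createLRUCache is a sketch, not a drop-in replacement (it has no TTL):

```javascript
// LRU via Map insertion order: get() re-inserts the key, so the least
// recently *used* entry, not the oldest-inserted one, is evicted first.
function createLRUCache(maxSize) {
  var map = new Map();

  return {
    get: function(key) {
      if (!map.has(key)) return null;
      var value = map.get(key);
      map.delete(key); // move the key to the most-recent position
      map.set(key, value);
      return value;
    },
    set: function(key, value) {
      if (map.has(key)) {
        map.delete(key);
      } else if (map.size >= maxSize) {
        map.delete(map.keys().next().value); // evict least recently used
      }
      map.set(key, value);
    }
  };
}
```

In production, a battle-tested package is usually a better choice than hand-rolling this, but the principle is the same.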

Event listener accumulation:

// BAD — adds a new listener on every request
app.get("/api/stream", function(req, res) {
  someEmitter.on("data", function(data) {
    res.write(JSON.stringify(data));
  });
});

// GOOD — remove listener when done
app.get("/api/stream", function(req, res) {
  function onData(data) {
    res.write(JSON.stringify(data));
  }

  someEmitter.on("data", onData);

  req.on("close", function() {
    someEmitter.removeListener("data", onData);
  });
});

Closures holding references:

// BAD — closure holds reference to large data
function processData(largeDataset) {
  var results = transform(largeDataset);

  return function getResults() {
    return results;
    // largeDataset is also held in memory by the closure
  };
}

// GOOD — nullify references when done
function processData(largeDataset) {
  var results = transform(largeDataset);
  largeDataset = null; // Allow garbage collection

  return function getResults() {
    return results;
  };
}
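A related option when attaching metadata to objects you don't own (requests, sockets): a WeakMap holds its keys weakly, so entries disappear automatically when the key object is garbage collected and cannot accumulate into a leak. The requestMeta name and shape here are illustrative:

```javascript
// WeakMap keys don't prevent garbage collection, so per-object metadata
// vanishes when the object itself does
var requestMeta = new WeakMap();

function tagRequest(req) {
  requestMeta.set(req, { startedAt: Date.now() });
}

function getRequestMeta(req) {
  return requestMeta.get(req) || null;
}
```

The trade-off: WeakMaps are not iterable and have no size, so they suit per-object bookkeeping, not general caching.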

Caching Strategies

In-Memory Caching

For frequently accessed data that changes infrequently:

// Simple TTL cache
function createTTLCache(defaultTTL) {
  var store = {};

  function get(key) {
    var entry = store[key];
    if (!entry) return null;
    if (Date.now() > entry.expires) {
      delete store[key];
      return null;
    }
    return entry.value;
  }

  function set(key, value, ttl) {
    store[key] = {
      value: value,
      expires: Date.now() + (ttl || defaultTTL)
    };
  }

  function del(key) {
    delete store[key];
  }

  // Periodic cleanup
  setInterval(function() {
    var now = Date.now();
    Object.keys(store).forEach(function(key) {
      if (now > store[key].expires) {
        delete store[key];
      }
    });
  }, 60000);

  return { get: get, set: set, del: del };
}

var cache = createTTLCache(300000); // 5 minute default TTL
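A small helper makes the cache-aside pattern (check the cache, fall back to a loader, store the result) reusable across routes. The cached() helper is illustrative, assuming any cache with the get/set shape of createTTLCache above:

```javascript
// Cache-aside: serve from cache when possible, otherwise run the loader
// and store its result. `loader` is any function returning a Promise
// (for example, a db.query call).
function cached(cache, key, loader, ttl) {
  var hit = cache.get(key);
  if (hit !== null) {
    return Promise.resolve(hit);
  }
  return loader().then(function(value) {
    cache.set(key, value, ttl);
    return value;
  });
}
```

Routes then call cached(cache, "user:" + id, function() { return db.query(...); }) instead of repeating the check-then-store logic in every handler.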

Response Caching Middleware

function cacheResponse(ttlSeconds) {
  var cache = {};

  return function(req, res, next) {
    var key = req.originalUrl;
    var cached = cache[key];

    if (cached && Date.now() < cached.expires) {
      res.setHeader("X-Cache", "HIT");
      return res.json(cached.data);
    }

    // Override res.json to capture the response
    var originalJson = res.json.bind(res);
    res.json = function(data) {
      cache[key] = {
        data: data,
        expires: Date.now() + (ttlSeconds * 1000)
      };
      res.setHeader("X-Cache", "MISS");
      return originalJson(data);
    };

    next();
  };
}

// Cache article listing for 5 minutes
app.get("/api/articles", cacheResponse(300), function(req, res) {
  db.query("SELECT * FROM articles WHERE status = 'published' ORDER BY created_at DESC LIMIT 20")
    .then(function(result) {
      res.json(result.rows);
    });
});

HTTP Cache Headers

Let browsers and CDNs cache responses:

// Static data — cache for 1 day
app.get("/api/categories", function(req, res) {
  res.set("Cache-Control", "public, max-age=86400");
  res.json(categories);
});

// User-specific data — no caching
app.get("/api/profile", authenticate, function(req, res) {
  res.set("Cache-Control", "private, no-cache");
  res.json(req.user);
});

// Conditional caching with ETag
var crypto = require("crypto");

app.get("/api/config", function(req, res) {
  var data = JSON.stringify(appConfig);
  var etag = crypto.createHash("md5").update(data).digest("hex");

  if (req.headers["if-none-match"] === etag) {
    return res.status(304).end(); // Not Modified
  }

  res.set("ETag", etag);
  res.set("Cache-Control", "public, max-age=3600");
  res.json(appConfig);
});

Database Optimization

Connection Pooling

var { Pool } = require("pg");

var pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20,                    // Maximum connections
  idleTimeoutMillis: 30000,   // Close idle connections after 30s
  connectionTimeoutMillis: 5000 // Fail if connection takes > 5s
});

// Monitor pool health
setInterval(function() {
  if (pool.waitingCount > 0) {
    console.warn("Database pool exhausted", {
      total: pool.totalCount,
      idle: pool.idleCount,
      waiting: pool.waitingCount
    });
  }
}, 10000);
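Pooling also matters for multi-statement work: a transaction must run on a single checked-out client, and that client must always be released, because a leaked client permanently shrinks the pool. A sketch using the pool above, with a hypothetical transferCredits operation on an accounts table:

```javascript
// All statements of the transaction run on one client; release() in both
// the success and failure paths returns it to the pool.
function transferCredits(fromId, toId, amount) {
  return pool.connect().then(function(client) {
    return client.query("BEGIN")
      .then(function() {
        return client.query(
          "UPDATE accounts SET credits = credits - $1 WHERE id = $2",
          [amount, fromId]
        );
      })
      .then(function() {
        return client.query(
          "UPDATE accounts SET credits = credits + $1 WHERE id = $2",
          [amount, toId]
        );
      })
      .then(function() {
        return client.query("COMMIT");
      })
      .catch(function(err) {
        return client.query("ROLLBACK").then(function() { throw err; });
      })
      .then(
        function(result) {
          client.release(); // a leaked client shrinks the pool for good
          return result;
        },
        function(err) {
          client.release();
          throw err;
        }
      );
  });
}
```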

Query Optimization

// BAD — fetches all columns when only a few are needed
db.query("SELECT * FROM articles WHERE status = 'published' ORDER BY created_at DESC LIMIT 20");

// GOOD — fetch only needed columns
db.query("SELECT id, title, slug, synopsis, created_at FROM articles WHERE status = 'published' ORDER BY created_at DESC LIMIT 20");

// BAD — N+1 query pattern
function getArticlesWithAuthors() {
  return db.query("SELECT * FROM articles LIMIT 20")
    .then(function(articles) {
      return Promise.all(articles.rows.map(function(article) {
        return db.query("SELECT name FROM users WHERE id = $1", [article.author_id])
          .then(function(user) {
            article.author = user.rows[0];
            return article;
          });
      }));
    });
}

// GOOD — single query with JOIN
function getArticlesWithAuthors() {
  return db.query(
    "SELECT a.id, a.title, a.slug, a.synopsis, a.created_at, u.name as author_name " +
    "FROM articles a " +
    "JOIN users u ON a.author_id = u.id " +
    "WHERE a.status = 'published' " +
    "ORDER BY a.created_at DESC " +
    "LIMIT 20"
  );
}

Prepared Statements

// Prepared statements are cached by the database, improving performance
// for repeated queries. The pg library handles this automatically when
// you pass a name:

function getArticleBySlug(slug) {
  return pool.query({
    name: "get-article-by-slug",
    text: "SELECT * FROM articles WHERE slug = $1",
    values: [slug]
  });
}

JSON Optimization

Avoid Large JSON Serialization

// BAD — serializing a huge object blocks the event loop
app.get("/api/export", function(req, res) {
  db.query("SELECT * FROM events") // 100,000 rows
    .then(function(result) {
      res.json(result.rows); // JSON.stringify blocks for seconds
    });
});

// GOOD — stream JSON output in batches with a cursor
var Cursor = require("pg-cursor"); // npm install pg-cursor

app.get("/api/export", function(req, res) {
  res.setHeader("Content-Type", "application/json");
  res.write("[");

  var cursor = db.query(new Cursor("SELECT * FROM events"));
  var first = true;

  function readBatch() {
    cursor.read(100, function(err, rows) {
      if (err) {
        console.error("Cursor read failed:", err);
        res.end("]");
        return;
      }

      if (rows.length === 0) {
        res.end("]");
        return;
      }

      rows.forEach(function(row) {
        if (!first) res.write(",");
        res.write(JSON.stringify(row));
        first = false;
      });

      setImmediate(readBatch);
    });
  }

  readBatch();
});

Selective Serialization

// Only include fields the client needs
function serializeArticle(article) {
  return {
    id: article.id,
    title: article.title,
    slug: article.slug,
    synopsis: article.synopsis,
    publishedAt: article.published_at
  };
}

app.get("/api/articles", function(req, res) {
  db.query("SELECT id, title, slug, synopsis, published_at FROM articles LIMIT 20")
    .then(function(result) {
      res.json(result.rows.map(serializeArticle));
    });
});

Compression

// npm install compression
var compression = require("compression");

app.use(compression({
  level: 6,           // Compression level (0-9, default 6)
  threshold: 1024,    // Only compress responses > 1KB
  filter: function(req, res) {
    // Don't compress if the client doesn't support it
    if (req.headers["x-no-compression"]) return false;
    return compression.filter(req, res);
  }
}));

Compression reduces response sizes by 60-80% for text-based content. The CPU cost is minimal for level 6 and the bandwidth savings are significant.

Parallel Operations

Promise.all for Independent Queries

// SLOW — sequential queries (200ms + 150ms + 100ms = 450ms total)
app.get("/api/dashboard", function(req, res) {
  var data = {};

  db.query("SELECT count(*) FROM users")
    .then(function(result) {
      data.userCount = result.rows[0].count;
      return db.query("SELECT count(*) FROM articles WHERE status = 'published'");
    })
    .then(function(result) {
      data.articleCount = result.rows[0].count;
      return db.query("SELECT count(*) FROM orders WHERE created_at > NOW() - INTERVAL '24 hours'");
    })
    .then(function(result) {
      data.recentOrders = result.rows[0].count;
      res.json(data);
    });
});

// FAST — parallel queries (max(200ms, 150ms, 100ms) = 200ms total)
app.get("/api/dashboard", function(req, res) {
  Promise.all([
    db.query("SELECT count(*) FROM users"),
    db.query("SELECT count(*) FROM articles WHERE status = 'published'"),
    db.query("SELECT count(*) FROM orders WHERE created_at > NOW() - INTERVAL '24 hours'")
  ]).then(function(results) {
    res.json({
      userCount: results[0].rows[0].count,
      articleCount: results[1].rows[0].count,
      recentOrders: results[2].rows[0].count
    });
  });
});
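One caveat: Promise.all rejects as soon as any query fails, which fails the whole dashboard. When partial data is acceptable, Promise.allSettled (Node 12.9+) reports every outcome instead of failing fast; getDashboardStats here is an illustrative wrapper:

```javascript
// Collect every outcome — fulfilled results and failures alike — so one
// failed query doesn't discard the others
function getDashboardStats(queries) {
  return Promise.allSettled(queries).then(function(results) {
    return results.map(function(r) {
      return r.status === "fulfilled"
        ? { ok: true, value: r.value }
        : { ok: false, error: String(r.reason) };
    });
  });
}
```

The handler can then render the stats that succeeded and show a placeholder for the one that did not.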

Profiling

CPU Profiling

# Start the application with the inspector
node --inspect server.js

# Open Chrome DevTools
# Navigate to chrome://inspect
# Click "inspect" on your Node.js process
# Go to the "Profiler" tab
# Click "Start" to record
# Generate some traffic, then click "Stop"

Heap Profiling

// Take a heap snapshot programmatically
var v8 = require("v8");
var fs = require("fs");

function takeHeapSnapshot() {
  var filename = v8.writeHeapSnapshot(); // returns the path of the file it wrote
  console.log("Heap snapshot written to:", filename);
}

// Trigger via API endpoint (admin only)
app.post("/admin/heap-snapshot", authorize("admin"), function(req, res) {
  var filename = v8.writeHeapSnapshot();
  res.json({ file: filename });
});

Load heap snapshots in Chrome DevTools to identify what is consuming memory.

Simple Request Timing

function timer(label) {
  var start = process.hrtime.bigint();

  return {
    end: function() {
      var end = process.hrtime.bigint();
      var ms = Number(end - start) / 1000000;
      console.log(label + ": " + ms.toFixed(2) + "ms");
      return ms;
    }
  };
}

app.get("/api/articles", function(req, res) {
  var t1 = timer("db_query");
  db.query("SELECT * FROM articles LIMIT 20")
    .then(function(result) {
      t1.end();
      var t2 = timer("serialization");
      var data = result.rows.map(serializeArticle);
      t2.end();
      res.json(data);
    });
});

Common Issues and Troubleshooting

Response times increase over time

Memory leak causes more frequent garbage collection:

Fix: Monitor heap usage. Take heap snapshots at different times and compare. Look for objects that grow in count over time. Common culprits: unbounded caches, event listener accumulation, closures holding large references.

All requests slow down simultaneously

The event loop is blocked by synchronous code:

Fix: Monitor event loop lag. Search for synchronous file reads (readFileSync), large JSON parsing, CPU-intensive loops, or synchronous cryptographic operations. Move heavy computation to worker threads.

Database queries are slow despite indexes

Connection pool exhaustion or missing ANALYZE:

Fix: Monitor pool statistics (waitingCount > 0 means pool is full). Run ANALYZE after data changes. Check EXPLAIN ANALYZE output for the slow query. Verify the connection pool max is appropriate for your workload.

Memory usage climbs steadily

A memory leak is present:

Fix: Use --inspect and Chrome DevTools heap profiler. Take two heap snapshots minutes apart and compare. Look for growing object counts. Common leaks: unclosed database connections, unremoved event listeners, growing caches.

Best Practices

  • Never block the event loop. Move CPU-intensive work to worker threads or break it into chunks with setImmediate. A blocked event loop affects every concurrent request.
  • Cache at every level. Database query results, computed values, and HTTP responses should all have appropriate caching. The fastest request is one that never reaches the database.
  • Use connection pooling for all external services. Database connections, Redis connections, and HTTP keep-alive connections reduce the overhead of establishing new connections.
  • Run queries in parallel when possible. Promise.all runs independent database queries concurrently. Sequential queries that do not depend on each other waste time.
  • Profile before optimizing. Use the Node.js inspector and Chrome DevTools to identify actual bottlenecks. Optimizing the wrong thing wastes time and adds complexity.
  • Set appropriate memory limits. Use --max-old-space-size to match your server's RAM. PM2's max_memory_restart catches leaks before they crash the process.
  • Compress HTTP responses. The compression middleware reduces response sizes by 60-80% with minimal CPU overhead.
  • Select only the columns you need. SELECT * transfers unnecessary data over the network and through the connection pool. Select specific columns.
