Node.js File System Operations: Async Patterns
A comprehensive guide to Node.js file system operations covering async patterns, streams, directory management, file watching, and cross-platform file handling.
Overview
File system operations are one of the most common tasks in any Node.js application, yet they are also one of the most commonly mishandled. I have seen production systems brought down by synchronous file reads blocking the event loop, data corruption from non-atomic writes, and memory exhaustion from loading multi-gigabyte files into a single buffer. These are not edge cases — they are the default outcome when developers treat the fs module casually.
Node.js provides three distinct APIs for file system access: the original callback-based API, synchronous methods, and the modern promise-based API. Each has its place, but for server-side applications handling concurrent requests, async patterns are not optional — they are mandatory. This article covers every major file system operation you will encounter in production, with working code, real-world patterns, and the hard-won opinions that come from a decade of building Node.js systems.
Prerequisites
- Node.js v14 or later (v16+ recommended for full fs/promises support)
- Basic understanding of Node.js streams and event emitters
- Familiarity with Promises and async/await patterns
- A working Node.js development environment
The fs Module: Three APIs
The fs module ships with Node.js core and offers three flavors of every operation:
var fs = require("fs");
var fsPromises = require("fs/promises");
// 1. Callback API (original)
fs.readFile("/tmp/data.txt", "utf8", function(err, data) {
if (err) throw err;
console.log(data);
});
// 2. Synchronous API (blocks the event loop)
var data = fs.readFileSync("/tmp/data.txt", "utf8");
console.log(data);
// 3. Promise API (modern, recommended)
fsPromises.readFile("/tmp/data.txt", "utf8").then(function(data) {
console.log(data);
});
My recommendation: Use fs/promises for virtually everything. The callback API leads to deeply nested code. The synchronous API blocks the event loop — acceptable during startup (loading config files) but never during request handling. The promise API works naturally with async/await and integrates cleanly with modern error handling patterns.
fs.promises vs util.promisify
Before Node.js v10 introduced fs/promises, the standard approach was wrapping callback functions with util.promisify:
var util = require("util");
var fs = require("fs");
var readFile = util.promisify(fs.readFile);
var writeFile = util.promisify(fs.writeFile);
async function loadConfig() {
var raw = await readFile("./config.json", "utf8");
return JSON.parse(raw);
}
This still works and you will see it in older codebases. However, require("fs/promises") is the cleaner approach now. The returned functions are identical in behavior — both return native Promises — but fs/promises avoids the boilerplate and gives you the full API surface without wrapping each function individually.
One gotcha: fs/promises was not available as a standalone import until Node.js v14. If you are maintaining a legacy codebase on Node.js v12, stick with util.promisify.
Reading Files
readFile — Small to Medium Files
For files that fit comfortably in memory (configuration, templates, small datasets), readFile is the right choice:
var fsPromises = require("fs/promises");
async function readJSON(filePath) {
try {
var content = await fsPromises.readFile(filePath, "utf8");
return JSON.parse(content);
} catch (err) {
if (err.code === "ENOENT") {
console.error("File not found:", filePath);
return null;
}
throw err;
}
}
Always specify the encoding ("utf8") when reading text files. Without it, readFile returns a raw Buffer, which is useful for binary data but requires an explicit .toString() call for text.
createReadStream — Large Files
For files larger than a few megabytes, streaming is not a suggestion — it is a requirement. readFile loads the entire file into memory. A 2 GB log file will consume 2 GB of memory, likely crashing your process.
var fs = require("fs");
function processLargeFile(filePath) {
return new Promise(function(resolve, reject) {
var lineCount = 0;
var stream = fs.createReadStream(filePath, { encoding: "utf8", highWaterMark: 64 * 1024 });
stream.on("data", function(chunk) {
lineCount += chunk.split("\n").length - 1;
});
stream.on("end", function() {
resolve(lineCount);
});
stream.on("error", function(err) {
reject(err);
});
});
}
The highWaterMark option controls the internal buffer size (default is 64 KB). Increase it for sequential reads where throughput matters; decrease it for memory-constrained environments.
readline — Line-by-Line Processing
When you need to process a file line by line — log parsing, CSV processing, configuration files with comments — the readline module pairs well with streams:
var fs = require("fs");
var readline = require("readline");
async function parseCSV(filePath) {
var results = [];
var input = fs.createReadStream(filePath, "utf8");
var rl = readline.createInterface({ input: input, crlfDelay: Infinity });
var lineNumber = 0;
var headers = null;
for await (var line of rl) {
lineNumber++;
if (lineNumber === 1) {
headers = line.split(",");
continue;
}
var values = line.split(",");
var row = {};
headers.forEach(function(header, i) {
row[header.trim()] = values[i] ? values[i].trim() : "";
});
results.push(row);
}
return results;
}
The crlfDelay: Infinity option treats \r\n as a single line break, which is critical for cross-platform compatibility — CSV files from Windows will have \r\n endings.
Writing Files
writeFile — Simple Writes
var fsPromises = require("fs/promises");
async function saveJSON(filePath, data) {
var content = JSON.stringify(data, null, 2);
await fsPromises.writeFile(filePath, content, "utf8");
}
By default, writeFile creates the file if it does not exist and overwrites it if it does. Use the flag option to control behavior:
// Append instead of overwrite
await fsPromises.writeFile(filePath, data, { encoding: "utf8", flag: "a" });
// Fail if file already exists (exclusive creation)
await fsPromises.writeFile(filePath, data, { encoding: "utf8", flag: "wx" });
appendFile — Log-Style Writes
For append-only patterns (logging, audit trails), appendFile opens the file in append mode:
var fsPromises = require("fs/promises");
async function appendLog(logPath, message) {
var timestamp = new Date().toISOString();
var entry = timestamp + " " + message + "\n";
await fsPromises.appendFile(logPath, entry, "utf8");
}
For high-frequency logging, avoid appendFile in a loop — each call opens and closes the file handle. Use createWriteStream instead.
createWriteStream — High-Throughput Writes
var fs = require("fs");
function createLogger(logPath) {
var stream = fs.createWriteStream(logPath, { flags: "a", encoding: "utf8" });
return {
log: function(message) {
stream.write(new Date().toISOString() + " " + message + "\n");
},
close: function() {
return new Promise(function(resolve) {
stream.end(resolve);
});
}
};
}
var logger = createLogger("/var/log/app/events.log");
logger.log("Application started");
Write streams keep the file descriptor open and buffer writes internally, which is dramatically faster for high-frequency writes.
Directory Operations
var fsPromises = require("fs/promises");
var path = require("path");
// Create directory (recursive creates parent directories)
async function ensureDir(dirPath) {
await fsPromises.mkdir(dirPath, { recursive: true });
}
// List directory contents
async function listFiles(dirPath) {
var entries = await fsPromises.readdir(dirPath, { withFileTypes: true });
var files = entries.filter(function(e) { return e.isFile(); }).map(function(e) { return e.name; });
var dirs = entries.filter(function(e) { return e.isDirectory(); }).map(function(e) { return e.name; });
return { files: files, directories: dirs };
}
// Remove directory recursively (Node.js v14.14+)
async function removeDir(dirPath) {
await fsPromises.rm(dirPath, { recursive: true, force: true });
}
The withFileTypes: true option on readdir returns Dirent objects instead of strings, letting you distinguish files from directories without a separate stat call. This is a meaningful performance improvement when listing directories with thousands of entries.
Note: fs.rmdir with { recursive: true } is deprecated since Node.js v16. Use fs.rm instead.
File Metadata and Existence Checks
var fsPromises = require("fs/promises");
async function getFileInfo(filePath) {
try {
var stats = await fsPromises.stat(filePath);
return {
size: stats.size,
created: stats.birthtime,
modified: stats.mtime,
isFile: stats.isFile(),
isDirectory: stats.isDirectory(),
permissions: "0" + (stats.mode & 0o777).toString(8)
};
} catch (err) {
if (err.code === "ENOENT") return null;
throw err;
}
}
// Check if a file exists and is readable
// (the access constants live on the core fs module; fsPromises.constants was added later)
var fs = require("fs");
async function isReadable(filePath) {
  try {
    await fsPromises.access(filePath, fs.constants.R_OK);
    return true;
  } catch (err) {
    return false;
  }
}
Do not use fs.exists or fs.existsSync. They are deprecated and introduce race conditions — the file can be deleted between the existence check and the subsequent read. Instead, attempt the operation directly and handle the ENOENT error.
Watching Files
fs.watch — Built-in
The built-in fs.watch is available on all platforms but has inconsistent behavior:
var fs = require("fs");
var path = require("path");
var watcher = fs.watch("./data", { recursive: true }, function(eventType, filename) {
if (filename) {
console.log(eventType + ": " + filename);
}
});
watcher.on("error", function(err) {
console.error("Watcher error:", err);
});
// Clean up when done
process.on("SIGINT", function() {
watcher.close();
process.exit(0);
});
The recursive option only works on macOS and Windows. On Linux, you must watch each subdirectory individually or use a library.
chokidar — Production-Grade Watching
For production use, chokidar is the standard:
var chokidar = require("chokidar");
var watcher = chokidar.watch("./input", {
ignored: /(^|[\/\\])\../, // ignore dotfiles
persistent: true,
awaitWriteFinish: {
stabilityThreshold: 2000,
pollInterval: 100
}
});
watcher
.on("add", function(filePath) { console.log("File added:", filePath); })
.on("change", function(filePath) { console.log("File changed:", filePath); })
.on("unlink", function(filePath) { console.log("File removed:", filePath); })
.on("error", function(err) { console.error("Watcher error:", err); });
The awaitWriteFinish option is critical for real-world use. Without it, chokidar fires events while a file is still being written, leading to partial reads. The stability threshold waits until the file size has not changed for the specified duration before emitting the event.
Temporary Files and Directories
var fsPromises = require("fs/promises");
var os = require("os");
var path = require("path");
async function withTempDir(fn) {
var tmpDir = await fsPromises.mkdtemp(path.join(os.tmpdir(), "app-"));
try {
return await fn(tmpDir);
} finally {
await fsPromises.rm(tmpDir, { recursive: true, force: true });
}
}
// Usage
await withTempDir(async function(dir) {
var tmpFile = path.join(dir, "working.json");
await fsPromises.writeFile(tmpFile, JSON.stringify(data), "utf8");
// Process file...
// Temp directory is cleaned up automatically
});
The withTempDir pattern guarantees cleanup even if the operation throws. This is the correct way to handle temporary files — never leave cleanup to the caller.
Atomic File Writes
A direct writeFile to a target path is not atomic. If the process crashes mid-write, you are left with a corrupted file. The solution: write to a temporary file in the same directory, then rename.
var fsPromises = require("fs/promises");
var path = require("path");
var crypto = require("crypto");
async function atomicWrite(filePath, content) {
var dir = path.dirname(filePath);
var tmpName = "." + path.basename(filePath) + "." + crypto.randomBytes(6).toString("hex") + ".tmp";
var tmpPath = path.join(dir, tmpName);
try {
await fsPromises.writeFile(tmpPath, content, "utf8");
await fsPromises.rename(tmpPath, filePath);
} catch (err) {
// Clean up temp file on failure
try {
await fsPromises.unlink(tmpPath);
} catch (cleanupErr) {
// Ignore cleanup errors
}
throw err;
}
}
The key insight: rename is an atomic operation on the same filesystem. The file either has the old content or the new content — never a partial state. The temp file must be in the same directory (same filesystem) for rename to be atomic.
Path Module Integration
Never construct file paths with string concatenation. The path module handles platform differences:
var path = require("path");
// Joining paths
var configPath = path.join(__dirname, "config", "default.json");
// Resolving to absolute path
var absolute = path.resolve("./data/input.csv");
// Extracting components
var info = {
dir: path.dirname("/var/log/app/error.log"), // "/var/log/app"
base: path.basename("/var/log/app/error.log"), // "error.log"
ext: path.extname("/var/log/app/error.log"), // ".log"
name: path.basename("/var/log/app/error.log", ".log") // "error"
};
// Normalizing messy paths
var clean = path.normalize("/foo/bar/../baz/./qux"); // "/foo/baz/qux"
Cross-Platform Path Handling
Windows uses backslashes (\), Unix uses forward slashes (/). This matters more than most developers realize:
var path = require("path");
// Always use path.join — never concatenate with "/"
var good = path.join("data", "users", "shane.json");
var bad = "data/users/shane.json"; // works on Unix, fragile on Windows
// path.sep gives you the platform separator
console.log(path.sep); // "/" on Unix, "\\" on Windows
// Convert between formats
function toPosixPath(p) {
return p.split(path.sep).join("/");
}
// When comparing paths, normalize first
function pathsEqual(a, b) {
return path.resolve(a) === path.resolve(b);
}
When writing to logs or databases, always normalize to POSIX paths (forward slashes). Most systems (including Windows APIs) accept forward slashes, and it avoids escaping headaches.
File Locking Strategies
Node.js has no built-in file locking. For single-process applications, use a lock file:
var fsPromises = require("fs/promises");
var path = require("path");
async function acquireLock(filePath, timeout) {
var lockPath = filePath + ".lock";
var start = Date.now();
timeout = timeout || 10000;
while (true) {
try {
// wx flag: fail if lock file already exists
await fsPromises.writeFile(lockPath, String(process.pid), { flag: "wx" });
return lockPath;
} catch (err) {
if (err.code !== "EEXIST") throw err;
if (Date.now() - start > timeout) {
throw new Error("Lock acquisition timed out: " + filePath);
}
// Check for stale lock
try {
var lockContent = await fsPromises.readFile(lockPath, "utf8");
var lockPid = parseInt(lockContent, 10);
try {
process.kill(lockPid, 0); // Check if process exists
} catch (e) {
// Process is dead, remove stale lock
await fsPromises.unlink(lockPath);
continue;
}
} catch (readErr) {
// Lock file disappeared, retry
continue;
}
await new Promise(function(r) { setTimeout(r, 100); });
}
}
}
async function releaseLock(lockPath) {
try {
await fsPromises.unlink(lockPath);
} catch (err) {
if (err.code !== "ENOENT") throw err;
}
}
For multi-process or distributed scenarios, use a proper distributed lock (Redis, database advisory locks) rather than file-based locking.
Recursive Directory Traversal
var fsPromises = require("fs/promises");
var path = require("path");
async function walkDir(dirPath, callback) {
var entries = await fsPromises.readdir(dirPath, { withFileTypes: true });
var promises = entries.map(function(entry) {
var fullPath = path.join(dirPath, entry.name);
if (entry.isDirectory()) {
return walkDir(fullPath, callback);
} else if (entry.isFile()) {
return callback(fullPath, entry);
}
});
await Promise.all(promises);
}
// Usage: find all .json files
var jsonFiles = [];
await walkDir("./data", async function(filePath, entry) {
if (path.extname(filePath) === ".json") {
jsonFiles.push(filePath);
}
});
This traversal runs in parallel — all entries in a directory are processed concurrently. For directories with tens of thousands of files, you may want to limit concurrency to avoid file descriptor exhaustion.
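One way to cap concurrency is a small promise-pool runner. A sketch, not taken from any library:

```javascript
// Run an array of task-producing functions with at most `limit` in flight
function runLimited(tasks, limit) {
  return new Promise(function(resolve, reject) {
    var results = new Array(tasks.length);
    var next = 0;
    var active = 0;
    var failed = false;
    function launch() {
      if (failed) return;
      if (next >= tasks.length && active === 0) return resolve(results);
      while (active < limit && next < tasks.length) {
        (function(idx) {
          active++;
          next++;
          Promise.resolve(tasks[idx]()).then(function(value) {
            results[idx] = value;
            active--;
            launch(); // a slot opened up, start the next task
          }, function(err) {
            failed = true;
            reject(err);
          });
        })(next);
      }
    }
    launch();
  });
}

// Usage: wrap each file operation in a function so it starts lazily
// runLimited(filePaths.map(function(p) { return function() { return processFile(p); }; }), 64);
```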
File Permissions and Ownership
var fsPromises = require("fs/promises");
// Set file permissions (Unix-style octal)
await fsPromises.chmod("/var/app/config.json", 0o640); // rw-r-----
// Change ownership (requires root)
await fsPromises.chown("/var/app/data", 1000, 1000); // uid, gid
// Create file with specific permissions
async function writeSecure(filePath, content) {
var fd = await fsPromises.open(filePath, "w", 0o600);
try {
await fd.writeFile(content);
} finally {
await fd.close();
}
}
On Windows, chmod has limited effect — only the read-only flag is meaningful. If you need cross-platform permission control, use fs.access to check what you can actually do rather than relying on mode bits.
Error Handling Patterns
Robust error handling for file operations goes beyond a basic try/catch:
var fsPromises = require("fs/promises");
async function safeReadFile(filePath, defaultValue) {
try {
var content = await fsPromises.readFile(filePath, "utf8");
return content;
} catch (err) {
switch (err.code) {
case "ENOENT":
// File does not exist
return defaultValue !== undefined ? defaultValue : null;
case "EACCES":
// Permission denied
throw new Error("Permission denied reading " + filePath);
case "EISDIR":
// Path is a directory, not a file
throw new Error("Expected file but found directory: " + filePath);
case "EMFILE":
// Too many open files
throw new Error("File descriptor limit reached. Increase ulimit or reduce concurrency.");
default:
throw err;
}
}
}
The EMFILE error deserves special attention. Node.js applications that open many files concurrently (parallel processing, bulk imports) will hit the OS file descriptor limit. On Linux, the default is often 1024. Use the graceful-fs package as a drop-in replacement — it queues open calls when the limit is reached.
Large File Processing with Streams
Streams are the correct approach for processing files that do not fit in memory. Here is a pattern for transforming data through a pipeline:
var fs = require("fs");
var stream = require("stream");
var util = require("util");
var zlib = require("zlib");
var pipeline = util.promisify(stream.pipeline);
async function compressFile(inputPath, outputPath) {
await pipeline(
fs.createReadStream(inputPath),
zlib.createGzip(),
fs.createWriteStream(outputPath)
);
}
async function transformFile(inputPath, outputPath, transformFn) {
var transform = new stream.Transform({
transform: function(chunk, encoding, callback) {
try {
var result = transformFn(chunk.toString());
callback(null, result);
} catch (err) {
callback(err);
}
}
});
await pipeline(
fs.createReadStream(inputPath, "utf8"),
transform,
fs.createWriteStream(outputPath, "utf8")
);
}
Always use stream.pipeline (or util.promisify(stream.pipeline)) instead of manually piping. Pipeline handles error propagation and cleanup of all streams in the chain. Manual .pipe() calls leak resources when errors occur mid-stream.
Complete Working Example: CSV Processing Pipeline
Here is a production-ready utility that watches an input directory for CSV files, processes them, writes results to an output directory, and archives the originals:
var fs = require("fs");
var fsPromises = require("fs/promises");
var path = require("path");
var readline = require("readline");
var crypto = require("crypto");
var chokidar = require("chokidar");
// Configuration
var CONFIG = {
inputDir: path.resolve("./pipeline/input"),
outputDir: path.resolve("./pipeline/output"),
archiveDir: path.resolve("./pipeline/archive"),
errorDir: path.resolve("./pipeline/errors"),
pollStability: 2000
};
// Ensure all directories exist
async function initDirectories() {
var dirs = [CONFIG.inputDir, CONFIG.outputDir, CONFIG.archiveDir, CONFIG.errorDir];
for (var i = 0; i < dirs.length; i++) {
await fsPromises.mkdir(dirs[i], { recursive: true });
}
console.log("Directories initialized");
}
// Parse a CSV file line by line using streams
async function parseCSV(filePath) {
var input = fs.createReadStream(filePath, "utf8");
var rl = readline.createInterface({ input: input, crlfDelay: Infinity });
var headers = null;
var rows = [];
var lineNum = 0;
var errors = [];
for await (var line of rl) {
lineNum++;
if (!line.trim()) continue;
var fields = line.split(",").map(function(f) { return f.trim(); });
if (!headers) {
headers = fields;
continue;
}
if (fields.length !== headers.length) {
errors.push("Line " + lineNum + ": expected " + headers.length + " fields, got " + fields.length);
continue;
}
var row = {};
headers.forEach(function(h, idx) {
row[h] = fields[idx];
});
rows.push(row);
}
return { headers: headers, rows: rows, errors: errors };
}
// Transform data — example: normalize emails, compute derived fields
function transformRow(row) {
var transformed = {};
Object.keys(row).forEach(function(key) {
var normalizedKey = key.toLowerCase().replace(/\s+/g, "_");
var value = row[key];
// Normalize email fields
if (normalizedKey.indexOf("email") !== -1) {
value = value.toLowerCase().trim();
}
// Parse numeric fields
if (normalizedKey.indexOf("amount") !== -1 || normalizedKey.indexOf("price") !== -1) {
var num = parseFloat(value);
value = isNaN(num) ? value : num;
}
transformed[normalizedKey] = value;
});
// Add metadata
transformed._processed_at = new Date().toISOString();
transformed._checksum = crypto
.createHash("md5")
.update(JSON.stringify(row))
.digest("hex")
.substring(0, 8);
return transformed;
}
// Write results as JSON Lines (one JSON object per line)
async function writeResults(outputPath, rows) {
var tmpPath = outputPath + ".tmp";
var stream = fs.createWriteStream(tmpPath, "utf8");
return new Promise(function(resolve, reject) {
var i = 0;
    function writeNext() {
      // Honor backpressure: stop when write() returns false, resume on "drain"
      var ok = true;
      while (i < rows.length && ok) {
        ok = stream.write(JSON.stringify(rows[i]) + "\n");
        i++;
      }
      if (i < rows.length) {
        stream.once("drain", writeNext);
      } else {
        stream.end();
      }
    }
stream.on("finish", function() {
// Atomic rename: temp file -> final file
fsPromises.rename(tmpPath, outputPath)
.then(resolve)
.catch(reject);
});
stream.on("error", function(err) {
fsPromises.unlink(tmpPath).catch(function() {});
reject(err);
});
writeNext();
});
}
// Archive a processed file with timestamp
async function archiveFile(filePath) {
var basename = path.basename(filePath, path.extname(filePath));
var ext = path.extname(filePath);
var timestamp = new Date().toISOString().replace(/[:.]/g, "-");
var archiveName = basename + "_" + timestamp + ext;
var archivePath = path.join(CONFIG.archiveDir, archiveName);
await fsPromises.rename(filePath, archivePath);
return archivePath;
}
// Move a failed file to the error directory
async function moveToErrors(filePath, error) {
var errorName = path.basename(filePath);
var errorPath = path.join(CONFIG.errorDir, errorName);
await fsPromises.copyFile(filePath, errorPath);
await fsPromises.unlink(filePath);
// Write an error report alongside the file
var reportPath = errorPath + ".error.txt";
var report = "File: " + path.basename(filePath) + "\n";
report += "Time: " + new Date().toISOString() + "\n";
report += "Error: " + error.message + "\n";
report += "Stack: " + error.stack + "\n";
await fsPromises.writeFile(reportPath, report, "utf8");
}
// Process a single CSV file
async function processFile(filePath) {
var filename = path.basename(filePath);
console.log("Processing: " + filename);
// Verify it is still a .csv file and still exists
if (path.extname(filePath).toLowerCase() !== ".csv") {
console.log("Skipping non-CSV file: " + filename);
return;
}
try {
await fsPromises.access(filePath, fs.constants.R_OK);
} catch (err) {
console.log("File no longer accessible: " + filename);
return;
}
try {
// Parse
var parsed = await parseCSV(filePath);
if (!parsed.headers || parsed.rows.length === 0) {
throw new Error("CSV file is empty or has no data rows");
}
// Transform
var transformedRows = parsed.rows.map(transformRow);
// Build output filename
var outputName = path.basename(filePath, ".csv") + ".jsonl";
var outputPath = path.join(CONFIG.outputDir, outputName);
// Write results atomically
await writeResults(outputPath, transformedRows);
// Archive the original
var archivePath = await archiveFile(filePath);
// Summary
console.log(" Rows processed: " + transformedRows.length);
console.log(" Parse errors: " + parsed.errors.length);
console.log(" Output: " + outputName);
console.log(" Archived: " + path.basename(archivePath));
if (parsed.errors.length > 0) {
console.log(" Warnings:");
parsed.errors.forEach(function(e) { console.log(" " + e); });
}
} catch (err) {
console.error(" Failed: " + err.message);
await moveToErrors(filePath, err);
}
}
// Main: initialize and start watching
async function main() {
await initDirectories();
// Process any files already in the input directory
var existing = await fsPromises.readdir(CONFIG.inputDir);
for (var i = 0; i < existing.length; i++) {
var ext = path.extname(existing[i]).toLowerCase();
if (ext === ".csv") {
await processFile(path.join(CONFIG.inputDir, existing[i]));
}
}
// Watch for new files
var processing = {};
var watcher = chokidar.watch(CONFIG.inputDir, {
ignored: /(^|[\/\\])\../, // ignore dotfiles
persistent: true,
awaitWriteFinish: {
stabilityThreshold: CONFIG.pollStability,
pollInterval: 100
}
});
watcher.on("add", function(filePath) {
if (processing[filePath]) return;
processing[filePath] = true;
processFile(filePath).finally(function() {
delete processing[filePath];
});
});
watcher.on("error", function(err) {
console.error("Watcher error:", err);
});
console.log("Watching " + CONFIG.inputDir + " for CSV files...");
console.log("Press Ctrl+C to stop");
// Graceful shutdown
process.on("SIGINT", function() {
console.log("\nShutting down...");
watcher.close().then(function() {
console.log("Watcher closed. Goodbye.");
process.exit(0);
});
});
}
main().catch(function(err) {
console.error("Fatal error:", err);
process.exit(1);
});
This example demonstrates: streaming CSV parsing, data transformation, atomic writes (temp file + rename), file archiving with timestamps, error isolation (failed files move to an error directory with reports), file watching with write-finish detection, and graceful shutdown with cleanup.
Common Issues and Troubleshooting
1. EMFILE: too many open files. This happens when you process hundreds of files concurrently. Solution: limit concurrency with a semaphore pattern, or use graceful-fs which queues operations when file descriptors are exhausted. On Linux, check your limit with ulimit -n and increase it in /etc/security/limits.conf.
2. EPERM or EACCES on Windows. Windows locks files more aggressively than Unix. If another process (antivirus, editor, backup tool) has a file open, your Node.js process cannot write to it. Retry logic with exponential backoff is often the pragmatic solution. Also check that your Node.js process is not running as a restricted user.
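Such retry logic can be sketched as a small wrapper; the retried codes, delays, and attempt count here are illustrative choices:

```javascript
// Retry an async operation on EPERM/EACCES/EBUSY with exponential backoff
async function retryOnLock(operation, maxAttempts) {
  maxAttempts = maxAttempts || 5;
  var delay = 100;
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (err) {
      var retryable = err.code === "EPERM" || err.code === "EACCES" || err.code === "EBUSY";
      if (!retryable || attempt === maxAttempts) throw err;
      await new Promise(function(r) { setTimeout(r, delay); });
      delay *= 2; // 100ms, 200ms, 400ms, ...
    }
  }
}

// Usage:
// await retryOnLock(function() { return fsPromises.writeFile(target, data, "utf8"); });
```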
3. File watcher fires multiple events for a single change. Editors like VS Code write files by saving to a temp file, deleting the original, and renaming — triggering three events. Use chokidar's awaitWriteFinish or debounce your event handler with a short delay (200-500ms).
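A per-path debounce is only a few lines. A sketch (the 300 ms delay is an arbitrary value in that range):

```javascript
// Collapse bursts of events for the same path into one handler call
function debouncePerPath(handler, delayMs) {
  var timers = {};
  return function(filePath) {
    if (timers[filePath]) clearTimeout(timers[filePath]);
    timers[filePath] = setTimeout(function() {
      delete timers[filePath];
      handler(filePath);
    }, delayMs);
  };
}

// Usage with a watcher:
// watcher.on("change", debouncePerPath(processFile, 300));
```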
4. Path too long errors on Windows. Windows has a 260-character path limit by default. Deep node_modules trees or nested output directories can hit this. Enable long paths in the Windows registry, or use \\?\ prefix for extended-length paths. Node.js v14+ handles this better, but it can still surface in edge cases.
5. ENOSPC: no space left on device. Monitor disk space before writing. For pipelines that process many files, implement a circuit breaker that pauses processing when available disk space drops below a threshold.
Best Practices
- Always use async operations in request handlers. A single readFileSync in an Express route blocks every concurrent request. There are zero exceptions to this rule in production code.
- Stream files larger than 10 MB. If you cannot state with certainty that a file will always be small, use streams. The one time a user uploads a 500 MB file will be the one time your server crashes.
- Use atomic writes for any data you cannot afford to lose. Write to a temp file in the same directory, then rename. This costs almost nothing in performance and prevents corruption from crashes or power failures.
- Handle ENOENT gracefully — it is not always an error. A missing configuration file might mean "use defaults." A missing cache file means "rebuild the cache." Map error codes to application-level semantics rather than crashing.
- Normalize paths before comparing them. Use path.resolve() to convert relative paths to absolute, then compare. Two paths that look different as strings (./data/../data/file.txt and ./data/file.txt) can reference the same file.
- Clean up resources in finally blocks. File handles, watchers, and streams must be closed even when errors occur. The try/finally pattern and stream.pipeline ensure resources are not leaked.
- Use path.join religiously. Never construct paths with string concatenation and /. It works until someone deploys to Windows or passes a path with a trailing slash, and then it silently breaks.
- Set explicit encodings. Always pass "utf8" (or the appropriate encoding) to readFile and writeFile. Omitting it returns a Buffer for reads and defaults to UTF-8 for writes — an asymmetry that causes subtle bugs.
References
- Node.js File System Documentation — Official API reference for all fs methods
- Node.js Path Module — Path manipulation utilities
- Node.js Stream Documentation — Streams API and pipeline patterns
- chokidar on npm — Cross-platform file watching library
- graceful-fs on npm — Drop-in replacement that handles EMFILE errors
- Node.js Readline Module — Line-by-line file reading interface