
Local Development with Docker: Practical Patterns

Practical patterns and workflows for using Docker in local Node.js development, including live reloading, multi-service stacks, debugging, and performance optimization.


Docker in production is table stakes. Docker for local development is where most teams still struggle. The gap between your local environment and production causes bugs that only surface during deployment, and "works on my machine" remains the most expensive phrase in software engineering. This article covers practical patterns I use daily to make Docker a first-class local development tool for Node.js projects.

Prerequisites

  • Docker Desktop installed (v4.0+)
  • Docker Compose v2
  • Node.js 18+ (for local tooling)
  • Basic familiarity with Dockerfiles and docker-compose.yml

Bind Mounts for Live Code Reloading

The single most important pattern for local Docker development is bind mounting your source code into the container. Without this, you rebuild the image on every code change.

# docker-compose.yml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3000:3000"

The second volume line (/app/node_modules) is critical. It creates an anonymous volume that prevents your host node_modules from overwriting the container's node_modules. Your host might be macOS or Windows, but the container runs Linux. Native modules compiled for the wrong platform will crash your app.

# Dockerfile.dev
FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .

CMD ["npx", "nodemon", "app.js"]

The COPY . . line seems redundant when you have a bind mount, but it ensures the image works standalone. The bind mount overlays this at runtime.

Hot Reload with Nodemon Inside Containers

Nodemon watches for file changes and restarts your Node.js process. Inside a Docker container, you need to configure it for polling mode on macOS and Windows because filesystem events do not propagate through the virtualization layer reliably.

{
  "watch": ["src", "routes", "models"],
  "ext": "js,json,pug",
  "ignore": ["node_modules", "static", "test"],
  "delay": "1000",
  "legacyWatch": true,
  "signal": "SIGTERM"
}

Save this as nodemon.json in your project root. The legacyWatch: true setting enables polling mode. The delay prevents rapid restarts when your editor writes multiple files. The signal: SIGTERM ensures graceful shutdown.

// app.js
var http = require('http');
var app = require('./src/server');

var server = http.createServer(app);
var port = process.env.PORT || 3000;

server.listen(port, function() {
  console.log('Server running on port ' + port);
});

process.on('SIGTERM', function() {
  console.log('SIGTERM received, shutting down gracefully');
  server.close(function() {
    process.exit(0);
  });
});

Multi-Stage Dockerfiles: Dev vs Production

One Dockerfile should serve both development and production. Multi-stage builds make this clean.

# Base stage with shared dependencies
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./

# Development stage
FROM base AS development
RUN npm install
COPY . .
EXPOSE 3000
EXPOSE 9229
CMD ["npx", "nodemon", "--inspect=0.0.0.0:9229", "app.js"]

# Production dependencies only
FROM base AS production-deps
RUN npm ci --omit=dev

# Production stage
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=production-deps /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
USER node
CMD ["node", "app.js"]

Target a specific stage in docker-compose:

services:
  api:
    build:
      context: .
      target: development
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3000:3000"
      - "9229:9229"

The development stage includes devDependencies (nodemon, test frameworks, linters) and exposes the debug port. The production stage installs dependencies with npm ci --omit=dev (the modern spelling of the deprecated --only=production flag) and drops privileges to the node user.

Docker Compose for Multi-Service Stacks

Real applications need databases, caches, and message queues. Docker Compose orchestrates all of them.

version: "3.8"

services:
  api:
    build:
      context: .
      target: development
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3000:3000"
      - "9229:9229"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://devuser:devpass@postgres:5432/myapp_dev
      - REDIS_URL=redis://redis:6379
      - SMTP_HOST=mailhog
      - SMTP_PORT=1025
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpass
      POSTGRES_DB: myapp_dev
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/01-schema.sql
      - ./db/seed.sql:/docker-entrypoint-initdb.d/02-seed.sql
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U devuser -d myapp_dev"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data

  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - "1025:1025"
      - "8025:8025"

volumes:
  pgdata:
  redisdata:

Key points here: PostgreSQL has a health check so the API waits for it to be ready. The init.sql files in docker-entrypoint-initdb.d/ run automatically on first startup. MailHog captures all outgoing email on port 8025.
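Health checks only cover startup ordering; the API can still lose the database mid-session, for example after docker compose restart postgres. An app-level retry is a useful complement. This is a generic sketch under my own naming — waitFor is an illustrative helper, not part of any library:

```javascript
// Sketch: retry an async readiness check with a fixed delay, as a
// complement to compose health checks. waitFor is an illustrative
// name, not from any particular library.
function waitFor(check, attempts, delayMs) {
  return check().catch(function (err) {
    if (attempts <= 1) {
      throw err; // out of retries: surface the last error
    }
    return new Promise(function (resolve) {
      setTimeout(resolve, delayMs);
    }).then(function () {
      return waitFor(check, attempts - 1, delayMs);
    });
  });
}

// Example usage with a pg pool like the one in this stack:
// waitFor(function () { return pool.query('SELECT 1'); }, 10, 1000)
//   .then(startServer);
```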

Environment Variable Management

Never hardcode credentials, even in development. Use .env files with docker-compose.

# .env
NODE_ENV=development
DATABASE_URL=postgresql://devuser:devpass@postgres:5432/myapp_dev
REDIS_URL=redis://redis:6379
SMTP_HOST=mailhog
SMTP_PORT=1025
JWT_SECRET=local-dev-secret-not-for-production
API_KEY=dev-api-key-12345

# docker-compose.yml
services:
  api:
    env_file:
      - .env

Keep a .env.example in version control with placeholder values. Add .env to .gitignore.

# .env.example
NODE_ENV=development
DATABASE_URL=postgresql://devuser:devpass@postgres:5432/myapp_dev
REDIS_URL=redis://redis:6379
SMTP_HOST=mailhog
SMTP_PORT=1025
JWT_SECRET=change-me
API_KEY=change-me
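A missing or stale .env tends to surface as confusing downstream errors. A small fail-fast check at startup turns that into one clear message — requireEnv here is an illustrative helper, not a standard API:

```javascript
// Sketch: fail fast on missing environment variables so a forgotten or
// stale .env produces one clear error instead of a crash deep in the app.
// requireEnv is an illustrative helper, not part of any library.
function requireEnv(env, names) {
  var missing = names.filter(function (name) {
    return !env[name];
  });
  if (missing.length > 0) {
    throw new Error('Missing required env vars: ' + missing.join(', '));
  }
}

// Call once at the top of app.js:
// requireEnv(process.env, ['DATABASE_URL', 'REDIS_URL', 'SMTP_HOST']);
```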

For multiple environments, use override files:

# Base configuration
docker-compose.yml

# Development overrides (loaded automatically)
docker-compose.override.yml

# Testing overrides
docker-compose.test.yml

# Run tests with test-specific overrides
docker compose -f docker-compose.yml -f docker-compose.test.yml up

Debugging Node.js Inside Containers

The --inspect flag enables the V8 debugger protocol. Map port 9229 from the container to your host.

# In the development stage
CMD ["npx", "nodemon", "--inspect=0.0.0.0:9229", "app.js"]

The 0.0.0.0 binding is essential. Without it, the debugger only listens on 127.0.0.1 inside the container, which is unreachable from your host.

VS Code launch configuration:

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Docker: Attach to Node",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "address": "localhost",
      "localRoot": "${workspaceFolder}",
      "remoteRoot": "/app",
      "restart": true,
      "skipFiles": ["<node_internals>/**"]
    }
  ]
}

The restart: true setting automatically reconnects when nodemon restarts the process. The localRoot/remoteRoot mapping ensures breakpoints align between your editor and the container filesystem.

For one-off debugging sessions:

docker compose exec api node --inspect-brk=0.0.0.0:9229 scripts/migrate.js

The --inspect-brk flag pauses execution on the first line, giving you time to attach the debugger.

Database Seeding and Fixture Management

PostgreSQL's docker-entrypoint-initdb.d directory runs .sql and .sh files alphabetically on first container creation only. For repeatable seeding, build a script.

// db/seed.js
var pg = require('pg');

var pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL
});

var seeds = [
  {
    text: "INSERT INTO users (email, name, role) VALUES ($1, $2, $3) ON CONFLICT (email) DO NOTHING",
    values: ['[email protected]', 'Admin User', 'admin']
  },
  {
    text: "INSERT INTO users (email, name, role) VALUES ($1, $2, $3) ON CONFLICT (email) DO NOTHING",
    values: ['[email protected]', 'Dev User', 'developer']
  },
  {
    text: "INSERT INTO categories (name, slug) VALUES ($1, $2) ON CONFLICT (slug) DO NOTHING",
    values: ['Technology', 'technology']
  },
  {
    text: "INSERT INTO categories (name, slug) VALUES ($1, $2) ON CONFLICT (slug) DO NOTHING",
    values: ['Engineering', 'engineering']
  }
];

function runSeeds() {
  var completed = 0;

  seeds.forEach(function(seed) {
    pool.query(seed.text, seed.values, function(err) {
      if (err) {
        console.error('Seed error:', err.message);
      }
      completed++;
      if (completed === seeds.length) {
        console.log('Seeding complete: ' + completed + ' records processed');
        pool.end();
      }
    });
  });
}

pool.query('SELECT 1', function(err) {
  if (err) {
    console.error('Database not ready:', err.message);
    process.exit(1);
  }
  runSeeds();
});

Add a seed command to docker-compose:

docker compose exec api node db/seed.js

For full database resets during development:

# Drop and recreate
docker compose down -v  # removes volumes
docker compose up -d    # recreates with fresh init scripts

Development-Only Services

Local development benefits from services you would never run in production.

services:
  # Email capture - catches all SMTP traffic
  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - "1025:1025"   # SMTP
      - "8025:8025"   # Web UI

  # Database admin UI
  adminer:
    image: adminer:latest
    ports:
      - "8080:8080"
    environment:
      ADMINER_DEFAULT_SERVER: postgres

  # Redis GUI
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      REDIS_HOSTS: local:redis:6379
    ports:
      - "8081:8081"

MailHog is indispensable. Configure your app's SMTP settings to point at MailHog and every email your app sends appears in the web UI at http://localhost:8025. No more accidentally sending test emails to real users.

Adminer provides a lightweight database GUI at http://localhost:8080. Redis Commander shows your cache contents at http://localhost:8081.

Put these in docker-compose.override.yml so they load automatically in development but not in CI or production.

Performance Tips for Docker Desktop

Docker Desktop on macOS and Windows runs containers inside a Linux VM. Filesystem operations through bind mounts cross the VM boundary and are significantly slower than native Linux.

Use targeted bind mounts instead of mounting the entire project:

volumes:
  - ./src:/app/src
  - ./routes:/app/routes
  - ./models:/app/models
  - ./views:/app/views
  - ./app.js:/app/app.js

This reduces the number of files Docker watches and speeds up filesystem operations.

Optimize .dockerignore:

node_modules
.git
.env
*.md
test
coverage
.nyc_output
.vscode
.idea

Enable Docker Desktop's VirtioFS file sharing (macOS). It is 2-5x faster than the default gRPC-FUSE. Check Docker Desktop > Settings > General > File sharing implementation.

Use named volumes for heavy write paths:

volumes:
  - .:/app
  - /app/node_modules     # anonymous volume
  - logs:/app/logs         # named volume for logs
  - uploads:/app/uploads   # named volume for uploads

Named volumes live inside the VM and have near-native performance.

Limit build context size. A large .git directory or node_modules folder in the build context slows every build. The .dockerignore file is your first line of defense.

# Check your build context size (GNU du; on macOS, brew install coreutils and use gdu)
du -sh --exclude=node_modules --exclude=.git .
# Target: under 50MB for fast builds

Local SSL/TLS with Self-Signed Certificates

Some features require HTTPS locally — secure cookies, service workers, OAuth callbacks.

# Generate self-signed certificate
mkdir -p certs
openssl req -x509 -nodes -days 365 \
  -newkey rsa:2048 \
  -keyout certs/localhost.key \
  -out certs/localhost.crt \
  -subj "/CN=localhost"

Use an nginx reverse proxy in front of your Node.js app:

services:
  nginx:
    image: nginx:alpine
    ports:
      - "443:443"
    volumes:
      - ./certs:/etc/nginx/certs:ro
      - ./nginx.dev.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - api

# nginx.dev.conf
server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate /etc/nginx/certs/localhost.crt;
    ssl_certificate_key /etc/nginx/certs/localhost.key;

    location / {
        proxy_pass http://api:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Add certs/ to .gitignore. Each developer generates their own.

Complete Working Example

Here is a full local development setup for an Express.js application with PostgreSQL, Redis, and MailHog.

# docker-compose.yml
version: "3.8"

services:
  api:
    build:
      context: .
      target: development
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3000:3000"
      - "9229:9229"
    env_file:
      - .env
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpass
      POSTGRES_DB: myapp_dev
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./db/schema.sql:/docker-entrypoint-initdb.d/01-schema.sql
      - ./db/seed.sql:/docker-entrypoint-initdb.d/02-seed.sql
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U devuser -d myapp_dev"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data

  mailhog:
    image: mailhog/mailhog:latest
    ports:
      - "1025:1025"
      - "8025:8025"

  adminer:
    image: adminer:latest
    ports:
      - "8080:8080"
    environment:
      ADMINER_DEFAULT_SERVER: postgres

volumes:
  pgdata:
  redisdata:

# Dockerfile
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./

FROM base AS development
RUN npm install
COPY . .
EXPOSE 3000 9229
CMD ["npx", "nodemon", "--inspect=0.0.0.0:9229", "app.js"]

FROM base AS production-deps
RUN npm ci --omit=dev

FROM node:20-alpine AS production
WORKDIR /app
COPY --from=production-deps /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
USER node
CMD ["node", "app.js"]

// app.js
var express = require('express');
var pg = require('pg');
var redis = require('redis');
var nodemailer = require('nodemailer');

var app = express();
app.use(express.json());

// Database connection
var pool = new pg.Pool({
  connectionString: process.env.DATABASE_URL
});

// Redis connection
var redisClient = redis.createClient({
  url: process.env.REDIS_URL
});

redisClient.on('error', function(err) {
  console.error('Redis error:', err.message);
});

redisClient.connect().catch(function(err) {
  console.error('Redis connect failed:', err.message);
});

// Mail transport (MailHog in development)
var transporter = nodemailer.createTransport({
  host: process.env.SMTP_HOST,
  port: parseInt(process.env.SMTP_PORT, 10),
  ignoreTLS: true
});

app.get('/health', function(req, res) {
  pool.query('SELECT 1', function(err) {
    if (err) {
      return res.status(503).json({ status: 'unhealthy', db: err.message });
    }
    res.json({ status: 'healthy', uptime: process.uptime() });
  });
});

app.get('/api/users', function(req, res) {
  var cacheKey = 'users:all';

  redisClient.get(cacheKey).then(function(cached) {
    if (cached) {
      return res.json(JSON.parse(cached));
    }

    pool.query('SELECT id, email, name FROM users ORDER BY name', function(err, result) {
      if (err) {
        return res.status(500).json({ error: err.message });
      }
      redisClient.setEx(cacheKey, 300, JSON.stringify(result.rows));
      res.json(result.rows);
    });
  });
});

app.post('/api/invite', function(req, res) {
  var email = req.body.email;

  transporter.sendMail({
    from: '[email protected]',
    to: email,
    subject: 'You are invited!',
    html: '<h1>Welcome</h1><p>Click here to join.</p>'
  }, function(err) {
    if (err) {
      return res.status(500).json({ error: err.message });
    }
    res.json({ sent: true });
  });
});

var port = process.env.PORT || 3000;
var server = app.listen(port, function() {
  console.log('Server running on port ' + port);
});

process.on('SIGTERM', function() {
  console.log('Shutting down...');
  server.close(function() {
    pool.end();
    redisClient.quit();
    process.exit(0);
  });
});

module.exports = app;

Start everything with one command:

docker compose up --build

Access points:

  • API: http://localhost:3000
  • Node.js debugger: localhost:9229
  • MailHog web UI: http://localhost:8025
  • Adminer: http://localhost:8080
  • PostgreSQL: localhost:5432 (from host tools)
  • Redis: localhost:6379 (from host tools)

Common Issues and Troubleshooting

1. node_modules Platform Mismatch

Error: /app/node_modules/bcrypt/lib/binding/napi-v3/bcrypt_lib.node: invalid ELF header

Your host node_modules leaked into the container via bind mount. Ensure the anonymous volume trick is in place:

volumes:
  - .:/app
  - /app/node_modules  # This line prevents host node_modules from being used

If the problem persists, remove the host node_modules and rebuild:

rm -rf node_modules
docker compose up --build

2. File Changes Not Detected

[nodemon] watching extensions: js,json,pug
[nodemon] watching 245 files
# ...but changes are not triggering restarts

On macOS or Windows, enable polling in nodemon.json:

{
  "legacyWatch": true,
  "delay": "1000"
}

Alternatively, set the environment variable:

environment:
  - CHOKIDAR_USEPOLLING=true

3. Port Already in Use

Error response from daemon: driver failed programming external connectivity:
Bind for 0.0.0.0:5432 failed: port is already allocated

A local PostgreSQL instance is using port 5432. Either stop it or remap:

ports:
  - "5433:5432"  # Use 5433 on host, 5432 in container

Note that containers on the compose network are unaffected — the api service still reaches the database at postgres:5432, so its DATABASE_URL stays the same. Only connection strings used by tools running on your host need the new port:

DATABASE_URL=postgresql://devuser:devpass@localhost:5433/myapp_dev

4. Database Initialization Scripts Not Running

docker compose up -d postgres
# Schema tables are missing

Init scripts in /docker-entrypoint-initdb.d/ only run when the data volume is empty. If you changed your schema:

docker compose down -v   # Remove volumes
docker compose up -d     # Recreate from scratch

5. Container Exits Immediately

api_1  | exited with code 1

Check logs for the actual error:

docker compose logs api

Common causes: missing environment variables, syntax errors in code, port conflicts inside the container. Add restart: unless-stopped to auto-recover from transient failures during development.

Best Practices

  • Use .dockerignore religiously. Exclude node_modules, .git, coverage, and test output. A smaller build context means faster builds.
  • Never use latest tags for database images. Pin to specific versions like postgres:16-alpine. An unexpected upgrade can break your schema.
  • Mount only what you need. Targeted bind mounts (./src:/app/src) outperform full project mounts on macOS and Windows.
  • Use health checks with depends_on. The condition: service_healthy option prevents your app from starting before the database is ready, eliminating race condition crashes.
  • Keep development and production Dockerfiles in sync. Multi-stage builds with a shared base stage ensure both environments use the same Node.js version and base dependencies.
  • Commit docker-compose.yml but not .env. Commit .env.example with safe defaults so new team members can get started immediately.
  • Use named volumes for database data. Anonymous volumes are easy to lose. Named volumes survive docker compose down (without -v).
  • Run docker system prune weekly. Dangling images and stopped containers accumulate fast. Add --volumes cautiously — it removes unused named volumes too.
  • Add a Makefile or scripts for common commands. make dev, make seed, make test are easier to remember than long docker compose commands.
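For that last point, a minimal Makefile might look like the following. Target names are illustrative, and each recipe wraps a command from this article (the test target assumes a docker-compose.test.yml as described above):

```makefile
# Illustrative targets — adjust service names to match your compose file.
.PHONY: dev seed test reset-db

dev:       ## Start the full stack with live reload
	docker compose up --build

seed:      ## Re-run the seed script inside the api container
	docker compose exec api node db/seed.js

test:      ## Run the suite with test-specific overrides
	docker compose -f docker-compose.yml -f docker-compose.test.yml run --rm api npm test

reset-db:  ## Destroy volumes and recreate the database from init scripts
	docker compose down -v && docker compose up -d
```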
