Docker Networking Deep Dive
A comprehensive guide to Docker networking concepts, drivers, and patterns for building multi-service Node.js applications with proper network isolation and service discovery.
Docker networking determines how containers talk to each other, to the host machine, and to the outside world. Getting networking right is the difference between a smooth multi-service architecture and hours of debugging connection refused errors. This guide covers every Docker network driver, practical patterns for Node.js applications, and the debugging techniques I reach for when things break.
Prerequisites
- Docker Desktop v4.0+ or Docker Engine on Linux
- Docker Compose v2
- Basic understanding of TCP/IP, ports, and DNS
- Familiarity with Docker containers and images
Docker Network Drivers
Docker ships with several network drivers, each serving a specific purpose.
Bridge Network (Default)
Every Docker installation creates a default bridge network called bridge. Containers attached to it can reach each other by IP address but not by name.
# List networks
docker network ls
# Output:
# NETWORK ID NAME DRIVER SCOPE
# a1b2c3d4e5f6 bridge bridge local
# f6e5d4c3b2a1 host host local
# 1a2b3c4d5e6f none null local
The default bridge has limitations: containers can communicate only via IP address, and name-based discovery requires the deprecated --link flag. For any real work, create user-defined bridge networks.
User-Defined Bridge Networks
User-defined bridges are what you should use for almost everything.
# Create a network
docker network create myapp-network
# Run containers on it
docker run -d --name api --network myapp-network node:20-alpine sleep 3600
docker run -d --name postgres --network myapp-network postgres:16-alpine
Containers on a user-defined bridge get automatic DNS resolution. The api container can reach PostgreSQL at postgres:5432 — Docker's embedded DNS server resolves the container name to its IP.
// Inside the api container, this just works
const pg = require('pg');

const pool = new pg.Pool({
  host: 'postgres', // Docker resolves this to the container's IP
  port: 5432,
  user: 'postgres',
  password: 'secret',
  database: 'myapp'
});
Host Network
The host driver removes network isolation entirely. The container shares the host's network stack.
docker run --network host node:20-alpine node -e "
const http = require('http');
http.createServer(function (req, res) {
  res.end('Hello from host network');
}).listen(3000);
console.log('Listening on host port 3000');
"
No port mapping needed — port 3000 inside the container IS port 3000 on the host. This eliminates the NAT overhead and gives near-native network performance.
Use cases: performance-critical applications, containers that need to bind many dynamic ports. Downsides: no port isolation, and it only behaves as expected on Linux. Docker Desktop runs containers inside a VM, so host networking there attaches to the VM's network stack, not your machine's.
None Network
The none driver disables all networking. The container gets only a loopback interface.
docker run --network none alpine ip addr
# Output:
# 1: lo: <LOOPBACK,UP,LOWER_UP> ...
# inet 127.0.0.1/8 scope host lo
Use this for batch processing jobs that need no network access — it is a security hardening measure.
Overlay Network
Overlay networks span multiple Docker hosts. They are the backbone of Docker Swarm service-to-service communication.
docker network create --driver overlay --attachable my-overlay
The --attachable flag lets standalone containers join the overlay. Without it, only Swarm services can use it. For single-host development, you will not need overlay networks.
Macvlan Network
Macvlan assigns a real MAC address to each container, making it appear as a physical device on the network.
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 \
my-macvlan
Use case: legacy applications that expect to be on a specific network segment or need to be discoverable via network scanning.
DNS Resolution and Service Discovery
Docker's embedded DNS server (127.0.0.11) resolves container names, network aliases, and service names.
# Check DNS resolution from inside a container
docker run --rm --network myapp-network alpine nslookup postgres
# Output:
# Server: 127.0.0.11
# Address: 127.0.0.11:53
#
# Non-authoritative answer:
# Name: postgres
# Address: 172.18.0.3
Network aliases let you give containers multiple DNS names:
docker run -d \
--name postgres-primary \
--network myapp-network \
--network-alias db \
--network-alias postgres \
postgres:16-alpine
Now the container responds to postgres-primary, db, and postgres.
In Docker Compose, every service automatically gets its service name as a DNS alias:
services:
api:
image: myapp:latest
networks:
- backend
database:
image: postgres:16-alpine
networks:
- backend
networks:
backend:
The api service can reach PostgreSQL at database:5432. Compose also creates aliases for <service>.<project> and the container name.
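In practice the service hostname usually reaches the application through an environment variable rather than a hard-coded string. A minimal sketch (the helper name parseDatabaseUrl is mine) that turns a postgresql:// URL into the config object pg.Pool expects, so renaming the database service or switching to an alias requires no code change:

```javascript
// Turn a DATABASE_URL like postgresql://user:pass@database:5432/myapp
// into the config object pg.Pool accepts. The hostname is whatever
// DNS name Compose gave the service ('database' in the example above).
function parseDatabaseUrl(databaseUrl) {
  const url = new URL(databaseUrl);
  return {
    host: url.hostname,                     // resolved by Docker's DNS
    port: Number(url.port) || 5432,
    user: decodeURIComponent(url.username),
    password: decodeURIComponent(url.password),
    database: url.pathname.replace(/^\//, '')
  };
}

// Usage: const pool = new pg.Pool(parseDatabaseUrl(process.env.DATABASE_URL));
module.exports = { parseDatabaseUrl };
```

Note that pg.Pool also accepts a connectionString option directly; the explicit parse is useful when you need to override individual fields.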
Port Mapping and Publishing
Port mapping connects a host port to a container port through Docker's NAT layer.
# Map host port 8080 to container port 3000
docker run -p 8080:3000 myapp
# Map to specific host interface
docker run -p 127.0.0.1:3000:3000 myapp
# Map a range
docker run -p 8000-8010:8000-8010 myapp
# Let Docker pick a random host port
docker run -p 3000 myapp
docker port <container_id> # shows the assigned port
Binding to 127.0.0.1 restricts access to the local machine. Binding to 0.0.0.0 (the default) exposes the port to all interfaces, including your LAN.
# docker-compose.yml
services:
api:
ports:
- "3000:3000" # All interfaces
- "127.0.0.1:9229:9229" # Localhost only (debug port)
postgres:
ports:
- "127.0.0.1:5432:5432" # Localhost only
Always bind database and debug ports to 127.0.0.1 in development. There is no reason to expose PostgreSQL to your entire network.
Container-to-Container Communication
Containers on the same user-defined network communicate directly without port publishing. Port mapping is only needed for host-to-container access.
services:
api:
build: .
ports:
- "3000:3000" # Exposed to host
networks:
- backend
worker:
build: ./worker
# No ports needed - only talks to redis and postgres
networks:
- backend
postgres:
image: postgres:16-alpine
# ports: only expose if you need host access for debugging
networks:
- backend
redis:
image: redis:7-alpine
networks:
- backend
networks:
backend:
The worker service communicates with postgres and redis by name. No ports are published because no external access is needed.
// worker/index.js
const pg = require('pg');
const redis = require('redis');

const pool = new pg.Pool({
  host: 'postgres', // Docker DNS resolution
  port: 5432,
  user: 'worker',
  password: 'secret',
  database: 'myapp'
});

const redisClient = redis.createClient({
  url: 'redis://redis:6379' // Service name as hostname
});
redisClient.connect(); // node-redis v4+ requires an explicit connect
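DNS resolution working does not mean the database is ready: the worker can start before postgres accepts connections. A retry loop with backoff is the usual application-side companion to Compose healthchecks. A sketch, with the helper name connectWithRetry my own; the connect function is injected so it works with any client:

```javascript
// Retry an async connect function with exponential backoff.
// 'connect' is any function returning a Promise,
// e.g. function () { return pool.query('SELECT 1'); }
function connectWithRetry(connect, retries, delayMs) {
  return connect().catch(function (err) {
    if (retries <= 0) {
      throw err; // out of attempts, surface the real error
    }
    console.log('connect failed, retrying in ' + delayMs + 'ms:', err.message);
    return new Promise(function (resolve) {
      setTimeout(resolve, delayMs);
    }).then(function () {
      return connectWithRetry(connect, retries - 1, delayMs * 2);
    });
  });
}

// Usage: connectWithRetry(function () { return pool.query('SELECT 1'); }, 5, 200)
module.exports = { connectWithRetry };
```

Capping retries matters: if the database is genuinely misconfigured you want the container to exit with a clear error rather than retry forever.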
Connecting to Host Services
Sometimes your container needs to reach a service running on the host machine — maybe a locally running API or database.
On Docker Desktop (macOS/Windows):
const pg = require('pg');

const pool = new pg.Pool({
  host: 'host.docker.internal', // Resolves to the host machine
  port: 5432,
  user: 'postgres',
  password: 'secret',
  database: 'myapp'
});
On Linux, host.docker.internal does not exist by default. Add it in docker-compose:
services:
api:
extra_hosts:
- "host.docker.internal:host-gateway"
Or use the host network driver on Linux for direct access.
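Because the host name differs by platform, a common pattern is to read it from an environment variable with a sensible fallback. A sketch: the helper name resolveDbHost and the DB_HOST / RUNNING_IN_DOCKER variables are conventions I am assuming here, not Docker features.

```javascript
// Pick the database host: an explicit env var wins, then the
// Docker Desktop host alias, then plain localhost for non-Docker runs.
function resolveDbHost(env) {
  if (env.DB_HOST) {
    return env.DB_HOST;                // explicit, e.g. a Compose service name
  }
  if (env.RUNNING_IN_DOCKER === 'true') {
    return 'host.docker.internal';     // container -> host machine
  }
  return 'localhost';                  // running directly on the host
}

// Usage: const pool = new pg.Pool({ host: resolveDbHost(process.env), port: 5432 });
module.exports = { resolveDbHost };
```

On Linux, pair this with the extra_hosts entry above so host.docker.internal resolves inside the container.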
Network Isolation and Security
Multiple networks create isolation boundaries. Containers on different networks cannot communicate even if they are on the same host.
services:
nginx:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
networks:
- frontend
api:
build: .
networks:
- frontend
- backend
worker:
build: ./worker
networks:
- backend
postgres:
image: postgres:16-alpine
networks:
- backend
redis:
image: redis:7-alpine
networks:
- backend
networks:
frontend:
backend:
In this setup:
- nginx can reach api (both on frontend)
- api can reach postgres and redis (it is on backend)
- nginx cannot reach postgres or redis (not on backend)
- worker cannot be reached from nginx (not on frontend)
The api service acts as a bridge between the two networks. This mirrors a production architecture where the reverse proxy only talks to the application layer, never directly to the data layer.
Docker Compose Networking
Docker Compose creates a default network for each project automatically. The network name is <project-name>_default.
# Project in directory "myapp"
docker compose up -d
docker network ls
# Output includes: myapp_default
Every service joins this default network. For most projects, this is sufficient. Define custom networks when you need isolation.
# Explicit network configuration
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true # No external access
The internal: true flag prevents containers on that network from reaching the internet. This is useful for database networks — there is no reason your PostgreSQL container needs to call external APIs.
Network aliases in Compose:
services:
postgres-primary:
image: postgres:16-alpine
networks:
backend:
aliases:
- db
- postgres
Debugging Network Issues
Inspect Network Configuration
# Show network details
docker network inspect myapp_backend
# Output (truncated):
# {
# "Name": "myapp_backend",
# "IPAM": {
# "Config": [{ "Subnet": "172.20.0.0/16", "Gateway": "172.20.0.1" }]
# },
# "Containers": {
# "abc123": { "Name": "myapp-api-1", "IPv4Address": "172.20.0.2/16" },
# "def456": { "Name": "myapp-postgres-1", "IPv4Address": "172.20.0.3/16" }
# }
# }
Test Connectivity from Inside a Container
# Exec into a running container
docker compose exec api sh
# Test DNS resolution
nslookup postgres
# Server: 127.0.0.11
# Name: postgres
# Address: 172.20.0.3
# Test TCP connectivity
nc -zv postgres 5432
# postgres (172.20.0.3:5432) open
# Test HTTP
wget -qO- http://api:3000/health
Use a Debug Container
When your application container does not have network tools, launch a dedicated debug container:
docker run --rm -it --network myapp_backend nicolaka/netshoot
# Inside netshoot, you have curl, dig, nslookup, tcpdump, iperf, etc.
dig postgres
curl http://api:3000/health
tcpdump -i eth0 port 5432
The netshoot image is built for network debugging and includes every tool you could need.
Check Port Bindings
docker compose ps
# NAME PORTS
# myapp-api-1 0.0.0.0:3000->3000/tcp
# myapp-postgres-1 127.0.0.1:5432->5432/tcp
# Check what's listening on a port from the host
# Linux/macOS:
ss -tlnp | grep 3000
# Windows:
netstat -ano | findstr 3000
Network Performance for Node.js
Network overhead in Docker is minimal on Linux (native bridge networking). On macOS and Windows, traffic passes through a VM, adding 1-3ms latency.
For performance-sensitive local development:
services:
api:
# Use host network on Linux for zero overhead
network_mode: host
For production containers, keep connections persistent. Opening new TCP connections per request adds latency.
// Connection pooling eliminates per-request connection overhead
const pg = require('pg');

const pool = new pg.Pool({
  host: 'postgres',
  port: 5432,
  user: 'appuser',
  password: 'secret',
  database: 'myapp',
  max: 20, // Maximum pool size
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000
});

// Redis with keep-alive
const redis = require('redis');

const redisClient = redis.createClient({
  url: 'redis://redis:6379',
  socket: {
    keepAlive: 5000
  }
});
Complete Working Example
A multi-service architecture with network isolation between frontend, API, and data layers.
# docker-compose.yml
version: "3.8"
services:
# Reverse proxy - only on frontend network
nginx:
image: nginx:alpine
ports:
- "80:80"
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
networks:
- frontend
depends_on:
- api
# API server - bridges frontend and backend
api:
build:
context: .
target: development
volumes:
- .:/app
- /app/node_modules
environment:
- DATABASE_URL=postgresql://appuser:secret@db:5432/myapp
- REDIS_URL=redis://cache:6379
networks:
- frontend
- backend
depends_on:
db:
condition: service_healthy
# Background worker - only on backend network
worker:
build:
context: ./worker
environment:
- DATABASE_URL=postgresql://appuser:secret@db:5432/myapp
- REDIS_URL=redis://cache:6379
networks:
- backend
depends_on:
db:
condition: service_healthy
# PostgreSQL - backend only, internal network
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: appuser
POSTGRES_PASSWORD: secret
POSTGRES_DB: myapp
volumes:
- pgdata:/var/lib/postgresql/data
networks:
backend:
aliases:
- postgres
- database
healthcheck:
test: ["CMD-SHELL", "pg_isready -U appuser -d myapp"]
interval: 5s
timeout: 5s
retries: 5
# Redis - backend only
cache:
image: redis:7-alpine
command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
volumes:
- redisdata:/data
networks:
backend:
aliases:
- redis
networks:
frontend:
driver: bridge
backend:
driver: bridge
internal: true # No internet access for data layer
volumes:
pgdata:
redisdata:
# nginx.conf
upstream api {
server api:3000;
}
server {
listen 80;
location / {
proxy_pass http://api;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
// Verify network isolation from the API container
const pg = require('pg');

// This works - api is on the backend network
const pool = new pg.Pool({
  host: 'db', // resolves via the backend network
  port: 5432,
  user: 'appuser',
  password: 'secret',
  database: 'myapp'
});

// Test connectivity to demonstrate isolation
function testConnectivity() {
  pool.query('SELECT 1 as connected', function (err, result) {
    if (err) {
      console.error('DB connection failed:', err.message);
    } else {
      console.log('DB connected:', result.rows[0]);
    }
  });
  // The same query would fail from the nginx container because
  // nginx is not on the backend network.
  // Only api and worker can reach db and cache.
}

testConnectivity();
Test the isolation:
# Start everything
docker compose up -d
# Verify nginx can reach api
docker compose exec nginx wget -qO- http://api:3000/health
# {"status":"healthy"}
# Verify nginx CANNOT reach the database
docker compose exec nginx wget -qO- http://db:5432 2>&1
# wget: bad address 'db'
# Verify api CAN reach the database
docker compose exec api sh -c "nc -zv db 5432"
# db (172.20.0.4:5432) open
# Verify backend network has no internet
docker compose exec db wget -qO- http://example.com 2>&1
# wget: bad address 'example.com'
Common Issues and Troubleshooting
1. Connection Refused Between Containers
Error: connect ECONNREFUSED 172.18.0.3:5432
The container is reachable (DNS resolved) but the service is not ready. Use health checks with depends_on:
depends_on:
postgres:
condition: service_healthy
Also check that the service inside the container is binding to 0.0.0.0, not 127.0.0.1. A service listening on localhost inside a container is unreachable from other containers.
2. DNS Resolution Failure
Error: getaddrinfo ENOTFOUND postgres
The container name is not resolvable. Verify both containers are on the same network:
docker network inspect myapp_backend | grep -A5 Containers
If using standalone docker run, ensure both containers specify --network myapp-network. In Docker Compose, check that services share a network definition.
3. Port Conflict with Host Services
Error response from daemon: driver failed programming external connectivity:
Bind for 0.0.0.0:5432 failed: port is already allocated
A host service is using the same port. Options:
- Stop the host service: sudo systemctl stop postgresql
- Remap the port: "5433:5432"
- Skip host mapping entirely if you only need container-to-container communication
4. Containers Cannot Reach External Services
Error: getaddrinfo EAI_AGAIN registry.npmjs.org
If using internal: true on a network, containers cannot reach the internet. Move the build step to a non-internal network, or remove internal: true during dependency installation.
Also check Docker Desktop DNS settings. Docker uses the host's DNS by default. If your corporate DNS blocks certain domains, configure Docker to use 8.8.8.8:
{
"dns": ["8.8.8.8", "8.8.4.4"]
}
Add this to Docker's daemon.json (Settings > Docker Engine in Docker Desktop).
5. Network Persists After Compose Down
docker compose down # Removes containers but keeps networks
docker compose down -v # Also removes volumes
docker network prune # Remove all unused networks
Stale networks can cause subnet conflicts. If you see Pool overlaps with other one on this address space, prune unused networks.
Best Practices
- Always use user-defined bridge networks. The default bridge lacks DNS resolution and is less secure. Define explicit networks in your docker-compose.yml.
- Use internal: true for data layer networks. Your database and cache containers rarely need internet access. Internal networks prevent accidental data exfiltration.
- Bind sensitive ports to localhost only. Use 127.0.0.1:5432:5432 instead of 5432:5432 for databases and debug ports.
- Name your networks semantically. Use frontend, backend, monitoring, not network1, network2. Future you will thank present you.
- Use network aliases for flexibility. An alias like db lets you swap the underlying database container without changing application code.
- Keep connection pools sized appropriately. Inside containers, 20 connections to PostgreSQL is typically sufficient. Over-provisioning pools exhausts database connections across replicas.
- Use the netshoot image for debugging. It has every network tool you need and costs nothing when not running.
- Document your network topology. A simple diagram showing which services can talk to which networks saves hours of debugging for new team members.