CI/CD with Containers: Build, Test, Deploy
Complete guide to building CI/CD pipelines for containerized Node.js applications, covering multi-stage builds, testing in containers, registry management, and deployment strategies.
Containers transform CI/CD from "works on the build server" to "works everywhere." When your build, test, and production environments all run the same container image, you eliminate an entire class of deployment bugs. This guide covers the complete pipeline — from building optimized images and running tests inside containers to pushing to registries and deploying to production. Every pattern here is battle-tested with Node.js applications.
Prerequisites
- Docker and Docker Compose
- Familiarity with at least one CI/CD platform (GitHub Actions, Azure Pipelines, GitLab CI)
- A container registry account (Docker Hub, GitHub Container Registry, or cloud provider registry)
- Basic understanding of Dockerfiles and image layers
Building Container Images in CI
Multi-Stage Dockerfile for CI
A single Dockerfile should serve development, testing, and production. Multi-stage builds make this clean.
# Stage 1: Base with dependencies
FROM node:20-alpine AS base
WORKDIR /app
COPY package*.json ./
# Stage 2: Development (includes devDependencies)
FROM base AS development
RUN npm ci
COPY . .
# Stage 3: Test runner
FROM development AS test
RUN npm run lint
RUN npm test
# Stage 4: Production dependencies only
FROM base AS production-deps
RUN npm ci --omit=dev
# Stage 5: Production image
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=production-deps /app/node_modules ./node_modules
COPY . .
EXPOSE 3000
USER node
CMD ["node", "app.js"]
The CI pipeline targets specific stages:
# Run tests (builds through test stage)
docker build --target test -t myapp:test .
# Build production image (skips test stage entirely)
docker build --target production -t myapp:prod .
The test stage runs linting and tests during the build. If tests fail, the build fails. The production stage does not depend on the test stage, so devDependencies never reach the final image. Because the production stage still runs COPY . ., use a .dockerignore file to keep test files and other development artifacts out of it.
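A .dockerignore file also shrinks the build context and stops local artifacts from invalidating the layer cache. A typical starting point — adjust the entries to your project:

```
# .dockerignore
node_modules
npm-debug.log
.git
.github
coverage
test-results
docker-compose*.yml
```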
Build Caching Strategies
Docker layer caching dramatically speeds up CI builds. The key is ordering your Dockerfile so frequently-changing layers come last.
# Good: dependencies cached separately from source code
COPY package*.json ./
RUN npm ci
COPY . .
# Bad: any source change invalidates npm install cache
COPY . .
RUN npm ci
In CI, reuse layers from a previously pushed image with BuildKit's inline cache:
# BuildKit inline cache: embed cache metadata in the pushed image,
# then pull it back with --cache-from on later builds
DOCKER_BUILDKIT=1 docker build \
  --cache-from myregistry.com/myapp:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t myapp:latest .
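BuildKit also supports true cache mounts, which persist package-manager caches on the builder between builds — even when the lockfile layer is invalidated, npm re-downloads only what changed. A sketch (requires BuildKit; the syntax directive enables the --mount flag):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# /root/.npm is npm's download cache; the mount lives on the builder,
# not in the image, so the final layers stay small
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
```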
GitHub Actions with Docker layer caching:
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3
- name: Build and push
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ghcr.io/myorg/myapp:${{ github.sha }}
    cache-from: type=gha
    cache-to: type=gha,mode=max
The type=gha cache backend stores layers in GitHub Actions cache, surviving across workflow runs.
Image Tagging Strategy
Tag images with both the git SHA and semantic version:
# Tag with git SHA for traceability
docker build -t myregistry.com/myapp:${GIT_SHA} .
# Tag with semantic version for releases
docker tag myregistry.com/myapp:${GIT_SHA} myregistry.com/myapp:1.2.3
# Tag latest for convenience
docker tag myregistry.com/myapp:${GIT_SHA} myregistry.com/myapp:latest
Never deploy using the latest tag in production. Use the git SHA or a specific version so you can track exactly which code is running.
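In a CI script, the SHA usually comes from the platform (GITHUB_SHA on GitHub Actions, Build.SourceVersion on Azure Pipelines). A small POSIX-sh sketch that derives full and short tags; the registry path is a placeholder:

```shell
# Fall back to a dummy SHA so the snippet also runs outside CI
GIT_SHA="${GIT_SHA:-0123456789abcdef0123456789abcdef01234567}"
SHORT_SHA=$(printf '%s' "$GIT_SHA" | cut -c1-7)
IMAGE="myregistry.com/myapp"   # placeholder registry path

FULL_TAG="${IMAGE}:${GIT_SHA}"
SHORT_TAG="${IMAGE}:${SHORT_SHA}"
printf '%s\n%s\n' "$FULL_TAG" "$SHORT_TAG"
```

Short tags are convenient for humans; keep the full-SHA tag as the one your deploy step references.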
Testing in Containers
Unit Tests Inside the Build
The simplest approach runs tests as part of the Docker build:
FROM node:20-alpine AS test
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm test
If tests fail, the build fails — no broken images are ever pushed. The downside: test output is mixed into build output and harder to parse.
Integration Tests with Docker Compose
Integration tests need real databases and services. Docker Compose provides them.
# docker-compose.test.yml
version: "3.8"
services:
  test:
    build:
      context: .
      target: development
    command: npm test
    environment:
      - NODE_ENV=test
      - DATABASE_URL=postgresql://test:test@postgres:5432/myapp_test
      - REDIS_URL=redis://redis:6379
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: myapp_test
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test"]
      interval: 3s
      timeout: 3s
      retries: 10
    tmpfs:
      - /var/lib/postgresql/data # RAM disk for speed
  redis:
    image: redis:7-alpine
    tmpfs:
      - /data
Run the test suite:
# Run tests and exit with the test container's exit code
docker compose -f docker-compose.test.yml run --rm --build test
EXIT_CODE=$?
# Cleanup
docker compose -f docker-compose.test.yml down -v
exit $EXIT_CODE
Note the tmpfs mounts on the database containers. Tests run 3-5x faster with RAM-backed storage because there is no disk I/O.
Test Result Extraction
Most CI platforms ingest test results as JUnit XML. Mount a volume to extract the reports from the test container:
services:
  test:
    volumes:
      - ./test-results:/app/test-results
    command: npx jest --ci --reporters=default --reporters=jest-junit
    environment:
      - JEST_JUNIT_OUTPUT_DIR=/app/test-results
# GitHub Actions
- name: Run tests
  run: docker compose -f docker-compose.test.yml run --rm test
- name: Upload test results
  uses: actions/upload-artifact@v4
  if: always()
  with:
    name: test-results
    path: test-results/
Container Registry Management
Pushing to Registries
# Docker Hub
echo $DOCKER_TOKEN | docker login -u $DOCKER_USER --password-stdin
docker push myuser/myapp:1.0.0
# GitHub Container Registry
echo $GITHUB_TOKEN | docker login ghcr.io -u $GITHUB_ACTOR --password-stdin
docker push ghcr.io/myorg/myapp:1.0.0
# AWS ECR
aws ecr get-login-password | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0.0
# Azure Container Registry
az acr login --name myregistry
docker push myregistry.azurecr.io/myapp:1.0.0
Image Scanning
Scan images for vulnerabilities before pushing to production:
# Docker Scout (built into Docker Desktop)
docker scout cves myapp:latest
# Trivy (open source)
trivy image myapp:latest
# Snyk
snyk container test myapp:latest
Integrate scanning into CI:
# GitHub Actions with Trivy
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: ghcr.io/myorg/myapp:${{ github.sha }}
    format: table
    exit-code: 1
    severity: CRITICAL,HIGH
Setting exit-code: 1 fails the pipeline if critical or high vulnerabilities are found.
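The same gate can run locally before a push. A sketch using the trivy CLI — it skips gracefully when trivy is not installed, and the optional --ignore-unfixed flag suppresses findings that have no released fix:

```shell
IMAGE_REF="${1:-myapp:latest}"   # image to scan; default is illustrative

if command -v trivy >/dev/null 2>&1; then
  # --exit-code 1 makes trivy return non-zero when findings match,
  # so a CI step running this script fails automatically
  trivy image --exit-code 1 --severity CRITICAL,HIGH --ignore-unfixed "$IMAGE_REF" \
    && SCAN_STATUS=0 || SCAN_STATUS=$?
else
  echo "trivy not installed; skipping scan"
  SCAN_STATUS=0
fi

echo "scan exit status: $SCAN_STATUS"
```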
Pipeline Examples
GitHub Actions
name: CI/CD
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: |
          docker compose -f docker-compose.test.yml run --rm --build test
          docker compose -f docker-compose.test.yml down -v
  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            # format=long emits the full 40-char SHA, matching github.sha below
            type=sha,format=long,prefix=
            type=raw,value=latest
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          target: production
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Deploy to production
        run: |
          curl -X POST "${{ secrets.DEPLOY_WEBHOOK_URL }}" \
            -H "Authorization: Bearer ${{ secrets.DEPLOY_TOKEN }}" \
            -H "Content-Type: application/json" \
            -d '{"image": "${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ github.sha }}"}'
Azure Pipelines
trigger:
  branches:
    include:
      - main
variables:
  imageRepository: myapp
  containerRegistry: myregistry.azurecr.io
  dockerfilePath: Dockerfile
  tag: $(Build.SourceVersion)
stages:
  - stage: Test
    jobs:
      - job: IntegrationTests
        pool:
          vmImage: ubuntu-latest
        steps:
          - task: DockerCompose@0
            inputs:
              containerregistrytype: Container Registry
              dockerComposeFile: docker-compose.test.yml
              action: Run a specific service
              serviceName: test
              buildImages: true
              detached: false
  - stage: Build
    dependsOn: Test
    jobs:
      - job: BuildAndPush
        pool:
          vmImage: ubuntu-latest
        steps:
          # Docker@2's buildAndPush command ignores the arguments input,
          # so build and push are separate steps to apply --target
          - task: Docker@2
            inputs:
              containerRegistry: myACRConnection
              repository: $(imageRepository)
              command: build
              Dockerfile: $(dockerfilePath)
              buildContext: .
              arguments: --target production
              tags: |
                $(tag)
                latest
          - task: Docker@2
            inputs:
              containerRegistry: myACRConnection
              repository: $(imageRepository)
              command: push
              tags: |
                $(tag)
                latest
  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: Production
        pool:
          vmImage: ubuntu-latest
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - script: |
                    az webapp config container set \
                      --name myapp \
                      --resource-group mygroup \
                      --container-image-name $(containerRegistry)/$(imageRepository):$(tag)
Deployment Strategies
Rolling Update
# Kubernetes deployment with rolling update
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myregistry.com/myapp:abc123
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
Rolling updates replace pods one at a time. maxUnavailable: 1 ensures at least 2 of 3 pods are always running. Readiness probes prevent traffic from routing to pods that have not finished starting.
Blue-Green with Docker Compose
For simpler setups without Kubernetes:
#!/bin/bash
# deploy.sh - Blue-Green deployment with Docker Compose
CURRENT=$(docker compose ps --format json | node -e "
var d = '';
process.stdin.on('data', function(c) { d += c; });
process.stdin.on('end', function() {
  // Compose prints a JSON array in older releases and NDJSON in newer ones
  var t = d.trim();
  var items = t.charAt(0) === '['
    ? JSON.parse(t)
    : t.split('\n').map(function(l) { return JSON.parse(l); });
  items.forEach(function(c) {
    if (c.Service === 'api-blue' && c.State === 'running') console.log('blue');
    if (c.Service === 'api-green' && c.State === 'running') console.log('green');
  });
});
")
if [ "$CURRENT" = "blue" ]; then
  NEW="green"
else
  NEW="blue"
fi
echo "Current: $CURRENT, Deploying: $NEW"
# Start new version
docker compose up -d api-$NEW
# Wait for health check (assumes the standby color publishes port 3001)
HEALTHY=0
for i in $(seq 1 30); do
  if curl -sf http://localhost:3001/health > /dev/null 2>&1; then
    echo "New version healthy"
    HEALTHY=1
    break
  fi
  sleep 2
done
if [ "$HEALTHY" -ne 1 ]; then
  echo "New version failed health checks; aborting"
  docker compose stop api-$NEW
  exit 1
fi
# Switch traffic (update nginx upstream)
sed -i "s/api-$CURRENT/api-$NEW/" nginx.conf
docker compose exec nginx nginx -s reload
# Stop old version
docker compose stop api-$CURRENT
echo "Deployment complete: $NEW is live"
Automated Rollback
// scripts/deploy-check.js
var http = require('http');

var healthUrl = process.argv[2] || 'http://localhost:3000/health';
var maxRetries = parseInt(process.argv[3], 10) || 10;
var retryDelay = parseInt(process.argv[4], 10) || 5000;
var attempt = 0;

function check() {
  attempt++;
  console.log('Health check attempt ' + attempt + '/' + maxRetries);
  http.get(healthUrl, function(res) {
    res.resume(); // drain the response so the socket is released
    if (res.statusCode === 200) {
      console.log('Deployment healthy');
      process.exit(0);
    } else {
      console.log('Unhealthy response: ' + res.statusCode);
      retry();
    }
  }).on('error', function(err) {
    console.log('Connection error: ' + err.message);
    retry();
  });
}

function retry() {
  if (attempt >= maxRetries) {
    console.error('Deployment failed health checks after ' + maxRetries + ' attempts');
    console.error('Initiating rollback...');
    process.exit(1);
  }
  setTimeout(check, retryDelay);
}

check();
# In CI pipeline
docker compose up -d api
if ! node scripts/deploy-check.js http://localhost:3000/health 10 5000; then
  echo "Rolling back..."
  # --no-build alone does not revert the image; redeploy the last known-good
  # tag, which your pipeline must track (PREVIOUS_TAG here is illustrative)
  IMAGE_TAG=$PREVIOUS_TAG docker compose up -d --no-build api
  exit 1
fi
Complete Working Example
A full CI/CD pipeline from build to deploy:
# .github/workflows/deploy.yml
name: Build, Test, Deploy
on:
  push:
    branches: [main]
env:
  REGISTRY: ghcr.io
  IMAGE: ghcr.io/${{ github.repository }}
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start test services
        run: docker compose -f docker-compose.test.yml up -d postgres redis
      - name: Wait for services
        run: |
          for i in $(seq 1 30); do
            docker compose -f docker-compose.test.yml exec postgres pg_isready && break
            sleep 1
          done
      - name: Run tests
        run: docker compose -f docker-compose.test.yml run --rm --build test
      - name: Cleanup
        if: always()
        run: docker compose -f docker-compose.test.yml down -v
  build:
    needs: test
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ github.sha }}
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - name: Login to registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          target: production
          push: true
          tags: |
            ${{ env.IMAGE }}:${{ github.sha }}
            ${{ env.IMAGE }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Scan image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.IMAGE }}:${{ github.sha }}
          format: table
          exit-code: 1
          severity: CRITICAL
  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        run: |
          ssh deploy@${{ secrets.SERVER_HOST }} << 'EOF'
          docker pull ${{ env.IMAGE }}:${{ github.sha }}
          cd /opt/myapp
          IMAGE_TAG=${{ github.sha }} docker compose up -d --no-build
          sleep 10
          curl -sf http://localhost:3000/health || { echo "Health check failed; redeploy the previous IMAGE_TAG"; exit 1; }
          EOF
Common Issues and Troubleshooting
1. Build Cache Not Working in CI
#6 [3/5] RUN npm ci
#6 sha256:abc123... 45.2s
# Every build takes 45+ seconds for npm install
CI runners start fresh each time. Enable remote caching:
- uses: docker/build-push-action@v5
  with:
    cache-from: type=gha
    cache-to: type=gha,mode=max
Without this, every build downloads and installs all dependencies from scratch.
2. Tests Pass Locally but Fail in CI
Error: connect ECONNREFUSED 127.0.0.1:5432
In CI, services run in separate containers. Use service names, not localhost:
// Wrong for container-based CI
var pool = new pg.Pool({ host: 'localhost' });
// Correct
var pool = new pg.Pool({ host: process.env.DATABASE_HOST || 'postgres' });
3. Image Push Denied
denied: requested access to the resource is denied
Authentication issues. Verify your credentials and repository permissions:
# Check current auth
docker login ghcr.io
cat ~/.docker/config.json
# Verify image name matches registry permissions
# Must match: ghcr.io/YOUR_USERNAME/YOUR_REPO
4. Container Runs in CI but Exits Immediately
Container exited with code 1
No logs available
The process is crashing before it writes any logs. Debug by running the container interactively:
docker run --rm -it myapp:latest sh
# Now you can manually run commands and see errors
Common causes: missing environment variables, wrong working directory, file permissions.
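A fail-fast startup check turns that silent exit into an actionable log line. A sketch — the variable list is illustrative:

```javascript
// Returns the names of required variables missing from the given env,
// so the caller decides how to log and exit
function checkEnv(required, env) {
  return required.filter(function (name) { return !env[name]; });
}

// Example: a container started without REDIS_URL
var missing = checkEnv(
  ['DATABASE_URL', 'REDIS_URL'],
  { DATABASE_URL: 'postgresql://db:5432/myapp' }
);

if (missing.length > 0) {
  console.error('Missing required environment variables: ' + missing.join(', '));
  // in a real app: process.exit(1);
}
```

Run the check at the top of app.js with process.env, before any connection attempts, so the container's exit reason is always in the logs.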
Best Practices
- Use multi-stage builds to separate concerns. Test, build, and production stages keep images lean and pipelines clear.
- Never push images that have not passed tests. Gate the push step behind a successful test job.
- Scan images for vulnerabilities before deployment. Block critical vulnerabilities from reaching production.
- Tag images with git SHA for traceability. You should always be able to map a running container back to a specific commit.
- Use tmpfs for test databases in CI. RAM-backed storage makes integration tests 3-5x faster.
- Cache aggressively. Layer caching, BuildKit cache mounts, and registry-based caching all reduce build times.
- Automate rollbacks with health checks. If the new deployment fails health checks, automatically revert to the previous image.
- Keep CI and local development Dockerfiles identical. One Dockerfile with multiple stages eliminates "works in CI but not locally" bugs.