Multi-Architecture Docker Images
Build Docker images for multiple CPU architectures including AMD64 and ARM64, covering buildx, manifest lists, cross-compilation strategies, and CI/CD integration for Node.js applications.
ARM is everywhere. AWS Graviton instances cost 20% less than equivalent x86 instances. Apple Silicon Macs run ARM natively. Raspberry Pi clusters power edge computing. If your Docker images only support AMD64, you are either paying too much for compute or forcing emulation on ARM machines. Multi-architecture images solve this — one image tag, multiple architectures, and Docker automatically pulls the right one. This guide covers building, testing, and distributing multi-arch Node.js images.
Prerequisites
- Docker Desktop v4.0+ or Docker Engine with buildx plugin
- Docker Hub, GHCR, or another registry account
- Familiarity with Dockerfiles and multi-stage builds
- Node.js project ready for containerization
Understanding Multi-Architecture Images
A multi-architecture image is actually a manifest list — a single tag that points to multiple platform-specific images.
# Inspect a multi-arch image
docker manifest inspect node:20-alpine
{
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:abc123...",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:def456...",
      "platform": { "architecture": "arm64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:ghi789...",
      "platform": { "architecture": "arm", "os": "linux", "variant": "v7" }
    }
  ]
}
When you run `docker pull node:20-alpine`, Docker reads this manifest list and pulls the image matching your CPU architecture. No configuration needed.
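The selection logic can be sketched in a few lines of Node.js. This is an illustration only; the real client also considers `os.version`, platform features, and variant fallbacks:

```javascript
// Simplified sketch of how a client picks an entry from a manifest list.
// Matches on os + architecture, and on variant when one is requested.
function selectManifest(manifestList, os, arch, variant) {
  return manifestList.manifests.find(function (m) {
    return (
      m.platform.os === os &&
      m.platform.architecture === arch &&
      (variant === undefined || m.platform.variant === variant)
    );
  });
}

var list = {
  manifests: [
    { digest: 'sha256:abc', platform: { architecture: 'amd64', os: 'linux' } },
    { digest: 'sha256:def', platform: { architecture: 'arm64', os: 'linux' } },
    { digest: 'sha256:ghi', platform: { architecture: 'arm', os: 'linux', variant: 'v7' } }
  ]
};

console.log(selectManifest(list, 'linux', 'arm64').digest);      // sha256:def
console.log(selectManifest(list, 'linux', 'arm', 'v7').digest);  // sha256:ghi
```

An ARM64 host resolves to the ARM64 digest; a platform with no matching entry gets nothing, which is why pulling a single-arch image on the wrong host fails or falls back to emulation.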
Setting Up Docker Buildx
Buildx is Docker's CLI plugin for extended build capabilities, including multi-platform builds.
# Check if buildx is available
docker buildx version
# github.com/docker/buildx v0.12.0
# List available builders
docker buildx ls
# NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
# default docker running linux/amd64
The default builder only supports your native platform. Create a multi-platform builder:
# Create a builder with multi-platform support
docker buildx create \
--name multiarch \
--driver docker-container \
--platform linux/amd64,linux/arm64,linux/arm/v7 \
--use
# Bootstrap the builder (downloads QEMU emulators)
docker buildx inspect --bootstrap
# Verify platforms
docker buildx ls
# NAME/NODE DRIVER/ENDPOINT STATUS PLATFORMS
# multiarch docker-container running linux/amd64, linux/arm64, linux/arm/v7
The docker-container driver runs builds inside a BuildKit container, which supports QEMU emulation for cross-platform builds.
QEMU Setup
QEMU enables running ARM binaries on x86 hosts (and vice versa). Docker Desktop includes QEMU automatically. On Linux:
# Install QEMU user-static binaries
docker run --privileged --rm tonistiigi/binfmt --install all
# Verify
ls /proc/sys/fs/binfmt_misc/
# qemu-aarch64 qemu-arm qemu-riscv64 ...
Building Multi-Architecture Images
Basic Multi-Platform Build
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t myregistry.com/myapp:latest \
--push \
.
The --push flag is required because the classic Docker image store only holds one platform per tag, so multi-platform builds cannot be loaded locally (unless you enable the containerd image store in Docker Desktop or the daemon). They must be pushed to a registry.
For local testing without pushing:
# Build for a specific platform and load locally
docker buildx build \
--platform linux/arm64 \
-t myapp:arm64 \
--load \
.
# Build for native platform and load
docker buildx build \
-t myapp:latest \
--load \
.
Dockerfile for Multi-Architecture
Most Node.js Dockerfiles work without modification:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
USER node
CMD ["node", "app.js"]
The node:20-alpine base image is already multi-arch. When buildx builds for ARM64, it pulls the ARM64 variant of node:20-alpine automatically.
Optimized Multi-Stage Build
# syntax=docker/dockerfile:1
# Use BUILDPLATFORM for build stage (runs natively, fast)
FROM --platform=$BUILDPLATFORM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci --only=production
COPY . .
# Final stage uses target platform
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app .
EXPOSE 3000
USER node
CMD ["node", "app.js"]
The --platform=$BUILDPLATFORM on the build stage is critical for performance. Without it, npm ci runs under QEMU emulation on cross-platform builds, which is 5-20x slower. With it, the build runs natively, and only the final image is for the target platform.
This works because Node.js dependencies without native modules are platform-independent. The node_modules from an AMD64 build work on ARM64 — JavaScript is JavaScript.
Handling Native Modules
Native modules (C/C++ addons) break the "build once, copy everywhere" pattern. They must be compiled for each target architecture.
Option 1: Let npm Install Handle It
# Don't use BUILDPLATFORM — let each platform build natively
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
CMD ["node", "app.js"]
This is simple but slow. QEMU emulates the entire npm ci process for non-native platforms.
Option 2: Platform-Specific Build Stages
# syntax=docker/dockerfile:1
FROM --platform=$BUILDPLATFORM node:20-alpine AS source
WORKDIR /app
COPY . .
# Build native modules for target platform
FROM node:20-alpine AS deps
ARG TARGETPLATFORM
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm,id=npm-${TARGETPLATFORM} \
npm ci --only=production
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY --from=source /app/src ./src
COPY --from=source /app/app.js ./
EXPOSE 3000
USER node
CMD ["node", "app.js"]
The deps stage runs under emulation (slow), but only for npm ci. Source file copying runs natively (fast). The cache mount uses TARGETPLATFORM in its ID so AMD64 and ARM64 caches do not collide.
Option 3: Pre-Built Binaries
Some packages like sharp and esbuild distribute pre-built binaries for multiple platforms:
FROM --platform=$BUILDPLATFORM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
# sharp downloads pre-built binaries for the requested platform when
# given its documented cross-install flags (separate --platform and --arch)
RUN npm ci --only=production --platform=linuxmusl --arch=arm64
# or --arch=x64 for AMD64 targets
COPY . .
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app .
CMD ["node", "app.js"]
Check your native dependencies' documentation for cross-platform installation options.
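As a quick illustration of the naming convention such packages tend to use, here is a hypothetical helper that composes a platform identifier in the `linuxmusl-arm64` style shown above. The function name and mapping are illustrative; each package defines its own scheme, so always check its docs:

```javascript
// Hypothetical helper: compose a prebuilt-binary platform identifier in
// the "linuxmusl-arm64" style. Illustrative only; real packages (sharp,
// esbuild, etc.) each document their own naming.
function prebuiltPlatformId(os, libc, arch) {
  var base = (os === 'linux' && libc === 'musl') ? 'linuxmusl' : os;
  return base + '-' + arch;
}

console.log(prebuiltPlatformId('linux', 'musl', 'arm64')); // linuxmusl-arm64
console.log(prebuiltPlatformId('linux', 'glibc', 'x64'));  // linux-x64
console.log(prebuiltPlatformId('darwin', null, 'arm64'));  // darwin-arm64
```

Alpine images use musl rather than glibc, which is why the identifiers above say `linuxmusl` instead of plain `linux`; a binary built for one libc will not load under the other.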
Testing Multi-Architecture Images
Local Testing with Platform Flag
# Test ARM64 image on AMD64 host (via QEMU)
docker run --platform linux/arm64 --rm myapp:latest node -e "
console.log('Architecture:', process.arch);
console.log('Platform:', process.platform);
"
# Architecture: arm64
# Platform: linux
# Test AMD64 image on ARM64 host (Apple Silicon)
docker run --platform linux/amd64 --rm myapp:latest node -e "
console.log('Architecture:', process.arch);
console.log('Platform:', process.platform);
"
# Architecture: x64
# Platform: linux
Automated Platform Testing
// scripts/test-platforms.js
var execSync = require('child_process').execSync;
var platforms = ['linux/amd64', 'linux/arm64'];
var image = process.argv[2] || 'myapp:latest';
var failed = false;
platforms.forEach(function(platform) {
console.log('\nTesting ' + platform + '...');
try {
var output = execSync(
'docker run --platform ' + platform + ' --rm ' + image +
' node -e "console.log(JSON.stringify({arch: process.arch, version: process.version}))"',
{ encoding: 'utf8' }
);
var result = JSON.parse(output.trim());
console.log(' OK: ' + result.arch + ' / ' + result.version);
} catch (err) {
console.error(' FAIL: ' + err.message);
failed = true;
}
});
if (failed) {
console.error('\nSome platforms failed!');
process.exit(1);
} else {
console.log('\nAll platforms passed.');
}
node scripts/test-platforms.js myregistry.com/myapp:latest
# Testing linux/amd64...
# OK: x64 / v20.11.0
# Testing linux/arm64...
# OK: arm64 / v20.11.0
# All platforms passed.
Verifying Manifest Lists
# Check what platforms an image supports
docker manifest inspect myregistry.com/myapp:latest | node -e "
var data = '';
process.stdin.on('data', function(c) { data += c; });
process.stdin.on('end', function() {
var manifest = JSON.parse(data);
manifest.manifests.forEach(function(m) {
console.log(m.platform.architecture + '/' + m.platform.os +
(m.platform.variant ? '/' + m.platform.variant : '') +
' (' + m.digest.substring(0, 19) + ')');
});
});
"
# amd64/linux (sha256:abc123def45)
# arm64/linux (sha256:ghi789jkl01)
CI/CD Integration
GitHub Actions
name: Multi-Arch Build

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:${{ github.sha }}
            ghcr.io/${{ github.repository }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
Azure Pipelines
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - task: DockerInstaller@0
    inputs:
      dockerVersion: '24.0'

  - script: |
      docker run --privileged --rm tonistiigi/binfmt --install all
      docker buildx create --name multiarch --use
      docker buildx inspect --bootstrap
    displayName: 'Setup buildx'

  - script: |
      echo $(REGISTRY_PASSWORD) | docker login $(REGISTRY) -u $(REGISTRY_USER) --password-stdin
      docker buildx build \
        --platform linux/amd64,linux/arm64 \
        -t $(REGISTRY)/myapp:$(Build.SourceVersion) \
        -t $(REGISTRY)/myapp:latest \
        --push .
    displayName: 'Build and push multi-arch'
Performance Considerations
Build Time Comparison
# Native AMD64 build on AMD64 host
time docker buildx build --platform linux/amd64 --load .
# real 0m35s
# ARM64 build on AMD64 host (QEMU emulation)
time docker buildx build --platform linux/arm64 --load .
# real 2m45s (5-8x slower)
# Both platforms
time docker buildx build --platform linux/amd64,linux/arm64 --push .
# real 3m10s (parallel, limited by slowest)
Speeding Up Cross-Platform Builds
- Use `--platform=$BUILDPLATFORM` for non-native-module stages. Run JavaScript bundling, linting, and copying natively.
- Use BuildKit cache mounts. Cached npm downloads persist across builds: `RUN --mount=type=cache,target=/root/.npm npm ci`
- Use CI runners matching target platforms. GitHub Actions offers `ubuntu-latest` (AMD64) and `ubuntu-24.04-arm` (ARM64). Build natively on each and create the manifest manually.
- Build platforms in parallel on separate runners:
jobs:
  build-amd64:
    runs-on: ubuntu-latest
    steps:
      # checkout, buildx setup, and registry login omitted for brevity
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/amd64
          tags: ghcr.io/myorg/myapp:amd64
          push: true
  build-arm64:
    runs-on: ubuntu-24.04-arm
    steps:
      # checkout, buildx setup, and registry login omitted for brevity
      - uses: docker/build-push-action@v5
        with:
          platforms: linux/arm64
          tags: ghcr.io/myorg/myapp:arm64
          push: true
  manifest:
    needs: [build-amd64, build-arm64]
    runs-on: ubuntu-latest
    steps:
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - run: |
          docker manifest create ghcr.io/myorg/myapp:latest \
            ghcr.io/myorg/myapp:amd64 \
            ghcr.io/myorg/myapp:arm64
          docker manifest push ghcr.io/myorg/myapp:latest
This eliminates QEMU entirely. Each platform builds natively, then a manifest list combines them.
Complete Working Example
# syntax=docker/dockerfile:1
# Build stage runs on build host's native architecture
FROM --platform=$BUILDPLATFORM node:20-alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "Building on $BUILDPLATFORM for $TARGETPLATFORM"
WORKDIR /app
COPY package*.json ./
# Pure JS deps can be installed natively (fast)
RUN --mount=type=cache,target=/root/.npm \
npm ci --only=production --ignore-scripts
COPY . .
# Run any build steps natively
RUN npm run build --if-present
# Native modules stage — must run on target platform
FROM node:20-alpine AS native-deps
WORKDIR /app
COPY package*.json ./
# Only install packages with native bindings
RUN --mount=type=cache,target=/root/.npm \
npm ci --only=production
# Production image
FROM node:20-alpine AS production
RUN apk add --no-cache dumb-init
WORKDIR /app
# Copy native node_modules from target-platform stage
COPY --from=native-deps /app/node_modules ./node_modules
# Copy everything else from native build stage
COPY --from=build /app/src ./src
COPY --from=build /app/app.js ./
COPY --from=build /app/views ./views
USER node
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1
CMD ["dumb-init", "node", "app.js"]
# Build and push multi-arch
docker buildx build \
--platform linux/amd64,linux/arm64 \
-t myregistry.com/myapp:1.0.0 \
-t myregistry.com/myapp:latest \
--push .
# Verify
docker manifest inspect myregistry.com/myapp:latest
// app.js - reports its architecture
var express = require('express');
var app = express();
app.get('/health', function(req, res) {
res.json({
status: 'healthy',
arch: process.arch,
platform: process.platform,
nodeVersion: process.version,
uptime: process.uptime()
});
});
app.listen(3000, function() {
console.log('Server running on port 3000 (' + process.arch + ')');
});
Common Issues and Troubleshooting
1. QEMU Segfault During Build
qemu-aarch64: Segmentation fault
npm ci failed with exit code 139
QEMU emulation is imperfect. Some native modules crash during cross-compilation. Solutions:
- Use native ARM runners instead of QEMU
- Use packages with pre-built binaries (sharp, esbuild)
- Split the build: compile native modules on ARM, everything else on AMD64
2. "image with reference was found but does not match the specified platform"
WARNING: image with reference myapp:latest was found but does not match
the specified platform: wanted linux/arm64, actual linux/amd64
You pulled or loaded a single-platform image. Use --platform to specify:
docker pull --platform linux/arm64 myregistry.com/myapp:latest
3. Manifest List Not Created
docker manifest inspect myapp:latest
# no such manifest
Multi-platform images require --push during build. The --load flag only supports single platforms. Push to a registry, then inspect.
4. Build Cache Not Shared Between Platforms
# ARM64 build doesn't use AMD64 cache
Cache keys include the platform. Each architecture has its own cache. Use platform-specific cache IDs:
# TARGETPLATFORM must be declared with ARG in the same stage
ARG TARGETPLATFORM
RUN --mount=type=cache,target=/root/.npm,id=npm-$TARGETPLATFORM npm ci
Best Practices
- Always build for both AMD64 and ARM64. ARM adoption is accelerating. Future-proof your images now.
- Use `--platform=$BUILDPLATFORM` for non-native stages. JavaScript bundling, file copying, and linting do not need emulation.
- Test on both architectures in CI. A passing AMD64 build does not guarantee ARM64 works, especially with native modules.
- Use native CI runners when available. QEMU is convenient but slow. Native ARM runners eliminate emulation entirely.
- Pin base image digests for reproducibility. `node:20-alpine@sha256:abc123` ensures the exact same base on every build.
- Include architecture in health check responses. Makes it easy to verify which image variant is running in production.
- Use BuildKit cache mounts per platform. Separate cache IDs prevent cross-platform cache corruption.
- Monitor performance differences between architectures. ARM and AMD64 may have different performance characteristics for your workload.