Bitbucket Pipelines for Node.js Projects
Complete guide to configuring Bitbucket Pipelines for Node.js applications with Docker builds, deployments, and Atlassian ecosystem integration
Bitbucket Pipelines is Atlassian's built-in CI/CD service that runs directly inside your Bitbucket repository. If your team already lives in the Atlassian ecosystem — Jira, Confluence, Bitbucket — Pipelines is the path of least resistance for automating your Node.js builds, tests, and deployments. Unlike Jenkins or standalone CI servers, there is nothing to install, nothing to maintain, and the configuration lives alongside your code in a single YAML file.
Prerequisites
Before diving in, make sure you have the following in place:
- A Bitbucket Cloud account with a repository containing a Node.js project
- Basic familiarity with YAML syntax
- A package.json with working test and build scripts (see the sample scripts block after this list)
- Docker Hub account (for container-based deployments)
- AWS account with IAM credentials (for the deployment example)
- Bitbucket Pipelines enabled in your repository settings (Settings > Pipelines > Enable)
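The test and build scripts could be as simple as this package.json excerpt (the specific test runner and bundler here are examples, not requirements):
{
  "scripts": {
    "lint": "eslint .",
    "test": "mocha --recursive --exit",
    "test:unit": "mocha test/unit --recursive --exit",
    "build": "webpack --mode production"
  }
}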
The bitbucket-pipelines.yml File
Everything in Bitbucket Pipelines starts with a bitbucket-pipelines.yml file at the root of your repository. This file defines what happens when code is pushed, which branches trigger which workflows, and how your application gets deployed.
Here is the simplest possible pipeline for a Node.js project:
image: node:20
pipelines:
default:
- step:
name: Build and Test
caches:
- node
script:
- npm ci
- npm test
The image directive at the top sets the default Docker image for all steps. Every step runs in a fresh Docker container, which means your build environment is completely isolated and reproducible. The default pipeline runs on every push to every branch unless a more specific pipeline matches.
YAML Structure and Key Directives
The top-level keys you will work with most are:
image: node:20 # Default Docker image
options: # Global options
max-time: 30 # Max build time in minutes
size: 2x # Double memory (8GB instead of 4GB)
docker: true # Enable Docker daemon in builds
definitions: # Reusable components
caches: {} # Custom cache definitions
services: {} # Service containers (databases, etc.)
steps: [] # Reusable step definitions
pipelines:
default: [] # Runs on all branches (fallback)
branches: {} # Branch-specific pipelines
tags: {} # Tag-triggered pipelines
pull-requests: {} # PR-specific pipelines
custom: {} # Manually triggered pipelines
The indentation matters. YAML is whitespace-sensitive, and a misplaced space will break your pipeline with a cryptic parsing error.
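As a minimal illustration, these two fragments differ only in indentation, and only the second one parses:
# Broken: script is nested under name instead of being a sibling key,
# which fails with a parse error
- step:
    name: Test
      script:
        - npm test

# Correct: name and script are siblings under step
- step:
    name: Test
    script:
      - npm test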
Pipeline Definitions
Default Pipeline
The default pipeline is your catch-all. It runs on any push that does not match a branch, tag, or pull-request pattern:
pipelines:
default:
- step:
name: Run Tests
caches:
- node
script:
- npm ci
- npm test
Branch Pipelines
Branch pipelines let you run different workflows for different branches. This is where environment-specific deployments happen:
pipelines:
branches:
develop:
- step:
name: Test
script:
- npm ci
- npm test
- step:
name: Deploy to Staging
deployment: staging
script:
- npm ci --production
- pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: "us-east-1"
APPLICATION_NAME: "my-node-app"
ENVIRONMENT_NAME: "my-node-app-staging"
ZIP_FILE: "app.zip"
master:
- step:
name: Test
script:
- npm ci
- npm test
- step:
name: Deploy to Production
deployment: production
trigger: manual
script:
- npm ci --production
- pipe: atlassian/aws-elasticbeanstalk-deploy:1.0.2
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: "us-east-1"
APPLICATION_NAME: "my-node-app"
ENVIRONMENT_NAME: "my-node-app-production"
ZIP_FILE: "app.zip"
Notice the trigger: manual on the production deployment step. This creates a gate — someone has to click a button in the Bitbucket UI to promote to production. That one line has saved me from accidental production deployments more times than I can count.
Tag Pipelines
Tag pipelines are perfect for release workflows. When you tag a commit, a specific pipeline runs:
pipelines:
tags:
'v*':
- step:
name: Build Release
script:
- npm ci
- npm run build
- npm pack
artifacts:
- "*.tgz"
- step:
name: Publish to NPM
trigger: manual
script:
- npm publish *.tgz
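Note that npm publish needs registry credentials inside the build container. One common approach is to write an .npmrc from a secured repository variable before publishing (the NPM_TOKEN variable name is an assumption; use whatever secured variable you configured):
- step:
    name: Publish to NPM
    trigger: manual
    script:
      # NPM_TOKEN is a secured repository variable holding an npm automation token
      - echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > ~/.npmrc
      - npm publish *.tgz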
Custom Pipelines
Custom pipelines are manually triggered from the Bitbucket UI or via the API. They are useful for one-off tasks like database migrations or cache invalidation:
pipelines:
custom:
run-migrations:
- step:
name: Run Database Migrations
deployment: production
script:
- npm ci
- node scripts/migrate.js
seed-database:
- step:
name: Seed Test Data
deployment: staging
script:
- npm ci
- node scripts/seed.js
You can trigger custom pipelines from the command line using the Bitbucket API:
curl -X POST -u username:app_password \
  -H "Content-Type: application/json" \
  "https://api.bitbucket.org/2.0/repositories/{workspace}/{repo}/pipelines/" \
  -d '{
        "target": {
          "type": "pipeline_ref_target",
          "ref_type": "branch",
          "ref_name": "master",
          "selector": {
            "type": "custom",
            "pattern": "run-migrations"
          }
        }
      }'
Step Configuration and Docker Images
Each step runs in its own Docker container. You can override the default image per step, which is essential when your pipeline needs different runtimes:
pipelines:
default:
- step:
name: Test on Node 18
image: node:18
script:
- node --version
- npm ci
- npm test
- step:
name: Test on Node 20
image: node:20
script:
- node --version
- npm ci
- npm test
- step:
name: Lint Dockerfile
image: hadolint/hadolint:latest
script:
- hadolint Dockerfile
Steps also support services — background containers that run alongside your step. This is how you add a database to your test pipeline:
definitions:
services:
mongo:
image: mongo:7
variables:
MONGO_INITDB_ROOT_USERNAME: test
MONGO_INITDB_ROOT_PASSWORD: test
redis:
image: redis:7
pipelines:
default:
- step:
name: Integration Tests
caches:
- node
services:
- mongo
- redis
script:
- npm ci
- npm run test:integration
Service containers are accessible via localhost on their default ports. MongoDB on 27017, Redis on 6379 — no special networking configuration needed.
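For example, a small pre-test script can verify the service containers are reachable before the suite runs (a sketch assuming the mongodb and redis client packages and the credentials defined above; run it with node test/wait-for-services.js before npm run test:integration):
// test/wait-for-services.js - fail fast if the service containers are not reachable
const { MongoClient } = require('mongodb');
const { createClient } = require('redis');

async function checkServices() {
  // Credentials match the service definitions in bitbucket-pipelines.yml
  const mongo = await MongoClient.connect(
    process.env.MONGO_URL || 'mongodb://test:test@localhost:27017'
  );
  await mongo.close();

  const redis = createClient({ url: process.env.REDIS_URL || 'redis://localhost:6379' });
  await redis.connect();
  await redis.quit();

  console.log('MongoDB and Redis are reachable');
}

checkServices().catch(function (err) {
  console.error(err.message);
  process.exit(1);
});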
Caching Strategies for Node.js
Caching is critical for pipeline performance. Without caching, every step re-downloads your entire node_modules directory, which can add 30-60 seconds to each step.
Bitbucket provides a built-in node cache that covers node_modules:
pipelines:
default:
- step:
caches:
- node
script:
- npm ci
- npm test
The built-in cache is decent, but you can define custom caches for more control:
definitions:
caches:
npm-cache: ~/.npm
cypress-cache: ~/.cache/Cypress
build-cache: dist
pipelines:
default:
- step:
caches:
- npm-cache
- cypress-cache
script:
- npm ci
- npx cypress run
A few things to know about Bitbucket caches:
- Caches expire after 7 days of not being used
- Maximum cache size is 1 GB per cache
- Caches are scoped to a branch — the first build on a new branch starts cold
- You can manually clear caches from the Pipelines settings page
- Using npm ci instead of npm install is essential — it respects package-lock.json exactly and produces deterministic builds
For monorepos or workspaces, consider caching at the workspace root:
definitions:
caches:
node-workspace: node_modules
packages-api: packages/api/node_modules
packages-web: packages/web/node_modules
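A step in such a workspace can then declare all three caches (the package names are hypothetical):
pipelines:
  default:
    - step:
        name: Test All Packages
        caches:
          - node-workspace
          - packages-api
          - packages-web
        script:
          - npm ci
          - npm run test --workspaces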
Artifacts and Test Reports
Artifacts let you pass files between steps. Without artifacts, each step starts with only the repository files:
pipelines:
default:
- step:
name: Build
caches:
- node
script:
- npm ci
- npm run build
artifacts:
- dist/**
- step:
name: Deploy
script:
- ls dist/ # Files from previous step are here
- ./deploy.sh
For test reports, Bitbucket has built-in support for JUnit XML format. Configure your test runner to output JUnit XML, and Bitbucket will display test results directly in the pipeline UI:
pipelines:
default:
- step:
name: Test
caches:
- node
script:
- npm ci
- npm test -- --reporter mocha-junit-reporter
artifacts:
- test-results/**
Here is a quick configuration snippet for Mocha to produce JUnit output:
// .mocharc.js
module.exports = {
reporter: 'mocha-junit-reporter',
reporterOptions: {
mochaFile: './test-results/results.xml',
toConsole: true
}
};
And for Jest:
// jest.config.js
module.exports = {
reporters: [
'default',
['jest-junit', {
outputDirectory: './test-results',
outputName: 'results.xml'
}]
]
};
Deployment Environments and Variables
Bitbucket Pipelines supports three types of variables:
- Workspace variables — shared across all repositories in a workspace
- Repository variables — available to all pipelines in a repository
- Deployment variables — scoped to a specific deployment environment
Deployment environments are defined in your repository settings and referenced in your pipeline steps:
pipelines:
branches:
master:
- step:
name: Deploy to Production
deployment: production
script:
- echo "Deploying to $DEPLOY_HOST"
- echo "Using database $DATABASE_URL"
The deployment keyword does two things: it grants access to environment-specific variables, and it creates a deployment record in Bitbucket's deployment dashboard.
Variables can be marked as secured, which means they are masked in logs and encrypted at rest. Always mark credentials, API keys, and connection strings as secured.
Here is a pattern I use for environment-specific configuration in Node.js:
// config/environments.js
var configs = {
staging: {
apiUrl: process.env.API_URL || 'https://staging-api.example.com',
logLevel: 'debug',
enableCache: false
},
production: {
apiUrl: process.env.API_URL || 'https://api.example.com',
logLevel: 'warn',
enableCache: true
}
};
var env = process.env.NODE_ENV || 'staging';
module.exports = configs[env] || configs.staging;
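Consuming that module is then a one-line require wherever you need environment-specific values (an illustrative Express snippet):
// server.js
var express = require('express');
var config = require('./config/environments');

var app = express();

app.get('/', function (req, res) {
  // Values come from the environment-specific config above
  res.json({ api: config.apiUrl, cacheEnabled: config.enableCache });
});

app.listen(process.env.PORT || 8080, function () {
  console.log('Started with log level ' + config.logLevel);
});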
Pipes: Pre-Built Integrations
Pipes are Bitbucket's reusable integration building blocks. They are essentially pre-built Docker containers that handle common deployment tasks. Instead of writing 20 lines of AWS CLI commands, you use a pipe:
- step:
name: Deploy to S3
script:
- pipe: atlassian/aws-s3-deploy:1.1.0
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: "us-east-1"
S3_BUCKET: "my-frontend-bucket"
LOCAL_PATH: "dist"
ACL: "public-read"
Some pipes I use regularly in Node.js projects:
# Slack notifications
- pipe: atlassian/slack-notify:2.0.0
variables:
WEBHOOK_URL: $SLACK_WEBHOOK
MESSAGE: "Deployment to production complete"
# Docker build and push to ECR
- pipe: atlassian/aws-ecr-push-image:2.0.0
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: "us-east-1"
IMAGE_NAME: "my-node-app"
TAGS: "${BITBUCKET_COMMIT:0:7} latest"
# Deploy to ECS
- pipe: atlassian/aws-ecs-deploy:1.9.0
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: "us-east-1"
CLUSTER_NAME: "production-cluster"
SERVICE_NAME: "node-api-service"
TASK_DEFINITION: "task-definition.json"
You can browse the full pipe catalog at https://bitbucket.org/product/features/pipelines/integrations. There are over 100 pipes covering AWS, Azure, GCP, Heroku, Kubernetes, and many other platforms.
Parallel Steps and Stage Gates
Parallel steps run simultaneously to cut down your build time. Wrap steps in a parallel block:
pipelines:
default:
- parallel:
- step:
name: Unit Tests
caches:
- node
script:
- npm ci
- npm run test:unit
- step:
name: Lint
caches:
- node
script:
- npm ci
- npm run lint
- step:
name: Security Audit
caches:
- node
script:
- npm ci
- npm audit --production
- step:
name: Build
caches:
- node
script:
- npm ci
- npm run build
artifacts:
- dist/**
- step:
name: Deploy
trigger: manual
deployment: staging
script:
- ./deploy.sh
In this setup, unit tests, linting, and the security audit all run at the same time. The Build step only runs after all parallel steps succeed. The Deploy step requires a manual trigger — that is your stage gate.
On a typical Node.js project, parallel testing reduces a 6-minute pipeline to about 3 minutes. That adds up across a team.
Docker Build and Push Within Pipelines
Building Docker images in Pipelines requires enabling Docker-in-Docker via the options directive:
options:
docker: true
pipelines:
branches:
master:
- step:
name: Build and Push Docker Image
caches:
- node
- docker
script:
- npm ci
- npm test
- export IMAGE_TAG="${BITBUCKET_COMMIT:0:7}"
- docker build -t myregistry/my-node-app:$IMAGE_TAG .
- docker tag myregistry/my-node-app:$IMAGE_TAG myregistry/my-node-app:latest
- echo $DOCKER_HUB_PASSWORD | docker login -u $DOCKER_HUB_USERNAME --password-stdin
- docker push myregistry/my-node-app:$IMAGE_TAG
- docker push myregistry/my-node-app:latest
Here is a production-ready Dockerfile for a Node.js Express app:
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
FROM node:20-alpine
WORKDIR /app
RUN addgroup -g 1001 -S appgroup && \
adduser -S appuser -u 1001 -G appgroup
COPY --from=builder --chown=appuser:appgroup /app .
USER appuser
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s \
CMD wget --no-verbose --tries=1 --spider http://localhost:8080/health || exit 1
CMD ["node", "app.js"]
Caching Docker layers significantly speeds up builds. The built-in docker cache stores layers between builds, but for maximum speed, use multi-stage builds and order your COPY commands strategically — put package*.json before COPY . . so the npm ci layer is cached unless dependencies change.
Integration with Jira and Other Atlassian Tools
This is where Bitbucket Pipelines really shines compared to GitHub Actions or GitLab CI. The Atlassian integration is seamless.
Jira Issue Keys in Commits
When you include a Jira issue key in your commit messages or branch names, Bitbucket automatically links the build and deployment status to the Jira issue:
git checkout -b feature/PROJ-1234-add-user-auth
git commit -m "PROJ-1234: Add JWT authentication middleware"
With this convention, Jira shows:
- The build status directly on the issue
- Which environment the code has been deployed to
- A link back to the pipeline run
Jira Deployment Tracking
Add the Jira pipe to your deployment steps for rich deployment data in Jira:
- step:
name: Deploy and Update Jira
deployment: production
script:
- ./deploy.sh
- pipe: atlassian/jira-upload-deployment-info:0.1.0
variables:
CLOUD_ID: $JIRA_CLOUD_ID
CLIENT_ID: $JIRA_CLIENT_ID
CLIENT_SECRET: $JIRA_CLIENT_SECRET
ENVIRONMENT_TYPE: "production"
ENVIRONMENT: "Production"
Confluence Build Reports
You can push build metrics to Confluence pages for team dashboards:
- pipe: atlassian/confluence-publish:0.3.0
variables:
CLOUD_ID: $CONFLUENCE_CLOUD_ID
PAGE_ID: "123456"
CONTENT: "<h2>Latest Build</h2><p>Commit: ${BITBUCKET_COMMIT}</p><p>Status: Success</p>"
Self-Hosted Runners
Bitbucket Cloud supports self-hosted runners for situations where the standard build environment is not enough — GPU-heavy workloads, internal network access, or compliance requirements.
Setting up a self-hosted runner:
- Go to Repository Settings > Runners > Add Runner
- Choose your OS (Linux, macOS, Windows) and follow the installation steps
- Label your runner (e.g., self.hosted, gpu, internal)
- Reference the label in your pipeline:
pipelines:
default:
- step:
name: Build on Self-Hosted
runs-on:
- self.hosted
- linux
script:
- npm ci
- npm test
Self-hosted runners run inside Docker by default, but they have access to local resources you configure. This is useful for connecting to internal databases or services that are not exposed to the internet.
A word of caution: self-hosted runners require you to manage updates, security patches, and capacity. For most Node.js projects, the standard cloud runners are more than adequate.
Build Minutes Optimization and Cost Management
Bitbucket Pipelines bills based on build minutes. The free tier gives you 50 minutes per month — enough for a solo developer on a small project, but a team will burn through that in a day.
Here are concrete strategies to reduce build minutes:
Use caches aggressively. A cold npm ci on a medium-sized Node.js project takes 45-90 seconds. With the node cache, it drops to 5-10 seconds:
caches:
- node
Run steps in parallel. Build minutes are billed per step, so three 2-minute steps consume 6 billed minutes whether they run sequentially or in parallel. What parallelism buys you is wall-clock time: the pipeline finishes in roughly 2 minutes (the longest step) instead of 6, which means faster feedback for the team.
Use size: 2x only when needed. The double-size option provides 8 GB RAM and 2x CPU but counts as 2 build minutes per minute. Use it for memory-intensive steps like large test suites or Webpack builds:
- step:
name: Heavy Build
size: 2x
script:
- npm ci
- npm run build
Skip CI on trivial commits. Add [skip ci] to your commit message to prevent a pipeline from running:
git commit -m "Update README [skip ci]"
Limit pipeline triggers. Use branch patterns to avoid running pipelines on branches that do not need them:
pipelines:
branches:
'{master,develop,release/*}':
- step:
script:
- npm ci
- npm test
Monitor your usage. Go to Workspace settings > Plan details to see your build minute consumption. Set up alerts before you hit your limit.
Typical build times for a Node.js Express project with ~500 tests:
| Step | Without Cache | With Cache |
|---|---|---|
| npm ci | 62s | 8s |
| npm test | 45s | 45s |
| Docker build | 120s | 35s |
| Deploy (ECS) | 90s | 90s |
| Total | ~5.3 min | ~3.0 min |
That 44% reduction saves roughly 700 build minutes per month on a team doing 10 pushes per day.
Complete Working Example
Here is a full bitbucket-pipelines.yml for a Node.js Express application with testing, Docker containerization, and deployment to AWS ECS with environment promotions:
image: node:20
options:
docker: true
max-time: 20
definitions:
caches:
npm-cache: ~/.npm
docker-cache: /tmp/docker-cache
services:
mongo:
image: mongo:7
variables:
MONGO_INITDB_ROOT_USERNAME: testuser
MONGO_INITDB_ROOT_PASSWORD: testpass
redis:
image: redis:7-alpine
steps:
- step: &test-step
name: Run Tests
caches:
- node
services:
- mongo
- redis
script:
- npm ci
- npm run lint
- npm test
artifacts:
- test-results/**
- step: &build-docker
name: Build Docker Image
caches:
- node
- docker
script:
- export IMAGE_TAG="${BITBUCKET_COMMIT:0:7}"
- docker build -t my-node-app:$IMAGE_TAG -t my-node-app:latest .
- pipe: atlassian/aws-ecr-push-image:2.0.0
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
IMAGE_NAME: "my-node-app"
TAGS: "${BITBUCKET_COMMIT:0:7} latest"
- step: &deploy-ecs
name: Deploy to ECS
script:
- export IMAGE_TAG="${BITBUCKET_COMMIT:0:7}"
# envsubst is provided by the gettext package; install it first if your build image does not include it
- envsubst < task-definition.template.json > task-definition.json
- pipe: atlassian/aws-ecs-deploy:1.9.0
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
CLUSTER_NAME: $ECS_CLUSTER
SERVICE_NAME: $ECS_SERVICE
TASK_DEFINITION: "task-definition.json"
- pipe: atlassian/slack-notify:2.0.0
variables:
WEBHOOK_URL: $SLACK_WEBHOOK
MESSAGE: ":white_check_mark: Deployed ${BITBUCKET_COMMIT:0:7} to ${BITBUCKET_DEPLOYMENT_ENVIRONMENT}"
pipelines:
default:
- parallel:
- step:
name: Unit Tests
caches:
- node
script:
- npm ci
- npm run test:unit
- step:
name: Lint and Audit
caches:
- node
script:
- npm ci
- npm run lint
- npm audit --production --audit-level=high
pull-requests:
'**':
- parallel:
- step: *test-step
- step:
name: Security Scan
caches:
- node
script:
- npm ci
- npx snyk test --severity-threshold=high
branches:
develop:
- step: *test-step
- step: *build-docker
- step:
<<: *deploy-ecs
name: Deploy to Staging
deployment: staging
master:
- step: *test-step
- step: *build-docker
- step:
<<: *deploy-ecs
name: Deploy to Production
deployment: production
trigger: manual
tags:
'v*.*.*':
- step: *test-step
- step: *build-docker
- step:
name: Create Release
script:
- export VERSION=${BITBUCKET_TAG#v}
- echo "Creating release $VERSION"
- pipe: atlassian/slack-notify:2.0.0
variables:
WEBHOOK_URL: $SLACK_WEBHOOK
MESSAGE: ":package: Release $VERSION built and ready for deployment"
custom:
run-db-migration:
- step:
name: Run Database Migration
deployment: production
trigger: manual
script:
- npm ci
- node scripts/migrate.js up
rollback:
- step:
name: Rollback Production
deployment: production
trigger: manual
script:
- pipe: atlassian/aws-ecs-deploy:1.9.0
variables:
AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
CLUSTER_NAME: $ECS_CLUSTER
SERVICE_NAME: $ECS_SERVICE
TASK_DEFINITION: "task-definition.json"
FORCE_NEW_DEPLOYMENT: "true"
And the corresponding ECS task definition template:
{
"family": "my-node-app",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "512",
"memory": "1024",
"containerDefinitions": [
{
"name": "app",
"image": "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/my-node-app:${IMAGE_TAG}",
"portMappings": [
{
"containerPort": 8080,
"protocol": "tcp"
}
],
"environment": [
{ "name": "NODE_ENV", "value": "production" },
{ "name": "PORT", "value": "8080" }
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/my-node-app",
"awslogs-region": "${AWS_DEFAULT_REGION}",
"awslogs-stream-prefix": "ecs"
}
},
"healthCheck": {
"command": ["CMD-SHELL", "wget --no-verbose --tries=1 --spider http://localhost:8080/health || exit 1"],
"interval": 30,
"timeout": 5,
"retries": 3
}
}
]
}
A simple health check endpoint in your Express app ties the whole thing together:
// routes/health.js
var express = require('express');
var router = express.Router();
var os = require('os');
router.get('/health', function(req, res) {
var healthcheck = {
status: 'ok',
uptime: process.uptime(),
timestamp: Date.now(),
hostname: os.hostname(),
version: process.env.npm_package_version || 'unknown',
commit: process.env.BITBUCKET_COMMIT || 'local'
};
try {
res.status(200).json(healthcheck);
} catch (error) {
healthcheck.status = 'error';
healthcheck.message = error.message;
res.status(503).json(healthcheck);
}
});
module.exports = router;
Common Issues and Troubleshooting
1. Out of Memory During npm ci
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed
- JavaScript heap out of memory
npm ERR! code ENOMEM
This happens on projects with large dependency trees or heavy build tools like Webpack. Fix it by using a 2x step or increasing Node's heap size:
- step:
name: Build
size: 2x
script:
- export NODE_OPTIONS="--max-old-space-size=4096"
- npm ci
- npm run build
2. Docker Daemon Not Available
Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?
You forgot to enable Docker in the options block. Add this at the top of your file:
options:
docker: true
Note that enabling Docker globally adds overhead to every step, even those that do not use it. If only some steps need Docker, keep options.docker off and enable the docker service just in those steps.
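A minimal sketch of scoping Docker to a single step (the image name is illustrative):
pipelines:
  default:
    - step:
        name: Build Image
        services:
          - docker
        caches:
          - docker
        script:
          - docker build -t my-node-app .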
3. Cache Integrity Error
Cache "node": Extracting cache failed: checksum mismatch
Proceeding without cache.
This happens when node_modules gets corrupted or the Node.js version changed between cached builds. The pipeline will continue without cache (just slower). To permanently fix it, clear the cache from the repository's Pipelines settings page or change the cache key:
definitions:
caches:
node-v2: node_modules # Bump the suffix to invalidate
4. Permission Denied on Scripts
bash: ./deploy.sh: Permission denied
Script files need to be executable in your repository. Fix this locally and commit:
chmod +x deploy.sh
git add deploy.sh
git commit -m "Fix script permissions"
git push
Alternatively, run scripts through their interpreter directly:
script:
- bash deploy.sh
- node scripts/migrate.js
5. Pipeline YAML Parsing Error
bitbucket-pipelines.yml has errors:
Configuration error: 'image' is not valid under any of the given schemas
YAML parsing errors are notoriously vague. Common causes include using tabs instead of spaces, incorrect indentation depth, or unquoted strings with special characters. Use the Bitbucket Pipelines validator at Repository Settings > Pipelines > Settings to check your YAML before committing.
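For a quick local sanity check, any YAML parser will catch pure syntax errors (it will not validate the Pipelines schema); for example, with the js-yaml CLI:
npx js-yaml bitbucket-pipelines.yml > /dev/null && echo "YAML syntax is valid"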
6. Service Container Connection Refused
MongoServerError: connect ECONNREFUSED 127.0.0.1:27017
Service containers take a few seconds to start. Add a wait loop at the beginning of your script. The default node image does not ship mongosh, so probe the port with bash instead of the Mongo shell:
- step:
services:
- mongo
script:
- |
echo "Waiting for MongoDB to start..."
for i in $(seq 1 30); do
if (echo > /dev/tcp/localhost/27017) > /dev/null 2>&1; then
echo "MongoDB is ready"
break
fi
sleep 1
done
- npm ci
- npm test
Best Practices
- Pin your Docker image versions. Use node:20.11.0 instead of node:20. A surprise major update in the base image can break your pipeline silently. I have seen node:latest pull a new major version that broke Buffer APIs mid-sprint.
- Use YAML anchors for DRY configuration. The &anchor and *alias syntax lets you define a step once and reuse it across multiple branches. This eliminates drift between your staging and production pipelines.
- Always use npm ci over npm install. The ci command deletes node_modules before installing, respects the lockfile exactly, and is faster. It was designed for CI environments and should be your default in every pipeline.
- Make production deploys manual. Add trigger: manual to every production deployment step. Automated deploys to staging are fine, but production should always require a human decision. The Bitbucket UI makes manual triggers easy — one click and you get an audit trail.
- Keep secrets out of your YAML. Use repository or deployment variables for anything sensitive. Never hardcode API keys, passwords, or connection strings. Mark sensitive variables as "secured" in Bitbucket so they are masked in build logs.
- Test your pipeline changes on feature branches. Add a temporary branches entry matching your feature branch to test pipeline modifications without affecting master or develop. Remove it before merging.
- Set a max-time to prevent runaway builds. A forgotten while(true) loop or a hanging integration test can eat your entire monthly build minute allocation. Set max-time: 20 globally to kill builds after 20 minutes.
- Use artifacts sparingly. Artifacts are stored and transferred between steps, which adds time. Only pass what the next step actually needs — a dist/ folder, not the entire node_modules.
- Monitor build trends. Bitbucket provides build duration metrics in the Pipelines dashboard. If your pipeline is getting slower over time, check your dependency tree growth and test suite expansion. A pipeline that creeps from 3 minutes to 12 minutes over six months is a code smell.
- Version your pipes. Always pin pipe versions (e.g., atlassian/aws-s3-deploy:1.1.0) rather than using latest. This prevents unexpected breaking changes when Atlassian updates a pipe.