GitLab CI/CD Pipeline Configuration
A practical guide to GitLab CI/CD pipelines including stages, jobs, caching, artifacts, environments, Docker integration, and deployment automation.
GitLab CI/CD is built into every GitLab repository. No external service, no marketplace, no separate configuration — drop a .gitlab-ci.yml file in your repo and pipelines run automatically. The tight integration means pipelines can interact with merge requests, environments, container registries, and deployment tracking without plugins.
I have configured GitLab pipelines for projects ranging from simple libraries to complex microservice deployments. The pipeline configurations in this guide are production-tested patterns.
Prerequisites
- A GitLab repository (GitLab.com or self-hosted)
- Basic YAML knowledge
- Understanding of CI/CD concepts
- A Node.js project (for examples)
Pipeline Basics
# .gitlab-ci.yml
stages:
  - install
  - test
  - build
  - deploy

variables:
  NODE_VERSION: "20"
  npm_config_cache: "$CI_PROJECT_DIR/.npm"

install:
  stage: install
  image: node:${NODE_VERSION}
  script:
    - npm ci
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .npm

test:
  stage: test
  image: node:${NODE_VERSION}
  script:
    - npm test
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull

build:
  stage: build
  image: node:${NODE_VERSION}
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 week
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull
Stages and Job Flow
stages:
  - lint
  - test
  - build
  - deploy-staging
  - deploy-production

# Jobs in the same stage run in parallel.
# Jobs in the next stage wait for all jobs in the previous stage to pass.
lint:eslint:
  stage: lint
  script: npx eslint src/

lint:prettier:
  stage: lint
  script: npx prettier --check src/

test:unit:
  stage: test
  script: npx jest --ci

test:integration:
  stage: test
  script: npm run test:integration

build:
  stage: build
  script: npm run build
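When strict stage ordering slows the pipeline down, the `needs:` keyword lets a job start as soon as the specific jobs it depends on finish, instead of waiting for the whole previous stage. A minimal sketch using the jobs above:

```yaml
# build starts as soon as test:unit passes,
# without waiting for test:integration to finish
build:
  stage: build
  needs: ["test:unit"]
  script: npm run build
```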
Job Rules
deploy-staging:
  stage: deploy-staging
  script: ./deploy.sh staging
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: on_success
    - when: never

deploy-production:
  stage: deploy-production
  script: ./deploy.sh production
  rules:
    - if: $CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/
      when: manual  # requires a manual click in the UI
    - when: never

# Run on merge requests only
test:mr:
  stage: test
  script: npm test
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

# Skip when only documentation changes
test:
  stage: test
  script: npm test
  rules:
    - changes:
        - "**/*.md"
        - docs/**
      when: never
    - when: on_success
Caching
Cache Configuration
# Global cache default for all jobs
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
    - .npm

# Per-job cache override
test:
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull  # only read, never write

install:
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .npm
    policy: push  # only write, never read (fresh install)
Cache Key Strategies
# Per-branch cache
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/

# Lock-file-based key (shared across branches)
cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules/

# Per-commit key, falling back to broader caches on a miss
cache:
  key: "${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}"
  paths:
    - node_modules/
  fallback_keys:
    - ${CI_COMMIT_REF_SLUG}
    - main
Artifacts
build:
  stage: build
  script:
    - npm run build
    - npm run test -- --coverage
  artifacts:
    paths:
      - dist/
      - coverage/
    reports:
      junit: test-results.xml  # emitted by your test reporter (e.g. jest-junit)
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
    expire_in: 1 week
    when: always  # keep artifacts even if the job fails

# Use artifacts from a previous stage
deploy:
  stage: deploy
  dependencies:
    - build  # download only build's artifacts
  script:
    - ls dist/
    - ./deploy.sh
Coverage Reporting
test:
  stage: test
  script:
    - npm test -- --coverage --coverageReporters=text-summary --coverageReporters=cobertura
  coverage: '/Statements\s*:\s*(\d+\.?\d*)%/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
The coverage regex extracts coverage percentage from the job output and displays it in merge requests.
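For reference, the text-summary reporter prints a block like the following in the job log (numbers illustrative); the regex captures the statements percentage:

```
=============================== Coverage summary ===============================
Statements   : 85.71% ( 120/140 )
Branches     : 78.26% ( 36/46 )
Functions    : 90.00% ( 27/30 )
Lines        : 85.61% ( 119/139 )
================================================================================
```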
Docker Integration
Building Docker Images
build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    # --password-stdin keeps the token out of the process list
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    # build once, tag twice
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
Using Kaniko (No Docker-in-Docker)
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.9.0-debug
    entrypoint: [""]
  script:
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"
      --destination "${CI_REGISTRY_IMAGE}:latest"
      --cache=true
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
Environments
deploy-staging:
  stage: deploy-staging
  script:
    - npm run deploy -- --env staging
  environment:
    name: staging
    url: https://staging.myapp.com
    on_stop: stop-staging
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

stop-staging:
  stage: deploy-staging
  script:
    - npm run teardown -- --env staging
  environment:
    name: staging
    action: stop
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual

deploy-production:
  stage: deploy-production
  script:
    - npm run deploy -- --env production
  environment:
    name: production
    url: https://myapp.com
  rules:
    - if: $CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/
      when: manual
      allow_failure: false

# Dynamic environments for merge requests
deploy-review:
  stage: deploy-staging
  script:
    - npm run deploy -- --env review-$CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_MERGE_REQUEST_IID
    url: https://review-$CI_MERGE_REQUEST_IID.myapp.com
    on_stop: stop-review
    auto_stop_in: 1 week
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

stop-review:
  stage: deploy-staging
  script:
    - npm run teardown -- --env review-$CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_MERGE_REQUEST_IID
    action: stop
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual
Services (Sidecar Containers)
test:integration:
  stage: test
  image: node:20
  services:
    - name: postgres:15
      alias: db
      variables:
        POSTGRES_DB: test_db
        POSTGRES_USER: test_user
        POSTGRES_PASSWORD: test_pass
    - name: redis:7-alpine
      alias: cache
  variables:
    DATABASE_URL: "postgres://test_user:test_pass@db:5432/test_db"
    REDIS_URL: "redis://cache:6379"
  script:
    - npm ci
    - npm run test:integration
Templates and Includes
Reusing Configuration
# templates/node-test.yml (in a shared repo)
.node-test:
  image: node:20
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
  before_script:
    - npm ci

# .gitlab-ci.yml
include:
  - project: 'myorg/ci-templates'
    ref: main
    file: '/templates/node-test.yml'
  - local: '.gitlab/ci/deploy.yml'
  - template: 'Jobs/SAST.gitlab-ci.yml'

test:
  extends: .node-test
  stage: test
  script:
    - npm test
YAML Anchors
.default-cache: &default-cache
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/

.node-setup: &node-setup
  image: node:20
  before_script:
    - npm ci

test:
  # merge both anchors with one key; repeating "<<:" is a duplicate-key error
  <<: [*node-setup, *default-cache]
  stage: test
  script:
    - npm test

lint:
  <<: [*node-setup, *default-cache]
  stage: lint
  script:
    - npm run lint
Complete Working Example: Full Production Pipeline
# .gitlab-ci.yml
stages:
  - install
  - quality
  - test
  - build
  - deploy-staging
  - deploy-production

variables:
  NODE_VERSION: "20"
  npm_config_cache: "$CI_PROJECT_DIR/.npm"

# === Shared Configuration ===
.node-job:
  image: node:${NODE_VERSION}
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
      - .npm
    policy: pull

# === Install Stage ===
install:
  extends: .node-job
  stage: install
  script:
    - npm ci
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - node_modules/
      - .npm
    policy: pull-push

# === Quality Stage ===
lint:
  extends: .node-job
  stage: quality
  script:
    - npx eslint src/ --format compact
  allow_failure: false

prettier:
  extends: .node-job
  stage: quality
  script:
    - npx prettier --check "src/**/*.{js,json,css}"

audit:
  extends: .node-job
  stage: quality
  script:
    - npm audit --production --audit-level=high
  allow_failure: true

# === Test Stage ===
test:unit:
  extends: .node-job
  stage: test
  script:
    - npx jest --ci --coverage --coverageReporters=text-summary --coverageReporters=cobertura
  coverage: '/Statements\s*:\s*(\d+\.?\d*)%/'
  artifacts:
    reports:
      junit: junit.xml  # requires a JUnit reporter such as jest-junit
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
    when: always

test:integration:
  extends: .node-job
  stage: test
  services:
    - name: postgres:15
      alias: db
      variables:
        POSTGRES_DB: test
        POSTGRES_USER: test
        POSTGRES_PASSWORD: test
  variables:
    DATABASE_URL: "postgres://test:test@db:5432/test"
  script:
    - npm run test:integration

# === Build Stage ===
build:
  extends: .node-job
  stage: build
  script:
    - npm run build
    - echo "BUILD_VERSION=$(node -p "require('./package.json').version")" >> build.env
  artifacts:
    paths:
      - dist/
    reports:
      dotenv: build.env
    expire_in: 1 week
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_COMMIT_TAG

# === Deploy Staging ===
deploy:staging:
  stage: deploy-staging
  image: alpine:latest
  dependencies:
    - build
  script:
    - echo "Deploying v${BUILD_VERSION} to staging"
    - apk add --no-cache curl
    - |
      curl -X POST "${STAGING_DEPLOY_URL}" \
        -H "Authorization: Bearer ${STAGING_DEPLOY_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "{\"version\": \"${BUILD_VERSION}\"}"
  environment:
    name: staging
    url: https://staging.myapp.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

# === Deploy Production ===
deploy:production:
  stage: deploy-production
  image: alpine:latest
  dependencies:
    - build
  script:
    - echo "Deploying v${BUILD_VERSION} to production"
    - apk add --no-cache curl
    - |
      curl -X POST "${PRODUCTION_DEPLOY_URL}" \
        -H "Authorization: Bearer ${PRODUCTION_DEPLOY_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "{\"version\": \"${BUILD_VERSION}\"}"
  environment:
    name: production
    url: https://myapp.com
  rules:
    - if: $CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/
      when: manual
      allow_failure: false
Common Issues and Troubleshooting
Pipeline does not run on merge request
The job rules do not include merge request events.
Fix: Use `rules: - if: $CI_PIPELINE_SOURCE == "merge_request_event"` instead of the deprecated `only: merge_requests`. Watch for duplicate pipelines: a push and a merge_request_event can each create a pipeline for the same commit.
Cache is not shared between jobs
Each job starts fresh because cache keys differ between jobs or the cache policy is wrong.
Fix: Use the same cache key across jobs. Set `policy: pull-push` on the job that creates the cache and `policy: pull` on jobs that only read it. Use `key: files: [package-lock.json]` for keys that stay stable until dependencies change.
Docker-in-Docker builds fail with permission errors
The DinD service is not configured correctly or TLS is not set up.
Fix: Use `services: [docker:24-dind]` and set `DOCKER_TLS_CERTDIR: "/certs"`. Ensure the runner uses the Docker executor or has Docker available. Consider Kaniko as a simpler alternative that needs no privileged daemon.
Pipeline runs twice on merge request
Both the push and the merge_request_event trigger separate pipelines for the same commit.
Fix: Use `workflow: rules:` to control globally when pipelines are created, or add `rules:` to each job. A common pattern is to run pipelines only for merge requests and for pushes to the default branch.
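A minimal sketch of that global workflow pattern (assuming the default branch is main):

```yaml
workflow:
  rules:
    # Create merge request pipelines
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # Skip branch pipelines when an open MR already covers the branch
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
      when: never
    # Create branch pipelines for main, and tag pipelines
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_COMMIT_TAG
```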
Best Practices
- Use `extends` and templates to reduce duplication. Shared configuration via a `.node-job` template makes pipelines maintainable; changes propagate to all jobs.
- Cache `node_modules` with lock-file keys. `key: files: [package-lock.json]` invalidates the cache when dependencies change and reuses it when they do not.
- Use `rules:` instead of `only:`/`except:`. The `rules:` syntax is more powerful, and the `only:`/`except:` keywords are deprecated.
- Set `allow_failure` explicitly. Non-critical checks like `audit` should use `allow_failure: true`; critical checks should use `allow_failure: false`.
- Use environments for deployment tracking. GitLab tracks which commit is deployed to each environment and provides rollback buttons.
- Pin Docker image versions. `node:20` today is different from `node:20` next month. Use specific tags like `node:20.11-bullseye` for reproducible builds.
- Set artifact expiration. `expire_in: 1 week` prevents storage costs from growing unbounded. Keep production artifacts longer if needed for rollbacks.