Integrating Automated Tests with Azure Pipelines
Run and report automated tests in Azure Pipelines with Jest, Mocha, code coverage publishing, and deployment gates
Automated testing in a CI/CD pipeline is not optional. If your tests are not running on every commit, gating every deployment, and producing visible reports that the entire team can inspect, you do not actually have a test suite — you have a suggestion. Azure Pipelines gives you first-class support for test execution, result publishing, coverage tracking, and quality gates, and this article walks through how to wire all of it up properly for Node.js projects using Jest and Mocha.
Prerequisites
Before diving in, make sure you have the following in place:
- An Azure DevOps organization and project with Pipelines enabled
- A Node.js project (v16 or later) with existing unit and/or integration tests
- Familiarity with YAML pipeline syntax (azure-pipelines.yml)
- Jest and/or Mocha installed as dev dependencies
- Basic understanding of JUnit XML test report format
- A service connection configured if deploying to external targets
Test Execution in Azure Pipelines
The foundation of pipeline testing is straightforward: install your dependencies, run your test command, and capture the output. But the details matter. Azure Pipelines does not magically understand your test results. You need to produce output in a format the platform can parse — JUnit XML — and then explicitly publish those results.
Here is a minimal pipeline that runs tests:
trigger:
- master
pool:
vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
inputs:
versionSpec: '20.x'
displayName: 'Install Node.js 20.x'
- script: npm ci
displayName: 'Install dependencies'
- script: npm test
displayName: 'Run tests'
This runs your tests but throws away the results. The pipeline passes or fails based on the exit code alone. No test counts, no individual test case visibility, no trend analysis. To get real value, you need to publish results.
Publishing Test Results (JUnit XML Format)
Azure DevOps understands several test result formats: JUnit, NUnit, VSTest, xUnit, and CTest. For Node.js, JUnit XML is the standard. The PublishTestResults task ingests XML files and surfaces them in the Tests tab of your pipeline run.
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/test-results.xml'
mergeTestResults: true
testRunTitle: 'Unit Tests'
condition: succeededOrFailed()
displayName: 'Publish test results'
Two critical details here. First, mergeTestResults: true combines results from multiple XML files into a single test run — essential when you split tests across files or run parallel jobs. Second, condition: succeededOrFailed() ensures results get published even when tests fail. Without this condition, a failing test step causes the pipeline to skip the publish step, and you lose visibility into exactly what failed.
Jest Configuration for Azure Pipelines
Jest does not produce JUnit XML by default. You need the jest-junit reporter. Install it:
npm install --save-dev jest-junit
Then configure Jest to use it. You can do this in jest.config.js:
module.exports = {
testEnvironment: 'node',
reporters: [
'default',
['jest-junit', {
outputDirectory: './test-results',
outputName: 'junit-results.xml',
classNameTemplate: '{classname}',
titleTemplate: '{title}',
ancestorSeparator: ' > ',
suiteNameTemplate: '{filepath}'
}]
],
collectCoverage: true,
coverageDirectory: './coverage',
coverageReporters: ['text', 'lcov', 'cobertura'],
testMatch: ['**/tests/unit/**/*.test.js'],
testTimeout: 30000
};
The classNameTemplate and titleTemplate options control how test names appear in the Azure DevOps test results UI. Using {filepath} for suiteNameTemplate gives you clear traceability back to source files. The cobertura coverage reporter is important — Azure DevOps can parse Cobertura XML for code coverage display.
You can also configure jest-junit through environment variables instead of reporter options, which keeps the committed config minimal and lets the pipeline decide where the XML lands:
- script: npm test
displayName: 'Run Jest tests'
env:
JEST_JUNIT_OUTPUT_DIR: './test-results'
JEST_JUNIT_OUTPUT_NAME: 'jest-results.xml'
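If you would rather skip XML output entirely for local runs, a small conditional in jest.config.js works too. Here is a sketch keyed off TF_BUILD, an environment variable Azure Pipelines sets on its agents:
// jest.config.js
// Add the JUnit reporter only when running on an Azure Pipelines agent.
// TF_BUILD is set to 'True' by the agent; locally it is undefined.
var reporters = ['default'];
if (process.env.TF_BUILD) {
  reporters.push(['jest-junit', {
    outputDirectory: './test-results',
    outputName: 'junit-results.xml'
  }]);
}

module.exports = {
  testEnvironment: 'node',
  reporters: reporters
};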
Mocha with Reporters
Mocha has a similar story. The built-in spec reporter is great for local development, but pipelines need mocha-junit-reporter:
npm install --save-dev mocha-junit-reporter
Configure it in .mocharc.yml:
spec: 'tests/integration/**/*.test.js'
timeout: 60000
reporter: mocha-junit-reporter
reporter-option:
- mochaFile=./test-results/mocha-results.xml
- rootSuiteTitle=Integration Tests
- toConsole=true
The toConsole=true option matters: without it the reporter writes only XML, and your pipeline logs show nothing useful about individual tests. Keep console output enabled.
For running Mocha with multiple reporters (console output plus JUnit), use mocha-multi-reporters:
npm install --save-dev mocha-multi-reporters
Create a reporter config file, mocha-reporters.json:
{
"reporterEnabled": "spec, mocha-junit-reporter",
"mochaJunitReporterReporterOptions": {
"mochaFile": "./test-results/mocha-results.xml",
"rootSuiteTitle": "Integration Tests"
}
}
Then in .mocharc.yml:
reporter: mocha-multi-reporters
reporter-option:
- configFile=mocha-reporters.json
Code Coverage Publishing
Azure DevOps has a dedicated Code Coverage tab that renders coverage reports inline. To use it, you publish Cobertura XML for the summary data and an HTML report for drill-down browsing.
Jest with the cobertura reporter produces the XML. You need the PublishCodeCoverageResults task:
- task: PublishCodeCoverageResults@2
inputs:
summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage/cobertura-coverage.xml'
displayName: 'Publish code coverage'
condition: succeededOrFailed()
Version 2 of this task needs only the summary file; it renders the coverage report itself, so you no longer point it at a separate HTML report directory the way version 1 did. The HTML report Jest's lcov reporter writes to coverage/lcov-report is still useful locally, and you can publish it as a pipeline artifact if you want to browse it after a run.
For Mocha, use nyc (Istanbul's CLI) to instrument coverage:
{
"scripts": {
"test:integration": "nyc --reporter=cobertura --reporter=html --report-dir=./coverage-integration mocha"
}
}
This gives you both the Cobertura XML and an HTML report in one shot.
Test Stage Configuration
For real projects, you want tests in a dedicated stage that runs after build and before deployment. This gives you clear separation of concerns and the ability to gate deployments on test results.
stages:
- stage: Build
displayName: 'Build'
jobs:
- job: BuildJob
pool:
vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
inputs:
versionSpec: '20.x'
- script: npm ci
displayName: 'Install dependencies'
- script: npm run build
displayName: 'Build application'
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(System.DefaultWorkingDirectory)'
artifactName: 'app'
displayName: 'Publish build artifact'
- stage: Test
displayName: 'Test'
dependsOn: Build
jobs:
- job: UnitTests
displayName: 'Unit Tests'
pool:
vmImage: 'ubuntu-latest'
steps:
- task: DownloadPipelineArtifact@2
inputs:
artifactName: 'app'
targetPath: '$(System.DefaultWorkingDirectory)'
- script: npm ci
displayName: 'Install dependencies'
- script: npm run test:unit
displayName: 'Run unit tests'
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/test-results/jest-results.xml'
testRunTitle: 'Unit Tests'
condition: succeededOrFailed()
- task: PublishCodeCoverageResults@2
inputs:
summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage/cobertura-coverage.xml'
condition: succeededOrFailed()
- stage: Deploy
displayName: 'Deploy'
dependsOn: Test
condition: succeeded()
jobs:
- deployment: DeployJob
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying application"
displayName: 'Deploy'
The combination of dependsOn and condition: succeeded() on the Deploy stage means it only runs if every job in the Test stage passes. This is your quality gate.
Parallel Test Execution
When your test suite grows beyond a few minutes, parallel execution becomes necessary. Azure Pipelines supports this through a job matrix or by splitting tests across multiple agents.
The matrix strategy runs the same job with different parameters:
- stage: Test
jobs:
- job: ParallelTests
strategy:
matrix:
UnitTests:
TEST_SUITE: 'unit'
TEST_COMMAND: 'npm run test:unit'
IntegrationTests:
TEST_SUITE: 'integration'
TEST_COMMAND: 'npm run test:integration'
E2ETests:
TEST_SUITE: 'e2e'
TEST_COMMAND: 'npm run test:e2e'
pool:
vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
inputs:
versionSpec: '20.x'
- script: npm ci
displayName: 'Install dependencies'
- script: $(TEST_COMMAND)
displayName: 'Run $(TEST_SUITE) tests'
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/test-results/*.xml'
testRunTitle: '$(TEST_SUITE) Tests'
mergeTestResults: true
condition: succeededOrFailed()
All three test suites run simultaneously on separate agents. mergeTestResults consolidates multiple XML files within each job into a single run, and the distinct testRunTitle values keep the suites identifiable in the pipeline's Tests tab.
For splitting a single large test suite across agents, use Jest's --shard flag (available since Jest 28):
- job: ShardedTests
strategy:
matrix:
Shard1:
SHARD: '1/3'
Shard2:
SHARD: '2/3'
Shard3:
SHARD: '3/3'
steps:
- script: npx jest --shard=$(SHARD) --ci
displayName: 'Run test shard $(SHARD)'
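Each shard still needs to publish its own results. A possible publish step for the job above (the shard-specific title is just one way to keep runs distinguishable):
- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/test-results/*.xml'
    testRunTitle: 'Unit Tests - Shard $(SHARD)'
    mergeTestResults: true
  condition: succeededOrFailed()
  displayName: 'Publish shard results'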
Flaky Test Handling
Flaky tests are the silent killer of CI/CD confidence. Azure DevOps has built-in flaky test detection that you should enable, but you also need defensive strategies in your pipeline.
First, enable automatic flaky test detection in your Azure DevOps project settings under Test Management > Flaky Test Detection. The system analyzes test run history and automatically marks tests that intermittently pass and fail.
Second, configure retries in your test runner. Jest has no retry option in its config file; instead, call jest.retryTimes() from a setup file registered through setupFilesAfterEach (this requires the default jest-circus runner). In jest.config.js:
// jest.config.js
module.exports = {
testEnvironment: 'node',
// Register a setup file that enables retries via jest.retryTimes()
setupFilesAfterEach: ['<rootDir>/jest.setup.js'],
reporters: [
'default',
['jest-junit', {
outputDirectory: './test-results',
outputName: 'junit-results.xml'
}]
]
};
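The referenced setup file then enables the retry (a minimal sketch; the filename just has to match the setupFilesAfterEach entry):
// jest.setup.js
// Retry each failed test once before marking it failed.
// jest.retryTimes() requires the default jest-circus runner.
jest.retryTimes(1);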
For Mocha, use the --retries flag:
- script: npx mocha --retries 1 --reporter mocha-junit-reporter
displayName: 'Run integration tests with retry'
Third, quarantine persistently flaky tests. Create a separate test tag or directory for quarantined tests and run them in a non-blocking job:
- job: QuarantinedTests
displayName: 'Quarantined (Flaky) Tests'
continueOnError: true
steps:
- script: npx jest --testPathPattern='quarantine' --ci
displayName: 'Run quarantined tests'
The continueOnError: true means these failures do not block deployment, but results still get published for visibility.
Test Artifacts and Attachments
Beyond XML results, you often need to capture screenshots, logs, database dumps, or other artifacts from test runs. Use PublishPipelineArtifact to make these available after the run:
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(System.DefaultWorkingDirectory)/test-results'
artifactName: 'test-artifacts-$(System.JobAttempt)'
condition: succeededOrFailed()
displayName: 'Publish test artifacts'
Attaching files to individual test results is possible through the Azure DevOps REST API, but the simpler pattern is to write failure diagnostics from your test teardown into the directory you already publish as an artifact:
var fs = require('fs');
var path = require('path');
function saveTestArtifact(testName, data) {
var artifactDir = path.join(process.cwd(), 'test-results', 'artifacts');
if (!fs.existsSync(artifactDir)) {
fs.mkdirSync(artifactDir, { recursive: true });
}
var filename = testName.replace(/[^a-z0-9]/gi, '-').toLowerCase() + '.log';
var filepath = path.join(artifactDir, filename);
fs.writeFileSync(filepath, data, 'utf8');
return filepath;
}
// In your Mocha test teardown (this.currentTest exposes the failed test's state)
afterEach(function () {
var testContext = this.currentTest || this;
if (testContext.state === 'failed') {
var logs = captureRelevantLogs(); // placeholder: collect whatever output helps debug the failure
saveTestArtifact(testContext.title, logs);
}
});
Conditional Test Stages
Not every commit needs every test. Running the full E2E suite on a documentation-only change wastes resources and time. Use path filters and conditions to run tests selectively:
stages:
- stage: UnitTest
displayName: 'Unit Tests'
condition: always()
jobs:
- job: RunUnitTests
steps:
- script: npm run test:unit
displayName: 'Run unit tests'
- stage: IntegrationTest
displayName: 'Integration Tests'
condition: |
and(
succeeded(),
or(
contains(variables['Build.SourceBranch'], 'refs/heads/master'),
contains(variables['Build.SourceBranch'], 'refs/heads/release'),
eq(variables['Build.Reason'], 'PullRequest')
)
)
jobs:
- job: RunIntegrationTests
steps:
- script: npm run test:integration
displayName: 'Run integration tests'
- stage: E2ETest
displayName: 'E2E Tests'
condition: |
and(
succeeded(),
or(
contains(variables['Build.SourceBranch'], 'refs/heads/master'),
contains(variables['Build.SourceBranch'], 'refs/heads/release')
)
)
jobs:
- job: RunE2ETests
steps:
- script: npm run test:e2e
displayName: 'Run E2E tests'
Unit tests run on every commit. Integration tests run on master, release branches, and pull requests. E2E tests only run on master and release. This dramatically reduces feedback time on feature branches.
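Path filters complement these branch conditions. Here is a sketch of a trigger block that skips CI for documentation-only changes (adjust the excluded paths to your repository layout):
trigger:
  branches:
    include:
      - master
      - release/*
  paths:
    exclude:
      - docs/*
      - '*.md'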
Integration Test Databases in Pipelines
Integration tests that need a real database are a common pain point. Azure Pipelines supports Docker containers as services alongside your test agent, which makes spinning up a test database trivial:
- job: IntegrationTests
displayName: 'Integration Tests'
pool:
vmImage: 'ubuntu-latest'
services:
postgres:
image: postgres:16
ports:
- 5432:5432
env:
POSTGRES_DB: testdb
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpassword
mongo:
image: mongo:7
ports:
- 27017:27017
steps:
- task: NodeTool@0
inputs:
versionSpec: '20.x'
- script: npm ci
displayName: 'Install dependencies'
- script: npm run db:migrate
displayName: 'Run database migrations'
env:
DATABASE_URL: 'postgresql://testuser:testpassword@localhost:5432/testdb'
- script: npm run test:integration
displayName: 'Run integration tests'
env:
DATABASE_URL: 'postgresql://testuser:testpassword@localhost:5432/testdb'
MONGO_URL: 'mongodb://localhost:27017/testdb'
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/test-results/mocha-results.xml'
testRunTitle: 'Integration Tests'
condition: succeededOrFailed()
The database containers start before your job steps execute and get torn down automatically after. Your tests get a clean, isolated database on every run with zero manual infrastructure management.
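One caveat: the job does not wait for the database to finish initializing, so an immediate migration step can hit a refused connection. A defensive wait step, assuming the Postgres client tools available on the ubuntu-latest image:
- script: |
    for i in $(seq 1 30); do
      pg_isready -h localhost -p 5432 -U testuser && exit 0
      sleep 2
    done
    echo "Postgres did not become ready in time"
    exit 1
  displayName: 'Wait for Postgres'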
For database seeding, add a setup script that runs before tests:
var pg = require('pg');
var pool = new pg.Pool({
connectionString: process.env.DATABASE_URL
});
function seedTestData() {
return pool.query(
"INSERT INTO users (name, email) VALUES ($1, $2), ($3, $4)",
['Test User', '[email protected]', 'Admin User', '[email protected]']
);
}
function cleanTestData() {
return pool.query("TRUNCATE users, orders, products CASCADE");
}
module.exports = {
seedTestData: seedTestData,
cleanTestData: cleanTestData,
pool: pool
};
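A minimal sketch of wiring those helpers into Mocha root hooks, assuming the module above is saved as tests/integration/db-helpers.js and the hooks file is loaded through the require option in your .mocharc file:
// tests/integration/hooks.js (hypothetical path)
var db = require('./db-helpers');

exports.mochaHooks = {
  // Reset and seed the database once before the suite runs
  beforeAll: function () {
    return db.cleanTestData().then(db.seedTestData);
  },
  // Close the pool so the Mocha process can exit cleanly
  afterAll: function () {
    return db.pool.end();
  }
};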
Test Result Trends and Analytics
Azure DevOps automatically tracks test result trends across pipeline runs. The Analytics tab shows pass rates over time, slowest tests, and failure patterns. To get the most value from this, keep your test run titles consistent — use the same testRunTitle for the same suite across runs so the system can correlate results.
You can also query the analytics programmatically using the Azure DevOps REST API or OData feed:
var https = require('https');
function getTestTrends(organization, project, pipelineName, days) {
// Percent-encode the query: spaces in the OData filter would otherwise
// make https.get reject the request path
var query = '$filter=Pipeline/PipelineName eq \'' + pipelineName + '\'' +
' and DateSK ge ' + getDateKey(days) +
'&$select=DateSK,ResultPassCount,ResultFailCount,ResultCount' +
'&$orderby=DateSK desc';
var url = 'https://analytics.dev.azure.com/' + organization + '/' + project +
'/_odata/v4.0-preview/TestResultsDaily?' + encodeURI(query);
return new Promise(function (resolve, reject) {
https.get(url, {
headers: {
'Authorization': 'Basic ' + Buffer.from(':' + process.env.AZURE_PAT).toString('base64')
}
}, function (res) {
var data = '';
res.on('data', function (chunk) { data += chunk; });
res.on('end', function () { resolve(JSON.parse(data)); });
}).on('error', reject);
});
}
function getDateKey(daysAgo) {
var date = new Date();
date.setDate(date.getDate() - daysAgo);
return date.toISOString().split('T')[0].replace(/-/g, '');
}
This data powers custom dashboards and alerts. If your pass rate drops below a threshold, trigger a notification before the problem compounds.
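As a sketch of the alerting side, the rows returned by getTestTrends() can feed a simple threshold check; the cutoff and the notification mechanism here are placeholders:
function checkPassRate(rows, threshold) {
  var pass = 0;
  var fail = 0;
  rows.forEach(function (row) {
    pass += row.ResultPassCount;
    fail += row.ResultFailCount;
  });
  var rate = (pass + fail) === 0 ? 100 : (pass / (pass + fail)) * 100;
  if (rate < threshold) {
    // Replace with a real notification (Teams/Slack webhook, email, etc.)
    console.warn('Pass rate ' + rate.toFixed(1) + '% dropped below ' + threshold + '%');
  }
  return rate;
}

// Usage:
// getTestTrends('my-org', 'my-project', 'MyPipeline', 7)
//   .then(function (data) { checkPassRate(data.value, 95); });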
Custom Test Report Dashboards
Azure DevOps dashboards support test-related widgets out of the box. Add the Test Results Trend widget to track pass/fail rates, and the Test Results Trend (Advanced) widget for filtering by test suite, pipeline, or branch.
For custom dashboards beyond what the built-in widgets offer, build a lightweight Node.js service that queries the analytics OData endpoint and renders the data:
var express = require('express');
var https = require('https');
var app = express();
app.get('/dashboard/test-health', function (req, res) {
var org = process.env.AZURE_ORG;
var project = process.env.AZURE_PROJECT;
var pat = process.env.AZURE_PAT;
var endpoints = [
'/TestResultsDaily?$filter=DateSK ge ' + getDateKey(30) +
'&$apply=aggregate(ResultPassCount with sum as TotalPass, ResultFailCount with sum as TotalFail)',
'/TestResultsDaily?$filter=DateSK ge ' + getDateKey(7) +
'&$select=DateSK,ResultPassCount,ResultFailCount&$orderby=DateSK desc'
];
Promise.all(endpoints.map(function (endpoint) {
return fetchOData(org, project, pat, endpoint);
})).then(function (results) {
var summary = results[0].value[0];
var passRate = (summary.TotalPass / (summary.TotalPass + summary.TotalFail) * 100).toFixed(1);
res.json({
passRate30d: passRate + '%',
totalTests30d: summary.TotalPass + summary.TotalFail,
dailyTrend: results[1].value
});
}).catch(function (err) {
res.status(500).json({ error: err.message });
});
});
function fetchOData(org, project, pat, endpoint) {
var base = 'https://analytics.dev.azure.com/' + org + '/' + project + '/_odata/v4.0-preview';
return new Promise(function (resolve, reject) {
https.get(encodeURI(base + endpoint), { // encode spaces in the OData query
headers: {
'Authorization': 'Basic ' + Buffer.from(':' + pat).toString('base64')
}
}, function (res) {
var data = '';
res.on('data', function (chunk) { data += chunk; });
res.on('end', function () { resolve(JSON.parse(data)); });
}).on('error', reject);
});
}
function getDateKey(daysAgo) {
var date = new Date();
date.setDate(date.getDate() - daysAgo);
return date.toISOString().split('T')[0].replace(/-/g, '');
}
app.listen(3000);
Complete Working Example
Here is a production-ready Azure Pipeline that runs Jest unit tests, Mocha integration tests, publishes coverage, and gates deployment:
trigger:
branches:
include:
- master
- release/*
pr:
branches:
include:
- master
pool:
vmImage: 'ubuntu-latest'
variables:
NODE_VERSION: '20.x'
npm_config_cache: $(Pipeline.Workspace)/.npm
stages:
# ─── BUILD STAGE ───────────────────────────────────────────────
- stage: Build
displayName: 'Build & Lint'
jobs:
- job: BuildJob
displayName: 'Build Application'
steps:
- task: NodeTool@0
inputs:
versionSpec: $(NODE_VERSION)
displayName: 'Use Node.js $(NODE_VERSION)'
- task: Cache@2
inputs:
key: 'npm | "$(Agent.OS)" | package-lock.json'
path: $(npm_config_cache)
displayName: 'Cache npm packages'
- script: npm ci
displayName: 'Install dependencies'
- script: npm run lint
displayName: 'Run linter'
- script: npm run build
displayName: 'Build application'
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(System.DefaultWorkingDirectory)'
artifactName: 'app-build'
displayName: 'Publish build artifact'
# ─── UNIT TEST STAGE ───────────────────────────────────────────
- stage: UnitTests
displayName: 'Unit Tests'
dependsOn: Build
jobs:
- job: JestUnitTests
displayName: 'Jest Unit Tests'
steps:
- task: NodeTool@0
inputs:
versionSpec: $(NODE_VERSION)
- task: Cache@2
inputs:
key: 'npm | "$(Agent.OS)" | package-lock.json'
path: $(npm_config_cache)
- script: npm ci
displayName: 'Install dependencies'
- script: npx jest --config jest.unit.config.js --ci --forceExit
displayName: 'Run Jest unit tests'
env:
JEST_JUNIT_OUTPUT_DIR: './test-results'
JEST_JUNIT_OUTPUT_NAME: 'jest-unit-results.xml'
NODE_ENV: 'test'
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/test-results/jest-unit-results.xml'
mergeTestResults: true
testRunTitle: 'Unit Tests - Jest'
condition: succeededOrFailed()
displayName: 'Publish unit test results'
- task: PublishCodeCoverageResults@2
inputs:
summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage/cobertura-coverage.xml'
condition: succeededOrFailed()
displayName: 'Publish unit test coverage'
# ─── INTEGRATION TEST STAGE ────────────────────────────────────
- stage: IntegrationTests
displayName: 'Integration Tests'
dependsOn: Build
jobs:
- job: MochaIntegrationTests
displayName: 'Mocha Integration Tests'
services:
postgres:
image: postgres:16
ports:
- 5432:5432
env:
POSTGRES_DB: testdb
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpassword
redis:
image: redis:7
ports:
- 6379:6379
steps:
- task: NodeTool@0
inputs:
versionSpec: $(NODE_VERSION)
- task: Cache@2
inputs:
key: 'npm | "$(Agent.OS)" | package-lock.json'
path: $(npm_config_cache)
- script: npm ci
displayName: 'Install dependencies'
- script: |
npm run db:migrate
npm run db:seed:test
displayName: 'Setup test database'
env:
DATABASE_URL: 'postgresql://testuser:testpassword@localhost:5432/testdb'
- script: |
npx nyc --reporter=cobertura --reporter=html --report-dir=./coverage-integration \
npx mocha --config .mocharc.integration.yml --exit
displayName: 'Run Mocha integration tests'
env:
DATABASE_URL: 'postgresql://testuser:testpassword@localhost:5432/testdb'
REDIS_URL: 'redis://localhost:6379'
NODE_ENV: 'test'
- task: PublishTestResults@2
inputs:
testResultsFormat: 'JUnit'
testResultsFiles: '**/test-results/mocha-integration-results.xml'
mergeTestResults: true
testRunTitle: 'Integration Tests - Mocha'
condition: succeededOrFailed()
displayName: 'Publish integration test results'
- task: PublishCodeCoverageResults@2
inputs:
summaryFileLocation: '$(System.DefaultWorkingDirectory)/coverage-integration/cobertura-coverage.xml'
condition: succeededOrFailed()
displayName: 'Publish integration test coverage'
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(System.DefaultWorkingDirectory)/test-results'
artifactName: 'integration-test-artifacts'
condition: succeededOrFailed()
displayName: 'Publish test artifacts'
# ─── DEPLOY STAGE (GATED) ─────────────────────────────────────
- stage: Deploy
displayName: 'Deploy to Production'
dependsOn:
- UnitTests
- IntegrationTests
condition: |
and(
succeeded('UnitTests'),
succeeded('IntegrationTests'),
eq(variables['Build.SourceBranch'], 'refs/heads/master')
)
jobs:
- deployment: DeployProduction
displayName: 'Deploy to Production'
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- task: DownloadPipelineArtifact@2
inputs:
artifactName: 'app-build'
targetPath: '$(System.DefaultWorkingDirectory)'
- script: |
echo "Deploying to production..."
npm run deploy:production
displayName: 'Deploy application'
The corresponding Jest configuration file for unit tests:
// jest.unit.config.js
module.exports = {
testEnvironment: 'node',
roots: ['<rootDir>/tests/unit'],
testMatch: ['**/*.test.js'],
reporters: [
'default',
['jest-junit', {
outputDirectory: './test-results',
outputName: 'jest-unit-results.xml',
classNameTemplate: '{classname}',
titleTemplate: '{title}',
ancestorSeparator: ' > ',
suiteNameTemplate: '{filepath}'
}]
],
collectCoverage: true,
coverageDirectory: './coverage',
coverageReporters: ['text', 'text-summary', 'lcov', 'cobertura'],
coverageThreshold: {
global: {
branches: 80,
functions: 80,
lines: 80,
statements: 80
}
},
testTimeout: 15000,
forceExit: true,
detectOpenHandles: true
};
And the Mocha configuration for integration tests:
# .mocharc.integration.yml
spec: 'tests/integration/**/*.test.js'
timeout: 60000
retries: 1
reporter: mocha-multi-reporters
reporter-option:
- configFile=mocha-reporters.json
exit: true
With the multi-reporter config:
{
"reporterEnabled": "spec, mocha-junit-reporter",
"mochaJunitReporterReporterOptions": {
"mochaFile": "./test-results/mocha-integration-results.xml",
"rootSuiteTitle": "Integration Tests"
}
}
This setup gives you parallel test execution (unit and integration run simultaneously), database-backed integration tests, code coverage for both suites, and a deployment gate that requires both suites to pass before shipping to production.
Common Issues and Troubleshooting
Test results not appearing in the Tests tab. The most common cause is the PublishTestResults task getting skipped because it runs after a failed test step. Add condition: succeededOrFailed() to the publish task. The second most common cause is the wrong path pattern in testResultsFiles — use **/ prefix to search recursively.
Coverage report shows 0% or is missing files. Jest's coverage only instruments files that are imported during test execution by default. If you have files that no test imports, they will not appear in coverage. Add collectCoverageFrom to your Jest config to explicitly include all source files:
module.exports = {
collectCoverageFrom: [
'src/**/*.js',
'!src/**/*.test.js',
'!src/**/index.js'
]
};
Docker service containers fail to start. Linux container images such as postgres:16 need a Linux host, so run these jobs on ubuntu-latest; macOS agents do not support service containers, and Windows agents can only run Windows container images. Also verify that the container image tag exists and is accessible. Use explicit version tags (e.g., postgres:16) instead of latest to avoid unexpected breaking changes.
Tests pass locally but fail in the pipeline. This is almost always an environment difference. Common culprits: missing environment variables (set them in the pipeline YAML, not just in your .env file), different Node.js versions (pin with NodeTool@0), timing-sensitive tests that fail under CI load (increase timeouts for integration tests), and file path case sensitivity (Linux agents are case-sensitive, your Mac or Windows machine is not).
Flaky tests poisoning your pipeline reliability. If a test fails intermittently, first check for shared state between tests — database rows that leak, global variables, or uncleared mocks. If you cannot fix the root cause immediately, use retryTimes in Jest or --retries in Mocha as a short-term bandage, quarantine the test, and file a ticket to investigate.
Pipeline runs slowly due to dependency installation. Use the Cache@2 task to cache node_modules or the npm cache directory between runs. Keying on package-lock.json ensures the cache invalidates when dependencies change. This alone can save two to three minutes per pipeline run on a typical Node.js project.
Best Practices
- Always publish test results with condition: succeededOrFailed(). If you only publish on success, you lose visibility into failures — the exact moment you need the most information.
- Pin your Node.js version in the pipeline. Use NodeTool@0 with an explicit versionSpec. Do not rely on whatever version is pre-installed on the agent image. Agent images update regularly and your tests should not break because of an unexpected Node.js upgrade.
- Keep test run titles consistent across runs. Azure DevOps tracks trends by test run title. If you change the title string, you lose historical correlation. Pick a naming convention and stick with it.
- Set coverage thresholds and enforce them in the pipeline. Jest's coverageThreshold configuration fails the test run if coverage drops below your target. This prevents coverage erosion over time without requiring manual review of coverage numbers.
- Use npm ci instead of npm install in pipelines. It is faster, it respects the lockfile exactly, and it avoids the accidental dependency drift that npm install can cause. This is non-negotiable for reproducible builds.
- Separate unit tests from integration tests. Run them in parallel stages. Unit tests should complete in under a minute and never touch external services. Integration tests can take longer and use real databases via service containers. Mixing them slows down your feedback loop.
- Cache npm dependencies between runs. The Cache@2 task with a key based on package-lock.json eliminates redundant downloads. On projects with heavy dependency trees, this saves significant time and reduces network-related flakiness.
- Gate deployments on test results, not just exit codes. Use stage dependencies with condition: succeeded() to ensure deployment only proceeds when all test stages pass. Combine this with environment approvals for an additional layer of safety.
- Quarantine flaky tests instead of ignoring failures. A flaky test running in a non-blocking job with continueOnError: true still produces visible results. Ignoring it entirely trains your team to distrust the pipeline. Quarantine, track, and fix.
- Store test artifacts for post-mortem analysis. Logs, screenshots, database state dumps — anything that helps diagnose a failure should be captured as a pipeline artifact. The few seconds of upload time pays for itself the first time you debug a mysterious CI-only failure.