API Testing Integration with Azure DevOps
Integrate API testing into Azure DevOps with supertest, Newman, contract validation, and automated result publishing
API testing is the backbone of any reliable service-oriented architecture, and running those tests manually is a recipe for broken deployments. Azure DevOps gives you a mature pipeline system that can orchestrate every layer of API testing, from fast unit tests against your Express handlers to full end-to-end contract validation against live environments. This article walks through how to wire up supertest, Newman, contract testing, and more into Azure Pipelines so your APIs never ship broken.
Prerequisites
Before diving in, you should have the following in place:
- An Azure DevOps organization and project with Pipelines enabled
- A Node.js API project (Express.js preferred for the examples here)
- Basic familiarity with YAML pipeline syntax
- Node.js v18+ installed locally for development
- npm or yarn for package management
- A Postman account if you plan to use Newman collections
API Testing Approaches
Not all API tests are created equal. You need a layered strategy that catches different classes of bugs at different speeds. Here is how I think about the testing pyramid for APIs.
Unit Tests
Unit tests exercise individual route handlers or middleware in isolation. They are fast, cheap, and should make up the bulk of your test suite. With supertest, you can test Express route logic without spinning up a real server or hitting a real database.
Integration Tests
Integration tests verify that your API works correctly when connected to real dependencies like databases, caches, or third-party services. These are slower but catch wiring issues that unit tests miss.
Contract Tests
Contract tests validate that your API conforms to a published specification, typically an OpenAPI/Swagger document. They catch breaking changes before they reach consumers.
End-to-End Tests
E2E tests hit a deployed instance of your API with real HTTP requests. They are the slowest and most brittle, but they verify the full stack works as expected in a given environment.
Supertest for Express.js APIs
Supertest is my go-to library for testing Express APIs in Node.js. It wraps your Express app and makes HTTP assertions without needing a running server.
Install the dependencies. The examples use CommonJS require, so stay on chai 4.x (chai 5 is ESM-only):
npm install --save-dev supertest mocha chai@4
Here is a basic Express app to test against:
// app.js
var express = require("express");
var app = express();
app.use(express.json());
var items = [
{ id: 1, name: "Widget", price: 9.99, category: "tools" },
{ id: 2, name: "Gadget", price: 24.99, category: "gadgets" }
];
app.get("/api/items", function (req, res) {
var category = req.query.category;
if (category) {
var filtered = items.filter(function (item) {
return item.category === category;
});
return res.json(filtered);
}
res.json(items);
});
app.get("/api/items/:id", function (req, res) {
var id = parseInt(req.params.id);
var item = items.find(function (i) {
return i.id === id;
});
if (!item) {
return res.status(404).json({ error: "Item not found" });
}
res.json(item);
});
app.post("/api/items", function (req, res) {
var body = req.body;
if (!body.name || !body.price) {
return res.status(400).json({ error: "Name and price are required" });
}
var newItem = {
id: items.length + 1,
name: body.name,
price: body.price
};
items.push(newItem);
res.status(201).json(newItem);
});
module.exports = app;
Now write supertest tests with Mocha and Chai:
// test/api.test.js
var request = require("supertest");
var expect = require("chai").expect;
var app = require("../app");
describe("Items API", function () {
describe("GET /api/items", function () {
it("should return all items", function (done) {
request(app)
.get("/api/items")
.expect("Content-Type", /json/)
.expect(200)
.end(function (err, res) {
if (err) return done(err);
expect(res.body).to.be.an("array");
expect(res.body.length).to.be.at.least(1);
done();
});
});
it("should return items with correct shape", function (done) {
request(app)
.get("/api/items")
.expect(200)
.end(function (err, res) {
if (err) return done(err);
var item = res.body[0];
expect(item).to.have.property("id");
expect(item).to.have.property("name");
expect(item).to.have.property("price");
expect(item.price).to.be.a("number");
done();
});
});
});
describe("GET /api/items/:id", function () {
it("should return a single item by id", function (done) {
request(app)
.get("/api/items/1")
.expect(200)
.end(function (err, res) {
if (err) return done(err);
expect(res.body.id).to.equal(1);
expect(res.body.name).to.equal("Widget");
done();
});
});
it("should return 404 for missing item", function (done) {
request(app)
.get("/api/items/999")
.expect(404)
.end(function (err, res) {
if (err) return done(err);
expect(res.body.error).to.equal("Item not found");
done();
});
});
});
describe("POST /api/items", function () {
it("should create a new item", function (done) {
request(app)
.post("/api/items")
.send({ name: "Doohickey", price: 14.99 })
.expect(201)
.end(function (err, res) {
if (err) return done(err);
expect(res.body.name).to.equal("Doohickey");
expect(res.body).to.have.property("id");
done();
});
});
it("should reject items without required fields", function (done) {
request(app)
.post("/api/items")
.send({ name: "Incomplete" })
.expect(400)
.end(function (err, res) {
if (err) return done(err);
expect(res.body.error).to.include("required");
done();
});
});
});
});
Configure Mocha to output JUnit XML for Azure DevOps:
{
"scripts": {
"test": "mocha --reporter mocha-junit-reporter --reporter-options mochaFile=./test-results/unit-tests.xml test/**/*.test.js"
}
}
Install the JUnit reporter:
npm install --save-dev mocha-junit-reporter
Newman (Postman CLI) in Azure Pipelines
Newman runs Postman collections from the command line, which makes it a natural fit for CI/CD pipelines. If your team already uses Postman for manual API testing, you can reuse those collections in automation.
Export your Postman collection as a JSON file and commit it to your repository:
tests/
postman/
api-collection.json
staging-environment.json
production-environment.json
Install Newman globally or as a dev dependency:
npm install --save-dev newman newman-reporter-junitfull
Run Newman with JUnit output:
// scripts/run-newman.js
var newman = require("newman");
var path = require("path");
var environment = process.env.TEST_ENVIRONMENT || "staging";
var envFile = path.join(
__dirname,
"../tests/postman/" + environment + "-environment.json"
);
newman.run(
{
collection: path.join(__dirname, "../tests/postman/api-collection.json"),
environment: envFile,
reporters: ["cli", "junitfull"],
reporter: {
junitfull: {
export: "./test-results/newman-results.xml"
}
},
timeoutRequest: 10000,
delayRequest: 100
},
function (err, summary) {
if (err) {
console.error("Newman run failed:", err.message);
process.exit(1);
}
var failures = summary.run.failures.length;
console.log("Total requests:", summary.run.stats.requests.total);
console.log("Failures:", failures);
if (failures > 0) {
process.exit(1);
}
}
);
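A matching package.json script (the name test:newman is arbitrary) makes the runner easy to invoke locally and from a pipeline step, with TEST_ENVIRONMENT supplied through the step's env block:
{
  "scripts": {
    "test:newman": "node scripts/run-newman.js"
  }
}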
REST API Contract Testing
Contract testing ensures your API implementation matches its specification. I use a combination of OpenAPI specs and runtime validation for this.
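Both the contract tests below and the OpenAPI validation tests later in this article read an openapi.yaml from the project root. The real spec depends on your API; a minimal sketch covering the example items endpoints might look like the following. Schemas are inlined rather than referenced through components because this first test loads the raw YAML without dereferencing $ref pointers.
openapi: 3.0.3
info:
  title: Items API
  version: 1.0.0
paths:
  /api/items:
    get:
      responses:
        "200":
          description: List of items
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  required: [id, name, price]
                  properties:
                    id: { type: integer }
                    name: { type: string }
                    price: { type: number }
                    category: { type: string }
    post:
      responses:
        "201":
          description: Item created
          content:
            application/json:
              schema:
                type: object
                required: [id, name, price]
                properties:
                  id: { type: integer }
                  name: { type: string }
                  price: { type: number }
        "400":
          description: Validation error
          content:
            application/json:
              schema:
                type: object
                required: [error]
                properties:
                  error: { type: string }
  /api/items/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: integer }
      responses:
        "200":
          description: A single item
          content:
            application/json:
              schema:
                type: object
                required: [id, name, price]
                properties:
                  id: { type: integer }
                  name: { type: string }
                  price: { type: number }
                  category: { type: string }
        "404":
          description: Item not found
          content:
            application/json:
              schema:
                type: object
                required: [error]
                properties:
                  error: { type: string }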
Here is a contract test that validates responses against an OpenAPI schema:
// test/contract.test.js
var request = require("supertest");
var expect = require("chai").expect;
var fs = require("fs");
var yaml = require("js-yaml");
var Ajv = require("ajv");
var app = require("../app");
var ajv = new Ajv({ allErrors: true });
var spec = yaml.load(fs.readFileSync("./openapi.yaml", "utf8"));
function getSchema(path, method, statusCode) {
var pathSpec = spec.paths[path];
if (!pathSpec || !pathSpec[method]) return null;
var response = pathSpec[method].responses[String(statusCode)];
if (!response || !response.content) return null;
return response.content["application/json"].schema;
}
describe("API Contract Tests", function () {
it("GET /api/items should match OpenAPI schema", function (done) {
var schema = getSchema("/api/items", "get", 200);
var validate = ajv.compile(schema);
request(app)
.get("/api/items")
.expect(200)
.end(function (err, res) {
if (err) return done(err);
var valid = validate(res.body);
if (!valid) {
console.error("Schema errors:", validate.errors);
}
expect(valid).to.be.true;
done();
});
});
it("GET /api/items/:id 404 should match error schema", function (done) {
var schema = getSchema("/api/items/{id}", "get", 404);
var validate = ajv.compile(schema);
request(app)
.get("/api/items/999")
.expect(404)
.end(function (err, res) {
if (err) return done(err);
var valid = validate(res.body);
expect(valid).to.be.true;
done();
});
});
it("POST /api/items should match creation schema", function (done) {
var schema = getSchema("/api/items", "post", 201);
var validate = ajv.compile(schema);
request(app)
.post("/api/items")
.send({ name: "Contract Test Item", price: 5.0 })
.expect(201)
.end(function (err, res) {
if (err) return done(err);
var valid = validate(res.body);
expect(valid).to.be.true;
done();
});
});
});
Publishing API Test Results
Azure DevOps has a built-in test results viewer that aggregates results from JUnit XML files. The PublishTestResults task handles this. Here is the key pipeline snippet:
- task: PublishTestResults@2
displayName: "Publish Unit Test Results"
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/unit-tests.xml"
searchFolder: "$(System.DefaultWorkingDirectory)/test-results"
testRunTitle: "API Unit Tests"
mergeTestResults: true
condition: always()
- task: PublishTestResults@2
displayName: "Publish Newman Test Results"
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/newman-results.xml"
searchFolder: "$(System.DefaultWorkingDirectory)/test-results"
testRunTitle: "API Integration Tests (Newman)"
mergeTestResults: true
condition: always()
The condition: always() is critical. Without it, if a test step fails, the publish step gets skipped and you never see the results in Azure DevOps. You always want test results published regardless of pass or fail.
API Test Collections Management
Managing test collections across teams requires some discipline. Here is my recommended approach:
Store collections in your repository alongside the code they test. This keeps tests versioned with the API.
project-root/
src/
tests/
unit/
api.test.js
postman/
collections/
items-api.json
auth-api.json
admin-api.json
environments/
local.json
staging.json
production.json
globals.json
Write a helper script that runs all collections in sequence:
// scripts/run-all-collections.js
var newman = require("newman");
var fs = require("fs");
var path = require("path");
var collectionsDir = path.join(__dirname, "../tests/postman/collections");
var envFile = path.join(
__dirname,
"../tests/postman/environments/" + (process.env.TEST_ENV || "staging") + ".json"
);
var collections = fs.readdirSync(collectionsDir).filter(function (f) {
return f.endsWith(".json");
});
var results = [];
function runNext(index) {
if (index >= collections.length) {
var totalFailures = results.reduce(function (sum, r) {
return sum + r.failures;
}, 0);
console.log("\n--- Summary ---");
results.forEach(function (r) {
console.log(r.name + ": " + r.failures + " failures");
});
process.exit(totalFailures > 0 ? 1 : 0);
return;
}
var collectionFile = path.join(collectionsDir, collections[index]);
var collectionName = collections[index].replace(".json", "");
console.log("\nRunning collection: " + collectionName);
newman.run(
{
collection: collectionFile,
environment: envFile,
reporters: ["cli", "junitfull"],
reporter: {
junitfull: {
export: "./test-results/newman-" + collectionName + ".xml"
}
}
},
function (err, summary) {
var failures = err ? 1 : summary.run.failures.length;
results.push({ name: collectionName, failures: failures });
runNext(index + 1);
}
);
}
runNext(0);
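A single pipeline step can then drive every collection; the variable names mirror the earlier examples and are otherwise assumptions:
- script: |
    mkdir -p test-results
    TEST_ENV=$(TEST_ENVIRONMENT) node scripts/run-all-collections.js
  displayName: "Run All Postman Collections"
The per-collection XML files it writes match the newman-*.xml pattern used by the PublishTestResults step in the complete example later on.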
Environment-Specific API Testing
Different environments need different configurations. Base URLs change, auth tokens differ, and test data varies. Handle this with environment files and pipeline variables.
// config/test-config.js
var configs = {
local: {
baseUrl: "http://localhost:3000",
apiKey: process.env.LOCAL_API_KEY || "dev-key-123",
timeout: 5000
},
staging: {
baseUrl: "https://api-staging.example.com",
apiKey: process.env.STAGING_API_KEY,
timeout: 10000
},
production: {
baseUrl: "https://api.example.com",
apiKey: process.env.PRODUCTION_API_KEY,
timeout: 15000
}
};
var env = process.env.TEST_ENVIRONMENT || "local";
var config = configs[env];
if (!config) {
console.error("Unknown test environment: " + env);
process.exit(1);
}
module.exports = config;
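A test file can then consume the module directly. This is a rough sketch, and the x-api-key header name is an assumption about how your API accepts the key:
// test/smoke.test.js
var request = require("supertest");
var config = require("../config/test-config");

describe("Smoke tests against " + config.baseUrl, function () {
  this.timeout(config.timeout);

  it("should respond on /api/items", function (done) {
    request(config.baseUrl)
      .get("/api/items")
      .set("x-api-key", config.apiKey) // assumed auth header
      .expect(200, done);
  });
});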
Use pipeline variables to inject secrets:
variables:
- group: api-test-secrets
- name: TEST_ENVIRONMENT
value: "staging"
steps:
- script: |
npm test
env:
STAGING_API_KEY: $(StagingApiKey)
TEST_ENVIRONMENT: $(TEST_ENVIRONMENT)
Performance API Testing with k6
k6 is a modern load testing tool that fits naturally into CI/CD pipelines. It uses JavaScript for test scripts and produces structured output.
Install k6 and write a basic load test:
// tests/performance/load-test.js
import http from "k6/http";
import { check, sleep } from "k6";
export var options = {
stages: [
{ duration: "30s", target: 20 },
{ duration: "1m", target: 50 },
{ duration: "30s", target: 0 }
],
thresholds: {
http_req_duration: ["p(95)<500"],
http_req_failed: ["rate<0.01"]
}
};
export default function () {
var baseUrl = __ENV.BASE_URL || "http://localhost:3000";
var listResponse = http.get(baseUrl + "/api/items");
check(listResponse, {
"list status is 200": function (r) { return r.status === 200; },
"list response time < 200ms": function (r) { return r.timings.duration < 200; }
});
sleep(1);
var createPayload = JSON.stringify({
name: "Load Test Item",
price: 9.99
});
var params = { headers: { "Content-Type": "application/json" } };
var createResponse = http.post(baseUrl + "/api/items", createPayload, params);
check(createResponse, {
"create status is 201": function (r) { return r.status === 201; }
});
sleep(1);
}
Add k6 to your pipeline:
- script: |
curl -L https://github.com/grafana/k6/releases/download/v0.47.0/k6-v0.47.0-linux-amd64.tar.gz | tar xz
./k6-v0.47.0-linux-amd64/k6 run tests/performance/load-test.js --out json=test-results/k6-results.json --env BASE_URL=$(STAGING_URL)
displayName: "Run k6 Performance Tests"
continueOnError: true
API Mocking Strategies
When testing APIs that depend on third-party services, mocking is essential. Nock is the standard choice for HTTP mocking in Node.js.
// test/mocking-example.test.js
var nock = require("nock");
var request = require("supertest");
var expect = require("chai").expect;
var app = require("../app");
describe("API with External Dependencies", function () {
beforeEach(function () {
nock("https://payment-gateway.example.com")
.post("/v1/charges")
.reply(200, {
id: "ch_mock_123",
status: "succeeded",
amount: 999
});
nock("https://inventory-service.example.com")
.get("/api/stock/1")
.reply(200, {
itemId: 1,
quantity: 42,
warehouse: "us-west-2"
});
});
afterEach(function () {
nock.cleanAll();
});
it("should process an order with mocked payment", function (done) {
request(app)
.post("/api/orders")
.send({ itemId: 1, quantity: 1, paymentMethod: "card_test" })
.expect(201)
.end(function (err, res) {
if (err) return done(err);
expect(res.body.status).to.equal("confirmed");
expect(res.body.chargeId).to.equal("ch_mock_123");
done();
});
});
it("should handle payment gateway failures gracefully", function (done) {
nock.cleanAll();
nock("https://payment-gateway.example.com")
.post("/v1/charges")
.reply(503, { error: "Service unavailable" });
nock("https://inventory-service.example.com")
.get("/api/stock/1")
.reply(200, { itemId: 1, quantity: 42 });
request(app)
.post("/api/orders")
.send({ itemId: 1, quantity: 1, paymentMethod: "card_test" })
.expect(502)
.end(function (err, res) {
if (err) return done(err);
expect(res.body.error).to.include("payment");
done();
});
});
});
OpenAPI/Swagger Validation Testing
Validating your API responses against an OpenAPI spec catches drift between documentation and implementation. The example below uses @apidevtools/swagger-parser to dereference the spec and Ajv (with ajv-formats) to validate response bodies against it.
// test/openapi-validation.test.js
var SwaggerParser = require("@apidevtools/swagger-parser");
var request = require("supertest");
var expect = require("chai").expect;
var Ajv = require("ajv");
var addFormats = require("ajv-formats");
var app = require("../app");
var ajv = new Ajv({ allErrors: true, strict: false });
addFormats(ajv);
var apiSpec;
before(function (done) {
SwaggerParser.dereference("./openapi.yaml")
.then(function (spec) {
apiSpec = spec;
done();
})
.catch(done);
});
function validateResponse(path, method, statusCode, body) {
var pathSpec = apiSpec.paths[path];
if (!pathSpec) throw new Error("Path not found in spec: " + path);
var opSpec = pathSpec[method];
if (!opSpec) throw new Error("Method not found: " + method + " " + path);
var resSpec = opSpec.responses[String(statusCode)];
if (!resSpec) throw new Error("Status " + statusCode + " not in spec for " + method + " " + path);
var schema = resSpec.content["application/json"].schema;
var validate = ajv.compile(schema);
var valid = validate(body);
if (!valid) {
var errors = validate.errors.map(function (e) {
return e.instancePath + " " + e.message;
});
throw new Error("Schema validation failed:\n" + errors.join("\n"));
}
return true;
}
describe("OpenAPI Spec Compliance", function () {
it("GET /api/items matches spec", function (done) {
request(app)
.get("/api/items")
.expect(200)
.end(function (err, res) {
if (err) return done(err);
expect(function () {
validateResponse("/api/items", "get", 200, res.body);
}).to.not.throw();
done();
});
});
it("POST /api/items 400 matches error spec", function (done) {
request(app)
.post("/api/items")
.send({})
.expect(400)
.end(function (err, res) {
if (err) return done(err);
expect(function () {
validateResponse("/api/items", "post", 400, res.body);
}).to.not.throw();
done();
});
});
});
API Security Testing Basics
Security testing should be part of every API pipeline. Start with these automated checks.
Header validation ensures your API sets the right security headers:
// test/security.test.js
var request = require("supertest");
var expect = require("chai").expect;
var app = require("../app");
describe("API Security Headers", function () {
it("should not expose server technology", function (done) {
request(app)
.get("/api/items")
.end(function (err, res) {
if (err) return done(err);
expect(res.headers["x-powered-by"]).to.be.undefined;
done();
});
});
it("should set security headers", function (done) {
request(app)
.get("/api/items")
.end(function (err, res) {
if (err) return done(err);
expect(res.headers["x-content-type-options"]).to.equal("nosniff");
expect(res.headers["x-frame-options"]).to.equal("DENY");
done();
});
});
it("should reject oversized payloads", function (done) {
var largePayload = { name: "x".repeat(1000000), price: 1 };
request(app)
.post("/api/items")
.send(largePayload)
.expect(413)
.end(function (err, res) {
done(err);
});
});
it("should require authentication on protected routes", function (done) {
request(app)
.get("/api/admin/users")
.expect(401)
.end(function (err, res) {
if (err) return done(err);
expect(res.body.error).to.include("unauthorized");
done();
});
});
});
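Note that these assertions only pass if the app actually registers hardening middleware and an authenticated admin route; the bare example app from earlier does not. A sketch of what the tests assume, using helmet (a real package) and a deliberately simplistic hypothetical auth guard:
// app.js additions assumed by the security tests
var helmet = require("helmet");

app.disable("x-powered-by"); // Express sets X-Powered-By by default
app.use(helmet({ frameguard: { action: "deny" } })); // adds nosniff and X-Frame-Options: DENY
app.use(express.json({ limit: "100kb" })); // explicit body limit; oversized payloads get 413

app.use("/api/admin", function (req, res, next) {
  // hypothetical guard; a real app would verify a JWT or session here
  if (!req.headers.authorization) {
    return res.status(401).json({ error: "unauthorized" });
  }
  next();
});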
For deeper security scanning, integrate OWASP ZAP into your pipeline as a separate stage:
- stage: SecurityScan
jobs:
- job: ZAPScan
pool:
vmImage: "ubuntu-latest"
steps:
- script: |
docker run --rm -v $(pwd)/test-results:/zap/wrk \
ghcr.io/zaproxy/zaproxy:stable zap-api-scan.py \
-t $(STAGING_URL)/openapi.json \
-f openapi \
-r zap-report.html \
-w zap-report.md
displayName: "OWASP ZAP API Scan"
continueOnError: true
- publish: $(System.DefaultWorkingDirectory)/test-results/zap-report.html
artifact: zap-security-report
Building a Custom API Test Framework with Node.js
When off-the-shelf tools do not fit your needs, building a lightweight custom framework gives you full control. Here is a reusable test runner that handles authentication, environment switching, and structured reporting.
// lib/api-test-runner.js
var http = require("http");
var https = require("https");
var url = require("url");
var fs = require("fs");
function ApiTestRunner(options) {
this.baseUrl = options.baseUrl;
this.headers = options.headers || {};
this.results = [];
this.timeout = options.timeout || 10000;
}
ApiTestRunner.prototype.request = function (method, path, body, callback) {
var parsed = url.parse(this.baseUrl + path);
var client = parsed.protocol === "https:" ? https : http;
var self = this;
var opts = {
hostname: parsed.hostname,
port: parsed.port,
path: parsed.path,
method: method,
headers: Object.assign({}, self.headers, {
"Content-Type": "application/json"
}),
timeout: self.timeout
};
var startTime = Date.now();
var req = client.request(opts, function (res) {
var data = "";
res.on("data", function (chunk) { data += chunk; });
res.on("end", function () {
var duration = Date.now() - startTime;
var parsedBody;
try {
parsedBody = JSON.parse(data);
} catch (e) {
parsedBody = data;
}
callback(null, {
status: res.statusCode,
headers: res.headers,
body: parsedBody,
duration: duration
});
});
});
req.on("error", function (err) { callback(err); });
req.on("timeout", function () {
req.destroy();
callback(new Error("Request timed out after " + self.timeout + "ms"));
});
if (body) {
req.write(JSON.stringify(body));
}
req.end();
};
ApiTestRunner.prototype.test = function (name, method, path, options, assertions) {
var self = this;
var body = options.body || null;
var startTime = Date.now();
self.request(method, path, body, function (err, response) {
var result = {
name: name,
method: method,
path: path,
duration: Date.now() - startTime,
passed: false,
error: null
};
if (err) {
result.error = err.message;
self.results.push(result);
return;
}
try {
assertions(response);
result.passed = true;
} catch (e) {
result.error = e.message;
}
self.results.push(result);
});
};
ApiTestRunner.prototype.generateJUnitXml = function () {
var totalTests = this.results.length;
var failures = this.results.filter(function (r) { return !r.passed; }).length;
var totalTime = this.results.reduce(function (sum, r) { return sum + r.duration; }, 0) / 1000;
var xml = '<?xml version="1.0" encoding="UTF-8"?>\n';
xml += '<testsuites tests="' + totalTests + '" failures="' + failures + '" time="' + totalTime + '">\n';
xml += ' <testsuite name="API Tests" tests="' + totalTests + '" failures="' + failures + '">\n';
this.results.forEach(function (r) {
xml += ' <testcase name="' + r.name + '" time="' + (r.duration / 1000) + '"';
if (r.passed) {
xml += " />\n";
} else {
xml += ">\n";
xml += ' <failure message="' + (r.error || "").replace(/"/g, "&quot;") + '" />\n';
xml += " </testcase>\n";
}
});
xml += " </testsuite>\n</testsuites>";
return xml;
};
ApiTestRunner.prototype.saveResults = function (filePath) {
var xml = this.generateJUnitXml();
fs.writeFileSync(filePath, xml);
console.log("Results saved to " + filePath);
};
module.exports = ApiTestRunner;
Use the custom runner:
// tests/custom-api-tests.js
var ApiTestRunner = require("../lib/api-test-runner");
var runner = new ApiTestRunner({
baseUrl: process.env.API_BASE_URL || "http://localhost:3000",
headers: {
Authorization: "Bearer " + (process.env.API_TOKEN || "test-token")
},
timeout: 5000
});
runner.test("List items returns 200", "GET", "/api/items", {}, function (res) {
if (res.status !== 200) throw new Error("Expected 200, got " + res.status);
if (!Array.isArray(res.body)) throw new Error("Expected array response");
});
runner.test("Create item returns 201", "POST", "/api/items", {
body: { name: "Custom Test", price: 19.99 }
}, function (res) {
if (res.status !== 201) throw new Error("Expected 201, got " + res.status);
if (!res.body.id) throw new Error("Missing id in response");
});
runner.test("Invalid item returns 400", "POST", "/api/items", {
body: {}
}, function (res) {
if (res.status !== 400) throw new Error("Expected 400, got " + res.status);
});
// Wait for all async tests to complete, then save
setTimeout(function () {
runner.saveResults("./test-results/custom-tests.xml");
var failed = runner.results.filter(function (r) { return !r.passed; });
if (failed.length > 0) {
console.error(failed.length + " test(s) failed");
process.exit(1);
}
console.log("All " + runner.results.length + " tests passed");
}, 10000);
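A pipeline step can run the suite and let the existing PublishTestResults pattern pick up custom-tests.xml; the variable names here are assumptions:
- script: |
    mkdir -p test-results
    node tests/custom-api-tests.js
  displayName: "Run Custom API Tests"
  env:
    API_BASE_URL: $(STAGING_URL)
    API_TOKEN: $(ApiToken)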
Complete Working Example
Here is a full Azure Pipeline that orchestrates supertest unit tests, Newman collection tests, contract validation, and result publishing in a single pipeline.
# azure-pipelines.yml
trigger:
branches:
include:
- main
- develop
paths:
include:
- src/**
- tests/**
- openapi.yaml
pool:
vmImage: "ubuntu-latest"
variables:
- group: api-test-secrets
- name: NODE_VERSION
value: "18.x"
- name: TEST_ENVIRONMENT
value: "staging"
stages:
- stage: UnitTests
displayName: "Unit & Contract Tests"
jobs:
- job: RunUnitTests
displayName: "Supertest & Contract Tests"
steps:
- task: NodeTool@0
inputs:
versionSpec: $(NODE_VERSION)
displayName: "Install Node.js"
- script: npm ci
displayName: "Install Dependencies"
- script: mkdir -p test-results
displayName: "Create Results Directory"
- script: |
npx mocha \
--reporter mocha-junit-reporter \
--reporter-options mochaFile=./test-results/unit-tests.xml \
test/**/*.test.js
displayName: "Run Supertest Unit Tests"
env:
NODE_ENV: test
- script: |
npx mocha \
--reporter mocha-junit-reporter \
--reporter-options mochaFile=./test-results/contract-tests.xml \
test/contract.test.js test/openapi-validation.test.js
displayName: "Run Contract Validation Tests"
env:
NODE_ENV: test
- script: |
npx mocha \
--reporter mocha-junit-reporter \
--reporter-options mochaFile=./test-results/security-tests.xml \
test/security.test.js
displayName: "Run Security Tests"
env:
NODE_ENV: test
- task: PublishTestResults@2
displayName: "Publish Unit Test Results"
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/unit-tests.xml"
searchFolder: "$(System.DefaultWorkingDirectory)/test-results"
testRunTitle: "API Unit Tests - $(Build.BuildNumber)"
mergeTestResults: true
condition: always()
- task: PublishTestResults@2
displayName: "Publish Contract Test Results"
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/contract-tests.xml"
searchFolder: "$(System.DefaultWorkingDirectory)/test-results"
testRunTitle: "API Contract Tests - $(Build.BuildNumber)"
mergeTestResults: true
condition: always()
- task: PublishTestResults@2
displayName: "Publish Security Test Results"
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/security-tests.xml"
searchFolder: "$(System.DefaultWorkingDirectory)/test-results"
testRunTitle: "API Security Tests - $(Build.BuildNumber)"
mergeTestResults: true
condition: always()
- stage: IntegrationTests
displayName: "Newman Integration Tests"
dependsOn: UnitTests
condition: succeeded()
jobs:
- job: RunNewmanTests
displayName: "Newman Collection Tests"
steps:
- task: NodeTool@0
inputs:
versionSpec: $(NODE_VERSION)
displayName: "Install Node.js"
- script: |
npm ci
npm install -g newman newman-reporter-junitfull
displayName: "Install Dependencies"
- script: mkdir -p test-results
displayName: "Create Results Directory"
- script: |
newman run tests/postman/collections/items-api.json \
-e tests/postman/environments/$(TEST_ENVIRONMENT).json \
--reporters cli,junitfull \
--reporter-junitfull-export ./test-results/newman-items.xml \
--timeout-request 10000 \
--delay-request 100
displayName: "Run Items API Collection"
env:
API_KEY: $(StagingApiKey)
- script: |
newman run tests/postman/collections/auth-api.json \
-e tests/postman/environments/$(TEST_ENVIRONMENT).json \
--reporters cli,junitfull \
--reporter-junitfull-export ./test-results/newman-auth.xml \
--timeout-request 10000
displayName: "Run Auth API Collection"
env:
API_KEY: $(StagingApiKey)
- task: PublishTestResults@2
displayName: "Publish Newman Test Results"
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/newman-*.xml"
searchFolder: "$(System.DefaultWorkingDirectory)/test-results"
testRunTitle: "API Integration Tests (Newman) - $(Build.BuildNumber)"
mergeTestResults: true
condition: always()
- stage: PerformanceTests
displayName: "Performance Tests"
dependsOn: IntegrationTests
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
jobs:
- job: RunK6Tests
displayName: "k6 Load Tests"
steps:
- script: |
curl -L https://github.com/grafana/k6/releases/download/v0.47.0/k6-v0.47.0-linux-amd64.tar.gz | tar xz
sudo cp k6-v0.47.0-linux-amd64/k6 /usr/local/bin/
displayName: "Install k6"
- script: |
mkdir -p test-results
k6 run tests/performance/load-test.js \
--out json=test-results/k6-results.json \
--env BASE_URL=$(STAGING_URL) \
--summary-export=test-results/k6-summary.json
displayName: "Run k6 Load Tests"
continueOnError: true
- publish: $(System.DefaultWorkingDirectory)/test-results
artifact: performance-test-results
displayName: "Publish Performance Results"
condition: always()
This pipeline runs in three stages. Unit and contract tests run first since they are fast and catch most issues. Newman integration tests run next against the staging environment. Performance tests only run on the main branch to avoid unnecessary load on shared environments.
Common Issues and Troubleshooting
Test Results Not Appearing in Azure DevOps
The most common reason is that the PublishTestResults task gets skipped when a previous step fails. Always add condition: always() to your publish steps. Also verify that the searchFolder path matches where your test runner actually writes the XML files. A typo in the path means Azure DevOps silently finds zero results.
Newman Failing with Connection Refused
If Newman tests fail with ECONNREFUSED, the API server is not running or not reachable from the build agent. For integration tests against a deployed environment, verify the URL is correct and that the build agent's IP is whitelisted. For tests against a local server, you need to start the server before running Newman and use localhost with the correct port.
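For the local-server case, one approach is to start the API in the background and wait for the port before invoking Newman. This is a sketch: server.js is an assumed entry point that calls app.listen(3000), and wait-on is an extra dev tool, not part of Newman:
- script: |
    node server.js &
    npx wait-on tcp:3000 --timeout 30000
    newman run tests/postman/collections/items-api.json \
      -e tests/postman/environments/local.json
  displayName: "Start API Locally and Run Newman"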
Supertest Tests Hanging and Timing Out
This happens when your Express app opens persistent connections (database pools, WebSocket connections, or intervals) that prevent Node from exiting. Either close those connections in your test teardown, or use Mocha's --exit flag as a workaround. The proper fix is to export a close() function from your app that cleans up all connections.
// In your test setup
after(function (done) {
if (app.close) {
app.close(done);
} else {
done();
}
});
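On the app side, the close() hook that teardown expects might look like the following sketch; the db module and its pool are hypothetical stand-ins for whatever keeps the event loop alive in your app:
// app.js
var db = require("./db"); // hypothetical module that owns the connection pool

app.close = function (done) {
  db.pool.end(done); // release pooled connections so Node can exit cleanly
};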
JUnit XML Parsing Errors
Azure DevOps occasionally rejects JUnit XML files that have special characters in test names or error messages. Sanitize your test names to avoid angle brackets, ampersands, and quotes. If using a custom reporter, make sure you XML-encode all dynamic content in the output.
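If you generate JUnit XML yourself, as the custom framework above does, a small escaping helper keeps dynamic content safe. This is a sketch, not part of any reporter package:
// lib/xml-escape.js
function escapeXml(value) {
  return String(value)
    .replace(/&/g, "&amp;") // must run first so later entities are not double-escaped
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

module.exports = escapeXml;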
Newman Environment Variables Not Resolving
Postman environment files use {{variable}} syntax. If variables are not resolving, check that your environment JSON file has the variables defined with enabled: true. Also verify that pipeline variables are being passed correctly through the env block in your YAML step. Newman does not automatically inherit shell environment variables into Postman variables.
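One way to bridge the two explicitly is Newman's --env-var flag, which sets or overrides a Postman variable from the shell. The variable names below are illustrative:
- script: |
    newman run tests/postman/collections/items-api.json \
      -e tests/postman/environments/staging.json \
      --env-var "apiKey=$API_KEY"
  displayName: "Run Collection With Injected Variable"
  env:
    API_KEY: $(StagingApiKey)
Requests inside the collection can then reference {{apiKey}} as usual.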
Best Practices
Run fast tests first. Structure your pipeline so unit tests run before integration tests. Fail fast and save pipeline minutes.
Always publish test results, even on failure. Use condition: always() on every PublishTestResults task. Failed test results are the most important ones to see.
Version your test collections alongside your code. Postman collections exported as JSON should live in the same repository as the API they test. This keeps tests and code in sync.
Use separate pipeline stages for different test types. Unit tests, integration tests, and performance tests have different resource needs and failure modes. Separate stages let you control flow and parallelism.
Mock external dependencies in unit tests. Use nock or similar libraries to isolate your API from third-party services. Flaky third-party APIs should never break your build.
Validate against your OpenAPI spec automatically. Contract drift between your spec and implementation is inevitable without automation. Run schema validation on every commit.
Set meaningful timeouts. Both at the test level and the pipeline level. A 30-second timeout on a health check test is too generous. A 5-second timeout on a complex aggregation query might be too tight.
Gate deployments on test results. Configure your release pipeline to only promote to production if all test stages pass. This is the entire point of CI/CD.
Rotate test API keys and tokens regularly. Store them in Azure DevOps variable groups with secret protection. Never hardcode credentials in pipeline YAML or test files.
Run performance tests only on main or release branches. Load tests consume real resources and can affect shared environments. Only run them when you are actually preparing to deploy.