Test Environments and Configuration Management
A practical guide to managing test environments and configurations in Azure DevOps, covering environment provisioning, configuration variables across stages, environment-specific test data, infrastructure as code for test environments, secrets management, and automated environment teardown.
Overview
Tests are only as reliable as the environments they run against. A test suite that passes against a development database but fails against staging because of schema differences, missing seed data, or misconfigured feature flags is not testing the application -- it is testing the environment setup. Azure DevOps provides Pipeline Environments, variable groups, and deployment gates for managing test infrastructure, but the real challenge is keeping environment configurations consistent, reproducible, and isolated between test runs.
I have debugged more "test failures" that turned out to be environment problems than actual application bugs. The database was not seeded, the API key was expired, the feature flag was set differently than production, the SSL certificate was self-signed and the HTTP client rejected it. Every one of these is a configuration management failure, not a test failure. This article covers how to build test environments that are provisioned automatically, configured consistently, and torn down after use so the next test run starts clean.
Prerequisites
- An Azure DevOps organization with Azure Pipelines
- Familiarity with YAML pipeline syntax and multi-stage deployments
- Basic understanding of infrastructure concepts (VMs, containers, databases)
- Azure subscription or DigitalOcean account for infrastructure provisioning
- Node.js 18+ for automation scripts
- Understanding of variable groups and service connections in Azure DevOps
Pipeline Environments in Azure DevOps
Azure DevOps Pipeline Environments are named targets for deployment stages. They provide approval gates, deployment history, and resource tracking. For testing, environments serve as guardrails that ensure tests run against the correct infrastructure.
Creating Environments
Navigate to Pipelines > Environments and create environments for each testing stage:
Environments:
- dev (automatic deployment, no approvals)
- staging (automatic deployment, post-deployment tests)
- uat (manual approval required before deployment)
- production (manual approval + sign-off required)
Using Environments in YAML
stages:
- stage: DeployDev
jobs:
- deployment: DeployToDev
environment: dev
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to dev environment"
- stage: TestDev
dependsOn: DeployDev
jobs:
- job: RunTests
steps:
- script: npm test
env:
BASE_URL: https://dev.example.com
DB_HOST: dev-db.example.com
- stage: DeployStaging
dependsOn: TestDev
jobs:
- deployment: DeployToStaging
environment: staging
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to staging"
- stage: TestStaging
  dependsOn: DeployStaging
jobs:
- job: RunStagingTests
steps:
- script: npm run test:integration
env:
BASE_URL: https://staging.example.com
DB_HOST: staging-db.example.com
Environment Approvals and Gates
Configure approvals on the staging and production environments:
- Open the environment in Pipelines > Environments
- Click the three-dot menu > Approvals and checks
- Add approvers for manual approval
- Add automated checks:
- Invoke Azure Function: Call a health check endpoint
- Query Azure Monitor alerts: Verify no active alerts
- Evaluate artifact: Check that required artifacts exist
For test environments, automated checks are more useful than manual approvals:
# The environment "staging" has a pre-deployment check:
# Invoke REST API: GET https://staging.example.com/health
# Expected response: 200 with body containing "healthy"
# Timeout: 5 minutes
# Evaluation interval: 30 seconds
This ensures tests do not start until the deployment is healthy.
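The same wait logic can also live in source control and run as an explicit pipeline step before the test job. A minimal sketch in Node that polls the /health endpoint described in the comment above until it reports healthy or times out (the scripts/wait-for-healthy.js path and HEALTH_URL variable are illustrative, not part of the pipeline above):
// scripts/wait-for-healthy.js -- illustrative sketch: polls a health endpoint
// until it returns 200 with a body containing "healthy", or gives up
var https = require("https");
var TARGET = process.env.HEALTH_URL || "https://staging.example.com/health";
var TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes, matching the check configuration above
var INTERVAL_MS = 30 * 1000;    // 30 seconds
var deadline = Date.now() + TIMEOUT_MS;
function checkOnce() {
  return new Promise(function (resolve) {
    https.get(TARGET, function (res) {
      var body = "";
      res.on("data", function (chunk) { body += chunk; });
      res.on("end", function () {
        resolve(res.statusCode === 200 && body.indexOf("healthy") !== -1);
      });
    }).on("error", function () { resolve(false); });
  });
}
function poll() {
  return checkOnce().then(function (healthy) {
    if (healthy) {
      console.log("Deployment is healthy: " + TARGET);
      return;
    }
    if (Date.now() > deadline) {
      throw new Error("Health check timed out: " + TARGET);
    }
    console.log("Not healthy yet, retrying in " + INTERVAL_MS / 1000 + "s...");
    return new Promise(function (resolve) { setTimeout(resolve, INTERVAL_MS); }).then(poll);
  });
}
poll().catch(function (err) {
  console.error(err.message);
  process.exit(1);
});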
Variable Groups for Environment Configuration
Variable groups store configuration values that vary between environments. Create one variable group per environment:
Setting Up Variable Groups
Variable Group: "dev-config"
BASE_URL = https://dev.example.com
DB_HOST = dev-db.internal
DB_NAME = app_dev
REDIS_URL = redis://dev-cache.internal:6379
FEATURE_NEW_CHECKOUT = true
LOG_LEVEL = debug
Variable Group: "staging-config"
BASE_URL = https://staging.example.com
DB_HOST = staging-db.internal
DB_NAME = app_staging
REDIS_URL = redis://staging-cache.internal:6379
FEATURE_NEW_CHECKOUT = true
LOG_LEVEL = info
Variable Group: "production-config"
BASE_URL = https://app.example.com
DB_HOST = prod-db.internal
DB_NAME = app_production
REDIS_URL = redis://prod-cache.internal:6379
FEATURE_NEW_CHECKOUT = false
LOG_LEVEL = warn
Linking Variable Groups to Stages
stages:
- stage: TestDev
variables:
- group: dev-config
jobs:
- job: RunTests
steps:
- script: npm test
env:
BASE_URL: $(BASE_URL)
DB_HOST: $(DB_HOST)
- stage: TestStaging
variables:
- group: staging-config
jobs:
- job: RunTests
steps:
- script: npm test
env:
BASE_URL: $(BASE_URL)
DB_HOST: $(DB_HOST)
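Because the same test code runs against every environment, it helps to validate the injected configuration up front rather than failing halfway through a run with a confusing connection error. A small sketch of a fail-fast loader, assuming the variable names used in the groups above (the config/test-env.js path is illustrative):
// config/test-env.js -- illustrative helper: validates required environment
// variables before any test runs, so a missing or unlinked variable group fails loudly
var REQUIRED = ["BASE_URL", "DB_HOST", "DB_NAME", "REDIS_URL"];
function loadTestEnv() {
  var missing = REQUIRED.filter(function (name) {
    return !process.env[name];
  });
  if (missing.length > 0) {
    throw new Error(
      "Missing required environment variables: " + missing.join(", ") +
      ". Check that the correct variable group is linked to this stage."
    );
  }
  return {
    baseUrl: process.env.BASE_URL,
    dbHost: process.env.DB_HOST,
    dbName: process.env.DB_NAME,
    redisUrl: process.env.REDIS_URL,
  };
}
module.exports = { loadTestEnv: loadTestEnv };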
Secrets in Variable Groups
Mark sensitive values as secrets (they become masked in logs):
Variable Group: "dev-secrets" (linked to Azure Key Vault)
DB_PASSWORD = ********
API_KEY = ********
JWT_SECRET = ********
SMTP_PASSWORD = ********
Link variable groups to Azure Key Vault for centralized secret management:
- Create a variable group
- Toggle "Link secrets from an Azure key vault"
- Select the Azure subscription and Key Vault
- Choose which secrets to include
- Secrets are fetched at pipeline runtime -- they are never stored in Azure DevOps
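If you prefer to keep the fetch in the pipeline definition rather than in a linked variable group, the built-in AzureKeyVault task can pull secrets at runtime and expose them as secret pipeline variables. A sketch, with placeholder service connection and vault names:
steps:
  # Fetches the listed secrets from Key Vault and exposes them as secret variables
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: "my-azure-service-connection"  # placeholder service connection
      KeyVaultName: "my-test-keyvault"                  # placeholder vault name
      SecretsFilter: "DB-PASSWORD,API-KEY"
      RunAsPreJob: true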
Environment-Specific Test Data
Tests need data, and that data must match the environment. A test that creates a user in the dev database should not affect staging. A test that expects specific seed data must have that data present.
Seed Data Management
Create environment-specific seed scripts:
// seeds/seed-test-data.js
var pg = require("pg");
var ENV = process.env.NODE_ENV || "development";
var DB_URL = process.env.DATABASE_URL;
var pool = new pg.Pool({ connectionString: DB_URL });
var seedData = {
  users: [
    { email: "[email protected]", role: "admin", name: "Test Admin" },
    { email: "[email protected]", role: "manager", name: "Test Manager" },
    { email: "[email protected]", role: "viewer", name: "Test Viewer" },
  ],
  projects: [
    { name: "Test Project Alpha", status: "active" },
    { name: "Test Project Beta", status: "archived" },
  ],
};
function seedDatabase() {
  console.log("Seeding " + ENV + " database...");
  // Check out a single client so BEGIN/COMMIT wrap every query in one transaction.
  // With pool.query, each call can run on a different connection, so the
  // transaction would not actually cover the deletes and inserts.
  return pool.connect().then(function (client) {
    return client.query("BEGIN")
      .then(function () {
        // Clean existing test data
        return client.query("DELETE FROM users WHERE email LIKE '%@test.com'");
      })
      .then(function () {
        return client.query("DELETE FROM projects WHERE name LIKE 'Test Project%'");
      })
      .then(function () {
        // Insert users
        var userPromises = seedData.users.map(function (user) {
          return client.query(
            "INSERT INTO users (email, role, name, password_hash) VALUES ($1, $2, $3, $4)",
            [user.email, user.role, user.name, "$2b$10$testhashedpassword"]
          );
        });
        return Promise.all(userPromises);
      })
      .then(function () {
        console.log(" Inserted " + seedData.users.length + " test users");
        // Insert projects
        var projectPromises = seedData.projects.map(function (project) {
          return client.query(
            "INSERT INTO projects (name, status, created_by) VALUES ($1, $2, '[email protected]')",
            [project.name, project.status]
          );
        });
        return Promise.all(projectPromises);
      })
      .then(function () {
        console.log(" Inserted " + seedData.projects.length + " test projects");
        return client.query("COMMIT");
      })
      .then(function () {
        console.log("Seed complete for " + ENV);
      })
      .catch(function (err) {
        return client.query("ROLLBACK").then(function () {
          throw err;
        });
      })
      .finally(function () {
        client.release();
      });
  }).finally(function () {
    return pool.end();
  });
}
seedDatabase().catch(function (err) {
console.error("Seed failed: " + err.message);
process.exit(1);
});
Pipeline Integration for Seed Data
- stage: TestDev
variables:
- group: dev-config
- group: dev-secrets
jobs:
- job: SeedAndTest
steps:
- script: npm ci
displayName: Install dependencies
- script: node seeds/seed-test-data.js
displayName: Seed test database
env:
NODE_ENV: development
DATABASE_URL: postgresql://$(DB_USER):$(DB_PASSWORD)@$(DB_HOST)/$(DB_NAME)
- script: npm test
displayName: Run tests
env:
BASE_URL: $(BASE_URL)
DATABASE_URL: postgresql://$(DB_USER):$(DB_PASSWORD)@$(DB_HOST)/$(DB_NAME)
- script: node seeds/cleanup-test-data.js
displayName: Clean up test data
condition: always()
env:
DATABASE_URL: postgresql://$(DB_USER):$(DB_PASSWORD)@$(DB_HOST)/$(DB_NAME)
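The cleanup step above calls seeds/cleanup-test-data.js, which has not been shown. A minimal sketch, assuming the same @test.com email domain and "Test Project%" naming convention as the seed script:
// seeds/cleanup-test-data.js -- minimal sketch; mirrors the naming conventions
// used in seed-test-data.js (delete projects first, then the users that created them)
var pg = require("pg");
var pool = new pg.Pool({ connectionString: process.env.DATABASE_URL });
function cleanupTestData() {
  return pool.query("DELETE FROM projects WHERE name LIKE 'Test Project%'")
    .then(function (result) {
      console.log("Removed " + result.rowCount + " test projects");
      return pool.query("DELETE FROM users WHERE email LIKE '%@test.com'");
    })
    .then(function (result) {
      console.log("Removed " + result.rowCount + " test users");
    })
    .finally(function () {
      return pool.end();
    });
}
cleanupTestData().catch(function (err) {
  console.error("Cleanup failed: " + err.message);
  process.exit(1);
});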
Infrastructure as Code for Test Environments
Docker Compose for Local Test Environments
# docker-compose.test.yml
version: "3.8"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
environment:
NODE_ENV: test
DATABASE_URL: postgresql://testuser:testpass@db:5432/testdb
REDIS_URL: redis://cache:6379
depends_on:
db:
condition: service_healthy
cache:
condition: service_started
db:
image: postgres:16
environment:
POSTGRES_DB: testdb
POSTGRES_USER: testuser
POSTGRES_PASSWORD: testpass
ports:
- "5432:5432"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U testuser -d testdb"]
interval: 5s
timeout: 5s
retries: 5
cache:
image: redis:7-alpine
ports:
- "6379:6379"
Pipeline with Docker Compose
steps:
- script: docker compose -f docker-compose.test.yml up -d --wait
displayName: Start test environment
- script: docker compose -f docker-compose.test.yml exec app node seeds/seed-test-data.js
displayName: Seed test data
- script: docker compose -f docker-compose.test.yml exec app npm test
displayName: Run tests
continueOnError: true
- task: PublishTestResults@2
inputs:
testResultsFormat: JUnit
testResultsFiles: "**/junit.xml"
condition: always()
- script: docker compose -f docker-compose.test.yml down -v
displayName: Tear down test environment
condition: always()
The -v flag removes volumes on teardown, ensuring the next test run starts with a clean database.
Ephemeral Cloud Environments
For integration and end-to-end tests, provision ephemeral cloud environments that are created per pipeline run and destroyed after:
// infra/provision-test-env.js
var https = require("https");
var DO_TOKEN = process.env.DIGITALOCEAN_TOKEN;
var BUILD_ID = process.env.BUILD_BUILDID || "local";
var ENV_NAME = "test-" + BUILD_ID;
function doRequest(method, path, body) {
return new Promise(function (resolve, reject) {
var options = {
hostname: "api.digitalocean.com",
path: "/v2" + path,
method: method,
headers: {
"Content-Type": "application/json",
Authorization: "Bearer " + DO_TOKEN,
},
};
var req = https.request(options, function (res) {
var data = "";
res.on("data", function (chunk) { data += chunk; });
res.on("end", function () {
if (res.statusCode >= 200 && res.statusCode < 300) {
resolve(data ? JSON.parse(data) : null);
} else {
reject(new Error(method + " " + path + ": " + res.statusCode + " " + data));
}
});
});
req.on("error", reject);
if (body) { req.write(JSON.stringify(body)); }
req.end();
});
}
function provisionDatabase() {
console.log("Provisioning test database: " + ENV_NAME);
// For ephemeral environments, use a shared managed database
// and create a new database per test run
return doRequest("POST", "/databases/" + process.env.DB_CLUSTER_ID + "/dbs", {
name: ENV_NAME.replace(/-/g, "_"),
}).then(function (response) {
console.log("Database created: " + response.db.name);
return response.db;
});
}
function destroyDatabase() {
var dbName = ENV_NAME.replace(/-/g, "_");
console.log("Destroying test database: " + dbName);
return doRequest(
"DELETE",
"/databases/" + process.env.DB_CLUSTER_ID + "/dbs/" + dbName
).then(function () {
console.log("Database destroyed: " + dbName);
});
}
var action = process.argv[2];
if (action === "provision") {
provisionDatabase()
.then(function (db) {
// Output the connection string for pipeline consumption
var connStr = "postgresql://" + process.env.DB_USER + ":" +
process.env.DB_PASSWORD + "@" + process.env.DB_HOST + ":" +
process.env.DB_PORT + "/" + db.name + "?sslmode=require";
console.log("##vso[task.setvariable variable=TEST_DATABASE_URL]" + connStr);
console.log("Environment provisioned: " + ENV_NAME);
})
.catch(function (err) {
console.error("Provision failed: " + err.message);
process.exit(1);
});
} else if (action === "destroy") {
destroyDatabase()
.catch(function (err) {
console.error("Destroy failed: " + err.message);
process.exit(1);
});
} else {
console.log("Usage: node provision-test-env.js [provision|destroy]");
}
Pipeline with Ephemeral Environments
stages:
- stage: IntegrationTest
variables:
- group: do-credentials
jobs:
- job: RunIntegrationTests
steps:
- script: node infra/provision-test-env.js provision
displayName: Provision test database
env:
DIGITALOCEAN_TOKEN: $(DO_TOKEN)
DB_CLUSTER_ID: $(DB_CLUSTER_ID)
DB_USER: $(DB_USER)
DB_PASSWORD: $(DB_PASSWORD)
DB_HOST: $(DB_HOST)
DB_PORT: $(DB_PORT)
- script: node seeds/seed-test-data.js
displayName: Seed test data
env:
DATABASE_URL: $(TEST_DATABASE_URL)
- script: npm run test:integration
displayName: Run integration tests
env:
DATABASE_URL: $(TEST_DATABASE_URL)
BASE_URL: https://staging.example.com
continueOnError: true
- task: PublishTestResults@2
inputs:
testResultsFormat: JUnit
testResultsFiles: "**/junit.xml"
condition: always()
- script: node infra/provision-test-env.js destroy
displayName: Destroy test database
condition: always()
env:
DIGITALOCEAN_TOKEN: $(DO_TOKEN)
DB_CLUSTER_ID: $(DB_CLUSTER_ID)
Feature Flags in Test Environments
Feature flags add another dimension to environment configuration. Tests must account for feature flag states:
// config/feature-flags.js
var FLAGS = {
development: {
NEW_CHECKOUT: true,
DARK_MODE: true,
AI_RECOMMENDATIONS: true,
BETA_DASHBOARD: true,
},
staging: {
NEW_CHECKOUT: true,
DARK_MODE: true,
AI_RECOMMENDATIONS: false,
BETA_DASHBOARD: false,
},
production: {
NEW_CHECKOUT: false,
DARK_MODE: true,
AI_RECOMMENDATIONS: false,
BETA_DASHBOARD: false,
},
};
function getFlags(env) {
return FLAGS[env] || FLAGS.development;
}
module.exports = { getFlags: getFlags };
In pipeline variable groups, set feature flags to match the environment:
# Variable Group: staging-config
FEATURE_NEW_CHECKOUT: "true"
FEATURE_DARK_MODE: "true"
FEATURE_AI_RECOMMENDATIONS: "false"
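The static defaults above and the FEATURE_* variables injected by the variable groups can drift apart, so it is worth deciding which one wins. A small sketch of letting the environment variables override the defaults, assuming the FEATURE_ prefix convention shown above (this helper is illustrative, not part of the application code earlier):
// config/env-flag-overrides.js -- illustrative helper: environment variables
// such as FEATURE_NEW_CHECKOUT=true override the static per-environment defaults
var featureFlags = require("./feature-flags");
function getEffectiveFlags(env) {
  var flags = Object.assign({}, featureFlags.getFlags(env));
  Object.keys(flags).forEach(function (name) {
    var override = process.env["FEATURE_" + name]; // e.g. FEATURE_NEW_CHECKOUT
    if (override !== undefined) {
      flags[name] = override === "true"; // variable groups deliver strings, not booleans
    }
  });
  return flags;
}
module.exports = { getEffectiveFlags: getEffectiveFlags };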
Tests that depend on feature flags should check the flag state before executing:
var flags = require("../config/feature-flags");
test("new checkout flow displays payment options", function () {
var envFlags = flags.getFlags(process.env.NODE_ENV);
if (!envFlags.NEW_CHECKOUT) {
console.log("Skipping: NEW_CHECKOUT flag is off in " + process.env.NODE_ENV);
return;
}
// Test the new checkout flow
});
Complete Working Example
A comprehensive environment configuration manager that validates configurations, compares environments, and detects drift:
// tools/env-config-manager.js
var https = require("https");
var url = require("url");
var ORG = "my-organization";
var PROJECT = "my-project";
var PAT = process.env.AZURE_DEVOPS_PAT;
var API_VERSION = "7.1";
var BASE_URL = "https://dev.azure.com/" + ORG + "/" + PROJECT;
var AUTH = "Basic " + Buffer.from(":" + PAT).toString("base64");
function makeRequest(method, path) {
return new Promise(function (resolve, reject) {
var fullUrl = path.indexOf("https://") === 0 ? path : BASE_URL + path;
var parsed = url.parse(fullUrl);
var options = {
hostname: parsed.hostname,
path: parsed.path,
method: method,
headers: {
Authorization: AUTH,
Accept: "application/json",
},
};
var req = https.request(options, function (res) {
var data = "";
res.on("data", function (chunk) { data += chunk; });
res.on("end", function () {
if (res.statusCode >= 200 && res.statusCode < 300) {
resolve(data ? JSON.parse(data) : null);
} else {
reject(new Error(res.statusCode + " " + data));
}
});
});
req.on("error", reject);
req.end();
});
}
function getVariableGroups() {
return makeRequest(
"GET",
"/_apis/distributedtask/variablegroups?api-version=" + API_VERSION
).then(function (response) {
return response.value || [];
});
}
function compareEnvironments(env1Name, env2Name) {
return getVariableGroups().then(function (groups) {
var env1 = groups.find(function (g) { return g.name === env1Name; });
var env2 = groups.find(function (g) { return g.name === env2Name; });
if (!env1) { throw new Error("Variable group not found: " + env1Name); }
if (!env2) { throw new Error("Variable group not found: " + env2Name); }
var vars1 = env1.variables || {};
var vars2 = env2.variables || {};
var allKeys = Object.keys(vars1).concat(Object.keys(vars2));
var uniqueKeys = allKeys.filter(function (key, index) {
return allKeys.indexOf(key) === index;
});
console.log("=== Environment Comparison: " + env1Name + " vs " + env2Name + " ===\n");
var differences = [];
var missing = [];
var matching = [];
uniqueKeys.sort().forEach(function (key) {
var val1 = vars1[key] ? (vars1[key].isSecret ? "********" : vars1[key].value) : undefined;
var val2 = vars2[key] ? (vars2[key].isSecret ? "********" : vars2[key].value) : undefined;
if (val1 === undefined) {
missing.push({ key: key, missingFrom: env1Name, presentIn: env2Name });
} else if (val2 === undefined) {
missing.push({ key: key, missingFrom: env2Name, presentIn: env1Name });
} else if (val1 !== val2) {
differences.push({ key: key, env1: val1, env2: val2 });
} else {
matching.push(key);
}
});
if (differences.length > 0) {
console.log("DIFFERENT VALUES:");
differences.forEach(function (d) {
console.log(" " + d.key + ":");
console.log(" " + env1Name + ": " + d.env1);
console.log(" " + env2Name + ": " + d.env2);
});
console.log("");
}
if (missing.length > 0) {
console.log("MISSING VARIABLES:");
missing.forEach(function (m) {
console.log(" " + m.key + " — missing from " + m.missingFrom + " (present in " + m.presentIn + ")");
});
console.log("");
}
console.log("MATCHING: " + matching.length + " variable(s)");
console.log("DIFFERENT: " + differences.length + " variable(s)");
console.log("MISSING: " + missing.length + " variable(s)");
return {
matching: matching,
differences: differences,
missing: missing,
};
});
}
var action = process.argv[2] || "list";
var env1 = process.argv[3];
var env2 = process.argv[4];
if (action === "compare" && env1 && env2) {
compareEnvironments(env1, env2).catch(function (err) {
console.error("Error: " + err.message);
process.exit(1);
});
} else if (action === "list") {
getVariableGroups().then(function (groups) {
console.log("Variable Groups:");
groups.forEach(function (g) {
var varCount = Object.keys(g.variables || {}).length;
var secretCount = Object.keys(g.variables || {}).filter(function (k) {
return g.variables[k].isSecret;
}).length;
console.log(" " + g.name + " (" + varCount + " vars, " + secretCount + " secrets)");
});
}).catch(function (err) {
console.error("Error: " + err.message);
process.exit(1);
});
} else {
console.log("Usage:");
console.log(" node env-config-manager.js list");
console.log(" node env-config-manager.js compare <group1> <group2>");
console.log("");
console.log("Example:");
console.log(' node env-config-manager.js compare "dev-config" "staging-config"');
}
Running the comparison:
$ node env-config-manager.js compare "dev-config" "staging-config"
=== Environment Comparison: dev-config vs staging-config ===
DIFFERENT VALUES:
BASE_URL:
dev-config: https://dev.example.com
staging-config: https://staging.example.com
DB_HOST:
dev-config: dev-db.internal
staging-config: staging-db.internal
DB_NAME:
dev-config: app_dev
staging-config: app_staging
LOG_LEVEL:
dev-config: debug
staging-config: info
REDIS_URL:
dev-config: redis://dev-cache.internal:6379
staging-config: redis://staging-cache.internal:6379
MISSING VARIABLES:
DEBUG_MODE — missing from staging-config (present in dev-config)
MATCHING: 1 variable(s)
DIFFERENT: 5 variable(s)
MISSING: 1 variable(s)
Common Issues and Troubleshooting
Variables Not Available in Pipeline Steps
Variable group variables are available as environment variables in script steps but not directly in task inputs unless you use the $(VARIABLE_NAME) macro syntax. If a variable is marked as secret, it is not available as an environment variable by default -- you must explicitly map it in the env section of the step. Also check that the variable group is linked to the correct stage or job.
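For example, a secret from a linked group has to be mapped explicitly before a script can read it (the group and variable names match the earlier examples):
variables:
  - group: dev-secrets
steps:
  - script: node seeds/seed-test-data.js
    displayName: Seed test database
    env:
      # Secret variables are never exposed to scripts automatically -- map them explicitly
      DB_PASSWORD: $(DB_PASSWORD)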
Environment Approval Blocking Test Runs
When an environment has manual approvals configured and a pipeline reaches a stage targeting that environment, it waits indefinitely for approval. For test environments (dev, QA), remove manual approvals and use automated checks instead. Reserve manual approvals for staging and production environments only.
Database State Leaking Between Test Runs
Parallel pipeline runs can share the same database if the environment configuration points to a shared instance. Tests from Run A and Run B interleave, causing data conflicts. Solutions: (a) use ephemeral databases per pipeline run, (b) use database transactions that roll back after each test, (c) use unique prefixes for test data based on the build ID.
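If ephemeral databases are not an option, option (c) can be as simple as routing every generated name and email through a build-scoped prefix helper; a sketch with a hypothetical helper module (the path and function names are illustrative):
// tests/helpers/test-prefix.js -- hypothetical helper for build-scoped test data
var BUILD_ID = process.env.BUILD_BUILDID || "local";
var PREFIX = "run" + BUILD_ID;
// "Test Project Alpha" -> "Test Project Alpha (run20481)"
function prefixedName(name) {
  return name + " (" + PREFIX + ")";
}
// "admin" -> "admin-run20481@test.com", still matching the '%@test.com' cleanup pattern
function prefixedEmail(localPart) {
  return localPart + "-" + PREFIX + "@test.com";
}
// Cleanup can then target only this run's records, e.g.
//   DELETE FROM users WHERE email LIKE '%-run20481@test.com'
module.exports = { prefixedName: prefixedName, prefixedEmail: prefixedEmail, PREFIX: PREFIX };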
Secrets Rotation Breaking Pipelines
When secrets are rotated in Key Vault, pipelines that use variable groups linked to Key Vault automatically pick up the new values -- on the next run. Currently running pipelines use the values fetched at pipeline start; if a secret is rotated mid-run, the pipeline continues with the old value. For zero-downtime rotation, add the new secret as a new version (or under a new name) first, confirm that pipelines and the application can read it, and only then retire the old value.
Best Practices
Use variable groups per environment, not per pipeline. Multiple pipelines that deploy to staging should share the same staging-config variable group. This ensures consistency and reduces the places you need to update when a configuration changes.
Never hard-code environment URLs in tests. Always use environment variables: process.env.BASE_URL, not "https://staging.example.com". Tests should be environment-agnostic -- the same test code runs against dev, staging, and production.
Provision and destroy test infrastructure per pipeline run. Shared persistent test environments accumulate stale data, orphaned records, and configuration drift. Ephemeral environments created per run start clean and are destroyed after, eliminating "it worked yesterday" problems.
Store secrets in Azure Key Vault, not variable groups. Variable group secrets are stored in Azure DevOps. Key Vault secrets are stored in Azure's dedicated secrets management service with audit logging, rotation policies, and access controls. Link variable groups to Key Vault for production-grade secret management.
Compare environment configurations regularly. Use the comparison script to detect configuration drift between dev, staging, and production. A missing variable in staging that exists in dev will cause a deployment failure that could have been caught in advance.
Seed test data at the beginning of every test run. Do not rely on previously seeded data. Previous test runs may have modified or deleted records. A deterministic seed step at the start of every run ensures consistent test conditions.
Use Docker Compose for local test environments. Developers should be able to spin up a complete test environment locally with a single command. Docker Compose files should mirror the CI environment configuration as closely as possible.
Tag and label ephemeral resources. When provisioning cloud resources per pipeline run, tag them with the build ID and pipeline name. This makes orphaned resources (from failed cleanup steps) easy to identify and delete.
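The ephemeral databases created earlier already embed the build ID in their names, which makes an orphan sweep straightforward. A rough sketch that lists leftover test databases in the DigitalOcean cluster, reusing the request pattern from provision-test-env.js (the /dbs listing endpoint and the test_ prefix follow that script's conventions; the script path is illustrative):
// infra/list-orphaned-test-dbs.js -- rough sketch: lists logical databases that
// look like leftovers from provision-test-env.js (names starting with "test_").
// To remove one, run provision-test-env.js destroy with BUILD_BUILDID set to
// the orphan's build id.
var https = require("https");
var DO_TOKEN = process.env.DIGITALOCEAN_TOKEN;
var CLUSTER_ID = process.env.DB_CLUSTER_ID;
function doRequest(method, path) {
  return new Promise(function (resolve, reject) {
    var req = https.request({
      hostname: "api.digitalocean.com",
      path: "/v2" + path,
      method: method,
      headers: { Authorization: "Bearer " + DO_TOKEN },
    }, function (res) {
      var data = "";
      res.on("data", function (chunk) { data += chunk; });
      res.on("end", function () {
        if (res.statusCode >= 200 && res.statusCode < 300) {
          resolve(data ? JSON.parse(data) : null);
        } else {
          reject(new Error(method + " " + path + ": " + res.statusCode + " " + data));
        }
      });
    });
    req.on("error", reject);
    req.end();
  });
}
doRequest("GET", "/databases/" + CLUSTER_ID + "/dbs")
  .then(function (response) {
    var candidates = (response.dbs || []).filter(function (db) {
      return db.name.indexOf("test_") === 0; // matches ENV_NAME "test-<buildid>" after dash replacement
    });
    if (candidates.length === 0) {
      console.log("No ephemeral test databases found.");
      return;
    }
    console.log("Ephemeral test databases still present:");
    candidates.forEach(function (db) {
      console.log(" " + db.name);
    });
  })
  .catch(function (err) {
    console.error("Listing failed: " + err.message);
    process.exit(1);
  });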