Test Environments and Configuration Management
Manage test environments in Azure DevOps with approvals, health checks, configuration management, and automated provisioning
Overview
Test environments are the backbone of any serious deployment pipeline. Without properly configured, isolated environments that mirror production, you are flying blind and will eventually ship broken code. Azure DevOps provides a structured approach to environment management that integrates approvals, health checks, and deployment strategies directly into your pipeline, and when you combine that with Node.js tooling for provisioning and configuration, you get a system that scales from a two-person startup to an enterprise team.
Prerequisites
- Azure DevOps organization with a project configured
- Node.js v16 or later installed
- Basic familiarity with Azure DevOps Pipelines (YAML)
- Access to create environments in your Azure DevOps project
- An Azure subscription (for Kubernetes or VM resources)
- The azure-devops-node-api package installed
Azure DevOps Environments Overview
An environment in Azure DevOps is a collection of resources that you target for deployment. Unlike variable groups or service connections, environments are first-class citizens in the pipeline ecosystem. They carry their own history, approvals, and checks. Every deployment job that targets an environment gets tracked independently, so you can see exactly what was deployed where and when.
The typical progression looks like this:
Dev -> QA -> Staging -> Production
Each environment can have different approval gates, different resource configurations, and different variable sets. The key insight is that environments are not just labels. They are enforceable boundaries in your pipeline. A deployment cannot proceed to Staging unless it has passed through QA with the required approvals.
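That ordering can also be encoded in your own tooling, so scripts refuse out-of-order promotions before they ever reach a pipeline. A minimal sketch (the stage names and the helper itself are illustrative, not an Azure DevOps API):

```javascript
// Hypothetical helper that encodes the promotion order as data.
// A valid promotion moves exactly one step forward in the chain.
var PROMOTION_ORDER = ["Dev", "QA", "Staging", "Production"];

function canPromote(fromStage, toStage) {
  var from = PROMOTION_ORDER.indexOf(fromStage);
  var to = PROMOTION_ORDER.indexOf(toStage);
  // Unknown stages are rejected outright.
  if (from === -1 || to === -1) return false;
  return to === from + 1;
}
```

With this in place, a deployment script can fail fast on a skipped stage instead of relying solely on pipeline configuration.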
Azure DevOps supports two primary resource types within environments:
- Kubernetes - Target namespaces in AKS or any Kubernetes cluster
- Virtual Machines - Target individual VMs or VM scale sets
You can also use environments without any linked resources, purely as logical groupings with approval gates.
Creating and Managing Environments
You can create environments through the Azure DevOps UI, but for teams that manage dozens of environments, the REST API is the way to go. Here is how you create environments programmatically with Node.js:
var azdev = require("azure-devops-node-api");

var orgUrl = "https://dev.azure.com/your-organization";
var token = process.env.AZURE_DEVOPS_PAT;

function createEnvironment(projectName, envName, description) {
  var authHandler = azdev.getPersonalAccessTokenHandler(token);
  var connection = new azdev.WebApi(orgUrl, authHandler);
  return connection.connect().then(function () {
    return connection.rest.create(
      orgUrl + "/" + projectName + "/_apis/distributedtask/environments?api-version=7.1",
      {
        name: envName,
        description: description
      }
    );
  }).then(function (response) {
    console.log("Created environment:", response.result.name);
    console.log("Environment ID:", response.result.id);
    return response.result;
  });
}

createEnvironment("MyProject", "QA-Integration", "QA integration testing environment");
For managing multiple environments, I recommend a configuration file that defines all your environments declaratively:
var environments = {
  dev: {
    name: "Development",
    description: "Development environment for feature branches",
    approvals: [],
    checks: ["build-validation"],
    variables: {
      API_URL: "https://dev-api.example.com",
      LOG_LEVEL: "debug",
      FEATURE_FLAGS_ENABLED: "true"
    }
  },
  qa: {
    name: "QA-Integration",
    description: "QA integration testing environment",
    approvals: ["[email protected]"],
    checks: ["build-validation", "unit-tests", "integration-tests"],
    variables: {
      API_URL: "https://qa-api.example.com",
      LOG_LEVEL: "info",
      FEATURE_FLAGS_ENABLED: "true"
    }
  },
  staging: {
    name: "Staging",
    description: "Pre-production staging environment",
    approvals: ["[email protected]", "[email protected]"],
    checks: ["build-validation", "unit-tests", "integration-tests", "performance-tests"],
    variables: {
      API_URL: "https://staging-api.example.com",
      LOG_LEVEL: "warn",
      FEATURE_FLAGS_ENABLED: "false"
    }
  },
  production: {
    name: "Production",
    description: "Live production environment",
    approvals: ["[email protected]", "[email protected]"],
    checks: ["build-validation", "unit-tests", "integration-tests", "performance-tests", "security-scan"],
    variables: {
      API_URL: "https://api.example.com",
      LOG_LEVEL: "error",
      FEATURE_FLAGS_ENABLED: "false"
    }
  }
};
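Before provisioning from a declarative map like this, it pays to validate it. Here is a minimal sketch; the rules it enforces (an API_URL variable everywhere, at least one approver on production) are example policy of my own, not anything Azure DevOps requires:

```javascript
// Validates a declarative environment map before provisioning.
// The rules below are illustrative policy, not Azure DevOps requirements.
function validateEnvironments(environments) {
  var errors = [];
  Object.keys(environments).forEach(function (key) {
    var env = environments[key];
    if (!env.name) {
      errors.push(key + ": missing name");
    }
    if (!env.variables || !env.variables.API_URL) {
      errors.push(key + ": missing API_URL variable");
    }
    // Example policy: production must have at least one approver.
    if (key === "production" && (!env.approvals || env.approvals.length === 0)) {
      errors.push(key + ": production requires at least one approver");
    }
  });
  return { valid: errors.length === 0, errors: errors };
}
```

Running this in CI before any provisioning call turns a misconfigured map into a failed build rather than a half-provisioned environment.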
Environment Approvals and Checks
Approvals and checks are where environments become genuinely useful. Without them, an environment is just a label. With them, it is a gate that prevents bad code from reaching users.
Azure DevOps supports several types of checks:
- Manual approvals - A designated person must approve the deployment
- Branch control - Only specific branches can deploy to this environment
- Business hours - Deployments only allowed during certain hours
- Template validation - Pipeline must extend from an approved template
- Invoke Azure Function - Run custom validation logic
- REST API check - Call an external endpoint to validate readiness
Here is a pipeline that uses environment approvals across stages:
trigger:
  branches:
    include:
      - main

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '18.x'
          - script: |
              npm ci
              npm run build
              npm test
            displayName: 'Build and Test'
          - publish: $(System.DefaultWorkingDirectory)/dist
            artifact: app-artifact

  - stage: DeployQA
    dependsOn: Build
    jobs:
      - deployment: DeployToQA
        environment: 'QA-Integration'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: app-artifact
                - script: |
                    echo "Deploying to QA environment"
                    npm run deploy:qa

  - stage: DeployStaging
    dependsOn: DeployQA
    jobs:
      - deployment: DeployToStaging
        environment: 'Staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: app-artifact
                - script: |
                    echo "Deploying to Staging environment"
                    npm run deploy:staging

  - stage: DeployProduction
    dependsOn: DeployStaging
    jobs:
      - deployment: DeployToProduction
        environment: 'Production'
        strategy:
          canary:
            increments: [10, 25, 50, 100]
            deploy:
              steps:
                - download: current
                  artifact: app-artifact
                - script: |
                    echo "Deploying to Production (canary)"
                    npm run deploy:production
Notice how each stage targets a different environment. When the pipeline reaches DeployStaging, it pauses and waits for the configured approvers to sign off. This is not optional: the pipeline cannot proceed without approval.
Kubernetes and VM Resource Types
When you attach Kubernetes resources to an environment, Azure DevOps gains visibility into what is actually running in your cluster. You can see pod status, deployment history, and even drill into workload details from the environment view.
To register a Kubernetes resource:
# In your pipeline, reference the environment with a Kubernetes resource
jobs:
  - deployment: DeployToK8s
    environment: 'QA-Integration.qa-namespace'
    strategy:
      runOnce:
        deploy:
          steps:
            - task: KubernetesManifest@0
              inputs:
                action: deploy
                kubernetesServiceConnection: 'my-k8s-connection'
                namespace: 'qa-namespace'
                manifests: |
                  $(Pipeline.Workspace)/manifests/deployment.yaml
                  $(Pipeline.Workspace)/manifests/service.yaml
The dot notation (QA-Integration.qa-namespace) links the Kubernetes namespace directly to the environment. For VM resources, you install the Azure DevOps agent on the target VM and register it with the environment. This gives you the same deployment tracking and approval gates for traditional VM-based infrastructure.
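If you generate pipeline YAML from scripts, it helps to build and sanity-check that dot-notation target string in one place. A small sketch; the character whitelist is my own assumption about safe names, not a documented constraint:

```javascript
// Builds an "environment.resource" target string for a deployment job,
// e.g. buildEnvironmentTarget("QA-Integration", "qa-namespace").
// The allowed-character rule below is an assumed convention, not a documented one.
function buildEnvironmentTarget(envName, resourceName) {
  var safe = /^[A-Za-z0-9][A-Za-z0-9_-]*$/;
  if (!safe.test(envName)) {
    throw new Error("Invalid environment name: " + envName);
  }
  if (!resourceName) {
    return envName; // environment-only target, no linked resource
  }
  if (!safe.test(resourceName)) {
    throw new Error("Invalid resource name: " + resourceName);
  }
  return envName + "." + resourceName;
}
```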
Deployment Strategies per Environment
Not every environment needs the same deployment strategy. In fact, using the same strategy everywhere is a missed opportunity. Here is what I recommend:
| Environment | Strategy | Reason |
|---|---|---|
| Dev | runOnce | Speed matters, no users affected |
| QA | runOnce | Fast feedback for testers |
| Staging | rolling | Mimics production behavior |
| Production | canary or blue-green | Minimize blast radius |
The rolling strategy is particularly useful for staging because it validates that your deployment process handles incremental updates correctly before you try it in production.
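To see what canary increments mean in concrete terms, here is a sketch that translates cumulative percentage increments into instance counts for a given fleet size. This is plain arithmetic for illustration, not an Azure DevOps API; the rounding choice (ceiling) is an assumption:

```javascript
// Given a fleet size and cumulative percentage increments (as in the
// canary strategy earlier), compute how many instances each step covers.
function canarySteps(totalInstances, increments) {
  var previous = 0;
  return increments.map(function (percent) {
    var cumulative = Math.ceil((totalInstances * percent) / 100);
    var step = {
      percent: percent,
      cumulative: cumulative,      // total instances running the new version
      added: cumulative - previous // instances updated in this step
    };
    previous = cumulative;
    return step;
  });
}
```

For a 20-instance fleet with increments [10, 25, 50, 100], this yields cumulative counts of 2, 5, 10, and 20 instances, which makes the blast radius of each step explicit.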
Environment Variables and Configuration
Configuration management across environments is where most teams stumble. The common mistake is scattering configuration across pipeline variables, Azure Key Vault, app settings, and config files with no single source of truth.
My approach is to use a configuration hierarchy. Here is a Node.js module that resolves configuration based on the current environment:
var https = require("https");

function ConfigManager(options) {
  this.environment = options.environment || process.env.NODE_ENV || "development";
  this.orgUrl = options.orgUrl;
  this.project = options.project;
  this.token = options.token;
  this.cache = {};
  this.cacheExpiry = {};
  this.cacheTTL = options.cacheTTL || 300000; // 5 minutes
}

ConfigManager.prototype.getConfig = function (key) {
  var self = this;
  var cacheKey = self.environment + ":" + key;
  if (self.cache[cacheKey] && Date.now() < self.cacheExpiry[cacheKey]) {
    return Promise.resolve(self.cache[cacheKey]);
  }
  return self._fetchFromVariableGroup(key).then(function (value) {
    if (value !== undefined) {
      return value;
    }
    return self._fetchFromEnvironment(key);
  }).then(function (value) {
    if (value !== undefined) {
      self.cache[cacheKey] = value;
      self.cacheExpiry[cacheKey] = Date.now() + self.cacheTTL;
    }
    return value;
  });
};

ConfigManager.prototype._fetchFromVariableGroup = function (key) {
  var url = this.orgUrl + "/" + this.project +
    "/_apis/distributedtask/variablegroups?groupName=" +
    this.environment + "-config&api-version=7.1";
  return this._makeRequest(url).then(function (data) {
    if (data.count > 0 && data.value[0].variables[key]) {
      return data.value[0].variables[key].value;
    }
    return undefined;
  });
};

ConfigManager.prototype._fetchFromEnvironment = function (key) {
  return Promise.resolve(process.env[key]);
};

ConfigManager.prototype._makeRequest = function (url) {
  var self = this;
  return new Promise(function (resolve, reject) {
    var urlObj = new URL(url);
    var options = {
      hostname: urlObj.hostname,
      path: urlObj.pathname + urlObj.search,
      method: "GET",
      headers: {
        "Authorization": "Basic " + Buffer.from(":" + self.token).toString("base64"),
        "Content-Type": "application/json"
      }
    };
    var req = https.request(options, function (res) {
      var body = "";
      res.on("data", function (chunk) { body += chunk; });
      res.on("end", function () {
        try {
          resolve(JSON.parse(body));
        } catch (e) {
          reject(new Error("Failed to parse response: " + body));
        }
      });
    });
    req.on("error", reject);
    req.end();
  });
};

module.exports = ConfigManager;
This gives you a layered configuration system. Variable groups in Azure DevOps take priority, falling back to process environment variables. The cache prevents hammering the API on every config lookup.
Linking Test Plans to Environments
One of the most powerful but underused features in Azure DevOps is linking test plans to specific environments. This lets you track which tests have been executed against which environment, which is critical for compliance and audit trails.
var azdev = require("azure-devops-node-api");

var orgUrl = "https://dev.azure.com/your-organization";

function linkTestPlanToEnvironment(projectName, testPlanId, environmentId) {
  var authHandler = azdev.getPersonalAccessTokenHandler(process.env.AZURE_DEVOPS_PAT);
  var connection = new azdev.WebApi(orgUrl, authHandler);
  return connection.connect().then(function () {
    return connection.rest.update(
      orgUrl + "/" + projectName +
        "/_apis/test/plans/" + testPlanId +
        "?api-version=7.1",
      {
        buildEnvironmentId: environmentId,
        description: "Linked to environment " + environmentId
      }
    );
  }).then(function (response) {
    console.log("Test plan linked to environment successfully");
    return response.result;
  });
}
When test runs execute against a linked environment, the results appear in the environment's deployment history. This creates a complete audit trail showing what was deployed, what was tested, and whether it passed.
Environment Health Monitoring
Deploying to an environment that is already unhealthy is a waste of everyone's time. Health checks should run before deployment begins and again after deployment completes. Here is a health check module:
var http = require("http");
var https = require("https");

function HealthChecker(options) {
  this.endpoints = options.endpoints || [];
  this.timeout = options.timeout || 10000;
  this.retries = options.retries || 3;
  this.retryDelay = options.retryDelay || 2000;
}

HealthChecker.prototype.checkAll = function () {
  var self = this;
  var promises = self.endpoints.map(function (endpoint) {
    return self._checkEndpoint(endpoint);
  });
  return Promise.all(promises).then(function (results) {
    var healthy = results.every(function (r) { return r.healthy; });
    return {
      healthy: healthy,
      timestamp: new Date().toISOString(),
      checks: results
    };
  });
};

HealthChecker.prototype._checkEndpoint = function (endpoint, attempt) {
  var self = this;
  attempt = attempt || 1;
  return new Promise(function (resolve) {
    var protocol = endpoint.url.startsWith("https") ? https : http;
    var startTime = Date.now();
    var req = protocol.get(endpoint.url, { timeout: self.timeout }, function (res) {
      var duration = Date.now() - startTime;
      var body = "";
      res.on("data", function (chunk) { body += chunk; });
      res.on("end", function () {
        var healthy = res.statusCode >= 200 && res.statusCode < 300;
        if (endpoint.expectedBody) {
          healthy = healthy && body.indexOf(endpoint.expectedBody) !== -1;
        }
        if (!healthy && attempt < self.retries) {
          setTimeout(function () {
            self._checkEndpoint(endpoint, attempt + 1).then(resolve);
          }, self.retryDelay);
          return;
        }
        resolve({
          name: endpoint.name,
          url: endpoint.url,
          healthy: healthy,
          statusCode: res.statusCode,
          responseTime: duration,
          attempt: attempt
        });
      });
    });
    req.on("error", function (err) {
      if (attempt < self.retries) {
        setTimeout(function () {
          self._checkEndpoint(endpoint, attempt + 1).then(resolve);
        }, self.retryDelay);
        return;
      }
      resolve({
        name: endpoint.name,
        url: endpoint.url,
        healthy: false,
        error: err.message,
        attempt: attempt
      });
    });
    req.on("timeout", function () {
      // Destroy with an error so the "error" handler fires and retry logic runs.
      req.destroy(new Error("Request timed out after " + self.timeout + "ms"));
    });
  });
};

module.exports = HealthChecker;
Integrate this into your pipeline as a pre-deployment gate:
steps:
  - script: |
      node -e "
      var HealthChecker = require('./tools/health-checker');
      var checker = new HealthChecker({
        endpoints: [
          { name: 'API', url: '$(API_URL)/health' },
          { name: 'Database', url: '$(API_URL)/health/db' },
          { name: 'Cache', url: '$(API_URL)/health/cache' }
        ]
      });
      checker.checkAll().then(function(result) {
        if (!result.healthy) {
          console.error('Environment unhealthy:', JSON.stringify(result, null, 2));
          process.exit(1);
        }
        console.log('All health checks passed');
      }).catch(function(err) {
        console.error('Health check error:', err.message);
        process.exit(1);
      });
      "
    displayName: 'Pre-deployment Health Check'
Configuration Management Across Environments
The configuration hierarchy I recommend has four layers, in order of precedence:
- Secrets (Azure Key Vault) - Database passwords, API keys, certificates
- Environment-specific (Variable Groups) - URLs, feature flags, resource limits
- Shared defaults (Config files) - Timeouts, retry policies, log formats
- Application defaults (Code) - Fallback values baked into the application
Here is how to wire this up in a Node.js application:
var fs = require("fs");
var path = require("path");

function loadConfig(environment) {
  var defaults = {
    port: 3000,
    logLevel: "info",
    requestTimeout: 30000,
    maxRetries: 3,
    cacheEnabled: true,
    cacheTTL: 600
  };

  var sharedConfigPath = path.join(__dirname, "config", "shared.json");
  var envConfigPath = path.join(__dirname, "config", environment + ".json");
  var shared = {};
  var envSpecific = {};

  if (fs.existsSync(sharedConfigPath)) {
    shared = JSON.parse(fs.readFileSync(sharedConfigPath, "utf8"));
  }
  if (fs.existsSync(envConfigPath)) {
    envSpecific = JSON.parse(fs.readFileSync(envConfigPath, "utf8"));
  }

  var config = Object.assign({}, defaults, shared, envSpecific);

  // Environment variables override everything except secrets
  Object.keys(process.env).forEach(function (key) {
    if (key.startsWith("APP_")) {
      var configKey = key.replace("APP_", "").toLowerCase().replace(/_([a-z])/g, function (m, c) {
        return c.toUpperCase();
      });
      config[configKey] = process.env[key];
    }
  });

  return config;
}

module.exports = loadConfig;
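The APP_ prefix convention is worth understanding in isolation. This sketch extracts just the key-mapping step from loadConfig so you can see how a variable like APP_REQUEST_TIMEOUT becomes the requestTimeout config key:

```javascript
// Mirrors the env-var-to-config-key mapping used in loadConfig:
// strip the APP_ prefix, lowercase the rest, then camelCase on underscores.
function toConfigKey(envVarName) {
  return envVarName
    .replace("APP_", "")
    .toLowerCase()
    .replace(/_([a-z])/g, function (match, letter) {
      return letter.toUpperCase();
    });
}
```

So setting APP_LOG_LEVEL=debug in a pipeline variable group overrides the logLevel key without touching any config file.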
Feature Flags for Test Environments
Feature flags are essential for test environments. They let QA test new features in isolation without waiting for the entire release to be ready. Here is a minimal feature flag system backed by Azure DevOps variable groups:
function FeatureFlagManager(configManager) {
  this.configManager = configManager;
  this.flags = {};
  this.loaded = false;
}

FeatureFlagManager.prototype.load = function () {
  var self = this;
  return self.configManager.getConfig("FEATURE_FLAGS").then(function (flagsJson) {
    try {
      self.flags = JSON.parse(flagsJson || "{}");
    } catch (e) {
      console.error("Failed to parse feature flags:", e.message);
      self.flags = {};
    }
    self.loaded = true;
    return self.flags;
  });
};

FeatureFlagManager.prototype.isEnabled = function (flagName) {
  if (!this.loaded) {
    throw new Error("Feature flags not loaded. Call load() first.");
  }
  return this.flags[flagName] === true;
};

FeatureFlagManager.prototype.getVariant = function (flagName) {
  if (!this.loaded) {
    throw new Error("Feature flags not loaded. Call load() first.");
  }
  return this.flags[flagName];
};

module.exports = FeatureFlagManager;
In your test environment, you might enable an experimental search algorithm:
var flags = new FeatureFlagManager(configManager);

flags.load().then(function () {
  if (flags.isEnabled("experimental-search")) {
    app.use("/api/search", require("./routes/search-v2"));
  } else {
    app.use("/api/search", require("./routes/search-v1"));
  }
});
Database Management Across Environments
Every test environment needs its own database, and those databases need to stay in sync with production's schema while containing appropriate test data. Here is a database migration coordinator:
var childProcess = require("child_process");

function DatabaseManager(options) {
  this.environment = options.environment;
  this.connectionString = options.connectionString;
  this.migrationsDir = options.migrationsDir || "./migrations";
  this.seedDir = options.seedDir || "./seeds";
}

DatabaseManager.prototype.migrate = function () {
  var self = this;
  console.log("Running migrations for " + self.environment);
  return new Promise(function (resolve, reject) {
    var proc = childProcess.spawn("npx", [
      "db-migrate", "up",
      "--config", "./database.json",
      "--env", self.environment
    ], {
      stdio: "inherit",
      env: Object.assign({}, process.env, {
        DATABASE_URL: self.connectionString
      })
    });
    proc.on("close", function (code) {
      if (code === 0) {
        console.log("Migrations completed successfully");
        resolve();
      } else {
        reject(new Error("Migration failed with exit code " + code));
      }
    });
  });
};

DatabaseManager.prototype.seed = function () {
  var self = this;
  if (self.environment === "production") {
    console.log("Refusing to seed production database");
    return Promise.reject(new Error("Cannot seed production"));
  }
  console.log("Seeding " + self.environment + " database");
  return new Promise(function (resolve, reject) {
    var proc = childProcess.spawn("npx", [
      "db-migrate", "seed:run",
      "--config", "./database.json",
      "--env", self.environment
    ], {
      stdio: "inherit",
      env: Object.assign({}, process.env, {
        DATABASE_URL: self.connectionString
      })
    });
    proc.on("close", function (code) {
      if (code === 0) {
        console.log("Seeding completed successfully");
        resolve();
      } else {
        reject(new Error("Seeding failed with exit code " + code));
      }
    });
  });
};

DatabaseManager.prototype.reset = function () {
  var self = this;
  if (self.environment === "production" || self.environment === "staging") {
    return Promise.reject(new Error("Cannot reset " + self.environment + " database"));
  }
  return self.migrate().then(function () {
    return self.seed();
  });
};

module.exports = DatabaseManager;
The safety rails here are important. The seed method refuses to run against production, and reset will not touch staging or production. I have seen teams accidentally wipe production data because they did not have these guards. Do not be that team.
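Those guards are easy to centralize so every destructive tool applies the same rules. A sketch of a shared guard; the environment classification follows the names used in this article and is otherwise an assumption:

```javascript
// Environments where destructive operations (seed, reset, truncate) must
// never run. Centralizing the list keeps every tool's guard consistent.
var PROTECTED_FROM_DESTRUCTIVE_OPS = ["production", "staging"];

function assertDestructiveAllowed(environment, operation) {
  var env = String(environment || "").toLowerCase();
  if (PROTECTED_FROM_DESTRUCTIVE_OPS.indexOf(env) !== -1) {
    throw new Error("Refusing to run '" + operation + "' against " + environment);
  }
}
```

A DatabaseManager method can then open with assertDestructiveAllowed(this.environment, "reset") instead of re-implementing the check.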
Environment Provisioning Automation with Node.js
For ephemeral test environments, like spinning up a fresh environment per pull request, you need automation that handles the full lifecycle. Here is a provisioning module that creates an environment, configures it, runs health checks, and tears it down:
var azdev = require("azure-devops-node-api");

function EnvironmentProvisioner(options) {
  this.orgUrl = options.orgUrl;
  this.project = options.project;
  this.token = options.token;
  this.authHandler = azdev.getPersonalAccessTokenHandler(options.token);
  this.connection = new azdev.WebApi(options.orgUrl, this.authHandler);
}

EnvironmentProvisioner.prototype.provision = function (envConfig) {
  var self = this;
  var envId;
  console.log("Provisioning environment: " + envConfig.name);
  return self.connection.connect()
    .then(function () {
      return self._createEnvironment(envConfig);
    })
    .then(function (env) {
      envId = env.id;
      console.log("Environment created with ID: " + envId);
      return self._configureApprovals(envId, envConfig.approvals || []);
    })
    .then(function () {
      return self._setVariables(envConfig.name, envConfig.variables || {});
    })
    .then(function () {
      console.log("Environment provisioned successfully: " + envConfig.name);
      return { id: envId, name: envConfig.name, status: "provisioned" };
    })
    .catch(function (err) {
      console.error("Failed to provision environment:", err.message);
      if (envId) {
        return self.deprovision(envId).then(function () {
          throw err;
        });
      }
      throw err;
    });
};

EnvironmentProvisioner.prototype._createEnvironment = function (envConfig) {
  return this.connection.rest.create(
    this.orgUrl + "/" + this.project +
      "/_apis/distributedtask/environments?api-version=7.1",
    {
      name: envConfig.name,
      description: envConfig.description || "Auto-provisioned environment"
    }
  ).then(function (response) {
    return response.result;
  });
};

EnvironmentProvisioner.prototype._configureApprovals = function (envId, approvers) {
  if (approvers.length === 0) {
    return Promise.resolve();
  }
  var approvalConfig = {
    type: { id: 8, name: "Approval" },
    settings: {
      approvers: approvers.map(function (email) {
        return { uniqueName: email };
      }),
      executionOrder: 1,
      minRequiredApprovers: 1,
      instructions: "Please review the deployment"
    }
  };
  return this.connection.rest.create(
    this.orgUrl + "/" + this.project +
      "/_apis/pipelines/checks/configurations?api-version=7.1-preview.1",
    Object.assign({}, approvalConfig, {
      resource: {
        type: "environment",
        id: String(envId)
      }
    })
  );
};

EnvironmentProvisioner.prototype._setVariables = function (envName, variables) {
  var variableGroup = {
    name: envName + "-config",
    description: "Configuration for " + envName,
    variables: {}
  };
  Object.keys(variables).forEach(function (key) {
    variableGroup.variables[key] = { value: variables[key] };
  });
  return this.connection.rest.create(
    this.orgUrl + "/" + this.project +
      "/_apis/distributedtask/variablegroups?api-version=7.1",
    variableGroup
  );
};

EnvironmentProvisioner.prototype.deprovision = function (envId) {
  var self = this;
  console.log("Deprovisioning environment: " + envId);
  return self.connection.rest.del(
    self.orgUrl + "/" + self.project +
      "/_apis/distributedtask/environments/" + envId + "?api-version=7.1"
  ).then(function () {
    console.log("Environment deprovisioned: " + envId);
  });
};

module.exports = EnvironmentProvisioner;
Cleanup and Lifecycle Management
Ephemeral environments that are never cleaned up will drain your infrastructure budget and create confusion. Here is a cleanup scheduler that identifies stale environments and removes them:
function EnvironmentLifecycleManager(options) {
  this.provisioner = options.provisioner;
  this.maxAgeHours = options.maxAgeHours || 72;
  this.protectedEnvironments = options.protectedEnvironments || [
    "Development", "QA-Integration", "Staging", "Production"
  ];
}

EnvironmentLifecycleManager.prototype.cleanupStale = function () {
  var self = this;
  return self._listEnvironments().then(function (environments) {
    var now = Date.now();
    var stale = environments.filter(function (env) {
      if (self.protectedEnvironments.indexOf(env.name) !== -1) {
        return false;
      }
      var created = new Date(env.createdOn).getTime();
      var ageHours = (now - created) / (1000 * 60 * 60);
      return ageHours > self.maxAgeHours;
    });
    console.log("Found " + stale.length + " stale environments");
    var chain = Promise.resolve();
    stale.forEach(function (env) {
      chain = chain.then(function () {
        console.log("Cleaning up: " + env.name + " (ID: " + env.id + ")");
        return self.provisioner.deprovision(env.id);
      });
    });
    return chain.then(function () {
      return { cleaned: stale.length };
    });
  });
};

EnvironmentLifecycleManager.prototype._listEnvironments = function () {
  return this.provisioner.connection.rest.get(
    this.provisioner.orgUrl + "/" + this.provisioner.project +
      "/_apis/distributedtask/environments?api-version=7.1"
  ).then(function (response) {
    return response.result.value || [];
  });
};

module.exports = EnvironmentLifecycleManager;
Schedule this to run daily through a pipeline or a cron job. The protectedEnvironments list ensures you never accidentally delete your permanent environments.
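The staleness rule itself becomes trivially unit-testable if you pull it out as a pure function with an injectable clock. A sketch mirroring the filter logic above:

```javascript
// Pure staleness predicate: protected environments are never stale, and
// everything else expires once its age exceeds maxAgeHours. Passing the
// current time in (nowMs) keeps the function deterministic for tests.
function isStale(env, maxAgeHours, protectedEnvironments, nowMs) {
  if (protectedEnvironments.indexOf(env.name) !== -1) {
    return false;
  }
  var ageHours = (nowMs - new Date(env.createdOn).getTime()) / (1000 * 60 * 60);
  return ageHours > maxAgeHours;
}
```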
Complete Working Example
Here is a complete Node.js tool that ties everything together. It provisions test environments, manages configurations, runs health checks, and coordinates test execution:
var EnvironmentProvisioner = require("./lib/provisioner");
var HealthChecker = require("./lib/health-checker");
var ConfigManager = require("./lib/config-manager");
var DatabaseManager = require("./lib/database-manager");
var FeatureFlagManager = require("./lib/feature-flags");
var EnvironmentLifecycleManager = require("./lib/lifecycle");

var orgUrl = "https://dev.azure.com/your-organization";
var project = "MyProject";
var token = process.env.AZURE_DEVOPS_PAT;

var provisioner = new EnvironmentProvisioner({
  orgUrl: orgUrl,
  project: project,
  token: token
});

var lifecycle = new EnvironmentLifecycleManager({
  provisioner: provisioner,
  maxAgeHours: 48,
  protectedEnvironments: ["Development", "QA-Integration", "Staging", "Production"]
});

function runTestEnvironmentPipeline(branchName) {
  var envName = "PR-" + branchName.replace(/[^a-zA-Z0-9]/g, "-").substring(0, 40);
  var envId;
  var configManager;
  var healthChecker;
  var dbManager;

  console.log("=== Starting Test Environment Pipeline ===");
  console.log("Branch: " + branchName);
  console.log("Environment: " + envName);
  console.log("");

  // Step 1: Provision environment
  return provisioner.provision({
    name: envName,
    description: "Test environment for branch: " + branchName,
    approvals: [],
    variables: {
      API_URL: "https://" + envName.toLowerCase() + ".test.example.com",
      LOG_LEVEL: "debug",
      NODE_ENV: "test",
      BRANCH: branchName
    }
  })
    .then(function (env) {
      envId = env.id;
      console.log("[1/6] Environment provisioned: " + envName);
      // Step 2: Configure settings
      configManager = new ConfigManager({
        environment: envName,
        orgUrl: orgUrl,
        project: project,
        token: token
      });
      return configManager.getConfig("API_URL");
    })
    .then(function (apiUrl) {
      console.log("[2/6] Configuration loaded. API URL: " + apiUrl);
      // Step 3: Run database migrations
      dbManager = new DatabaseManager({
        environment: envName,
        connectionString: process.env.TEST_DATABASE_URL ||
          "postgresql://localhost:5432/test_" + envName.toLowerCase().replace(/-/g, "_"),
        migrationsDir: "./migrations"
      });
      return dbManager.reset();
    })
    .then(function () {
      console.log("[3/6] Database migrated and seeded");
      // Step 4: Load feature flags
      var flagManager = new FeatureFlagManager(configManager);
      return flagManager.load();
    })
    .then(function (flags) {
      console.log("[4/6] Feature flags loaded:", JSON.stringify(flags));
      // Step 5: Health checks
      healthChecker = new HealthChecker({
        endpoints: [
          {
            name: "API",
            url: "https://" + envName.toLowerCase() + ".test.example.com/health"
          },
          {
            name: "Database",
            url: "https://" + envName.toLowerCase() + ".test.example.com/health/db"
          }
        ],
        timeout: 15000,
        retries: 5,
        retryDelay: 3000
      });
      return healthChecker.checkAll();
    })
    .then(function (healthResult) {
      console.log("[5/6] Health check result:", healthResult.healthy ? "HEALTHY" : "UNHEALTHY");
      if (!healthResult.healthy) {
        var unhealthy = healthResult.checks.filter(function (c) { return !c.healthy; });
        unhealthy.forEach(function (check) {
          console.error("  FAILED: " + check.name + " - " + (check.error || "Status " + check.statusCode));
        });
        throw new Error("Environment failed health checks");
      }
      // Step 6: Report ready
      console.log("[6/6] Environment ready for testing");
      return {
        environmentId: envId,
        environmentName: envName,
        status: "ready",
        apiUrl: "https://" + envName.toLowerCase() + ".test.example.com"
      };
    })
    .catch(function (err) {
      console.error("Pipeline failed:", err.message);
      if (envId) {
        console.log("Cleaning up failed environment...");
        return provisioner.deprovision(envId).then(function () {
          throw err;
        });
      }
      throw err;
    });
}

// Entry point
var branch = process.argv[2] || "feature/my-feature";

runTestEnvironmentPipeline(branch)
  .then(function (result) {
    console.log("");
    console.log("=== Environment Ready ===");
    console.log("ID: " + result.environmentId);
    console.log("Name: " + result.environmentName);
    console.log("URL: " + result.apiUrl);
    console.log("");
    console.log("Run cleanup when done:");
    console.log("  node cleanup.js " + result.environmentId);
  })
  .catch(function (err) {
    console.error("Fatal error:", err.message);
    process.exit(1);
  });

// Also run cleanup of stale environments
lifecycle.cleanupStale().then(function (result) {
  if (result.cleaned > 0) {
    console.log("Cleaned up " + result.cleaned + " stale environments");
  }
}).catch(function (err) {
  console.error("Stale environment cleanup failed:", err.message);
});
Run it with:
node provision-test-env.js feature/payment-gateway
The output will walk through each step, provisioning the environment, configuring variables, running migrations, checking health, and reporting the final URL. If any step fails, the environment is automatically cleaned up.
Common Issues and Troubleshooting
1. Environment approvals stuck in pending state
This happens when the approver has left the organization or changed roles. Azure DevOps does not automatically reassign approvals. Check the environment settings and update the approver list. For automated pipelines, use group-based approvals instead of individual accounts so the pool of approvers stays current.
2. Variable group conflicts between environments
When two environments share a variable group name or when a pipeline references the wrong group, you get configuration bleed. Always use a strict naming convention like {environment}-{project}-config. Never share variable groups between environments unless the values are truly identical.
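That convention is simple to encode and lint in tooling. A sketch that builds names from the {environment}-{project}-config pattern above and checks existing group names against it; the checker regex is my own formalization of the convention:

```javascript
// Builds a variable group name following the {environment}-{project}-config
// convention, and checks arbitrary names against that pattern.
function variableGroupName(environment, project) {
  return environment.toLowerCase() + "-" + project.toLowerCase() + "-config";
}

function matchesConvention(groupName) {
  // One or more lowercase/digit segments separated by hyphens, ending in -config.
  return /^[a-z0-9]+(-[a-z0-9]+)*-config$/.test(groupName);
}
```

A nightly job can list all variable groups and flag the ones matchesConvention rejects before they cause configuration bleed.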
3. Health checks passing but deployments failing
The health check endpoint might return 200 OK even when the application is partially broken. Make your health check thorough. It should verify database connectivity, cache availability, and external service reachability. A shallow health check that just returns "ok" is worse than no health check at all because it gives false confidence.
4. Ephemeral environments accumulating without cleanup
If your cleanup job fails silently or the environment naming convention changes, you end up with dozens of orphaned environments. Set up alerts for the total environment count. If it exceeds your expected maximum, something is wrong. Also tag environments with a TTL in their description so it is obvious which ones are stale.
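Tagging a TTL in the description only helps if the cleanup job actually parses it. A sketch assuming a ttl=48h marker in the description text; the tag format is my own choice, so pick whatever marker your team agrees on:

```javascript
// Extracts a "ttl=<hours>h" tag from an environment description,
// falling back to a default when no tag is present.
function parseTtlHours(description, defaultHours) {
  var match = /ttl=(\d+)h/.exec(description || "");
  return match ? parseInt(match[1], 10) : defaultHours;
}
```

The lifecycle manager can then compare each environment's age against parseTtlHours(env.description, this.maxAgeHours) instead of a single global cutoff.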
5. Database migration drift between environments
When a migration succeeds in dev but fails in QA because the schemas have diverged, the root cause is usually someone running manual SQL against QA. Enforce a strict policy: all schema changes go through migration files. Lock down direct SQL access on non-dev environments.
Best Practices
Name environments consistently. Use a convention like {stage}-{project} or PR-{branch} for ephemeral environments. Inconsistent naming leads to configuration errors and missed cleanups.
Never share secrets between environments. Each environment should have its own database credentials, API keys, and certificates. Sharing secrets means a breach in dev compromises production.
Use approvals on staging and production, but not on dev. Approvals on dev environments slow down developers without adding value. Save the gates for environments that face users.
Implement health checks at every tier. Check the application, database, cache, and external dependencies. A deployment to an unhealthy environment wastes time and creates confusing test results.
Automate environment provisioning and teardown. Manual environment creation does not scale. Every pull request should be able to get its own environment automatically, and that environment should be destroyed when the PR closes.
Version your configuration alongside your code. Store environment-specific config files in the same repository as the application. This ensures that configuration changes go through the same review process as code changes.
Monitor environment costs. Cloud resources for test environments can quietly become your largest infrastructure expense. Set budgets, use auto-shutdown, and clean up aggressively.
Test your deployment pipeline in lower environments first. The pipeline itself is code. If you change the deployment process, that change should flow through dev and QA before reaching production.