Zero Trust Architecture for Azure DevOps
Complete guide to implementing Zero Trust architecture in Azure DevOps, covering identity verification, least privilege access, micro-segmentation, continuous validation, network controls, device compliance, and building a comprehensive Zero Trust security model for DevOps pipelines.
Overview
Zero Trust means "never trust, always verify." In the context of Azure DevOps, that means every identity, every device, every network request, and every pipeline execution is treated as potentially compromised until proven otherwise. Traditional security assumed that anything inside the corporate network was trusted — and that assumption has been responsible for most breaches I have witnessed. Implementing Zero Trust in Azure DevOps is not a product you buy; it is a set of principles you apply to every layer of your development platform.
Prerequisites
- Azure DevOps organization connected to Azure Active Directory
- Azure AD P1 or P2 license for Conditional Access policies
- Microsoft Intune or another MDM solution for device compliance
- Azure subscription for network controls and private endpoints
- Project Collection Administrator permissions in Azure DevOps
- Global Administrator or Security Administrator role in Azure AD
- Understanding of Azure networking (VNets, Private Link, NSGs)
Zero Trust Principles for DevOps
The three core principles apply directly to DevOps:
1. Verify Explicitly
- Authenticate every user, service, and device
- Validate every pipeline run against policy
- Check every artifact before deployment
2. Use Least Privilege Access
- Grant minimum permissions needed for each role
- Time-bound elevated access
- Separate permissions per environment
3. Assume Breach
- Segment environments and networks
- Monitor and log everything
- Automate incident response
Zero Trust Maturity Model for DevOps
// Zero Trust maturity assessment for Azure DevOps
var maturityLevels = {
identity: [
{ level: 1, name: "Basic", criteria: "Azure AD SSO, basic password policy" },
{ level: 2, name: "Advanced", criteria: "MFA enforced, conditional access, PAT restrictions" },
{ level: 3, name: "Optimal", criteria: "Passwordless, continuous access evaluation, risk-based auth" }
],
devices: [
{ level: 1, name: "Basic", criteria: "No device requirements" },
{ level: 2, name: "Advanced", criteria: "Compliant device required for elevated roles" },
{ level: 3, name: "Optimal", criteria: "All access requires compliant/managed device" }
],
network: [
{ level: 1, name: "Basic", criteria: "Public endpoints, IP-based restrictions" },
{ level: 2, name: "Advanced", criteria: "Private agents, VNet-hosted build agents" },
{ level: 3, name: "Optimal", criteria: "Private endpoints, no public exposure, micro-segmentation" }
],
pipelines: [
{ level: 1, name: "Basic", criteria: "Manual approvals, basic RBAC" },
{ level: 2, name: "Advanced", criteria: "Environment gates, security scanning, signed artifacts" },
{ level: 3, name: "Optimal", criteria: "Policy-as-code, attestation chain, immutable artifacts" }
],
data: [
{ level: 1, name: "Basic", criteria: "Secrets in variable groups, encryption at rest" },
{ level: 2, name: "Advanced", criteria: "Key Vault integration, automated rotation" },
{ level: 3, name: "Optimal", criteria: "Managed identities, just-in-time secrets, no persistent credentials" }
]
};
function assessMaturity(currentLevels) {
var score = 0;
var maxScore = 0;
var gaps = [];
Object.keys(maturityLevels).forEach(function(pillar) {
var current = currentLevels[pillar] || 1;
score += current;
maxScore += 3;
if (current < 3) {
var next = maturityLevels[pillar][current]; // Next level up
gaps.push({
pillar: pillar,
current: current,
next: next.level,
action: next.criteria
});
}
});
console.log("=== Zero Trust Maturity Assessment ===");
console.log("Overall Score: " + score + "/" + maxScore + " (" + Math.round(score / maxScore * 100) + "%)");
console.log("");
Object.keys(currentLevels).forEach(function(pillar) {
var level = currentLevels[pillar];
var def = maturityLevels[pillar][level - 1];
var bar = "=".repeat(level * 10) + " ".repeat((3 - level) * 10);
console.log(" " + pillar.padEnd(12) + " [" + bar + "] Level " + level + " - " + def.name);
});
if (gaps.length > 0) {
console.log("\nNext Steps:");
gaps.forEach(function(gap) {
console.log(" " + gap.pillar + ": " + gap.action);
});
}
return { score: score, maxScore: maxScore, gaps: gaps };
}
// Assess current state
assessMaturity({
identity: 2,
devices: 1,
network: 2,
pipelines: 2,
data: 2
});
Output:
=== Zero Trust Maturity Assessment ===
Overall Score: 9/15 (60%)
 identity     [====================          ] Level 2 - Advanced
 devices      [==========                    ] Level 1 - Basic
 network      [====================          ] Level 2 - Advanced
 pipelines    [====================          ] Level 2 - Advanced
 data         [====================          ] Level 2 - Advanced
Next Steps:
identity: Passwordless, continuous access evaluation, risk-based auth
devices: Compliant device required for elevated roles
network: Private endpoints, no public exposure, micro-segmentation
pipelines: Policy-as-code, attestation chain, immutable artifacts
data: Managed identities, just-in-time secrets, no persistent credentials
Identity Verification: Never Trust, Always Verify
Conditional Access Policies for Azure DevOps
var https = require("https");
// Verify all conditional access policies are correctly configured
// for Azure DevOps (App ID: 499b84ac-1321-427f-aa17-267ca6975798)
function auditConditionalAccessPolicies(tenantId, clientId, clientSecret) {
return getGraphToken(tenantId, clientId, clientSecret)
.then(function(token) {
return graphRequest(token, "/v1.0/identity/conditionalAccess/policies");
})
.then(function(response) {
var policies = response.value || [];
var adoPolicies = [];
var AZURE_DEVOPS_APP_ID = "499b84ac-1321-427f-aa17-267ca6975798";
policies.forEach(function(policy) {
var apps = policy.conditions.applications || {};
var includeApps = apps.includeApplications || [];
// Check if policy targets Azure DevOps
var targetsAdo = includeApps.indexOf(AZURE_DEVOPS_APP_ID) !== -1
|| includeApps.indexOf("All") !== -1;
if (targetsAdo) {
adoPolicies.push({
name: policy.displayName,
state: policy.state,
grantControls: policy.grantControls,
conditions: {
users: policy.conditions.users,
locations: policy.conditions.locations,
platforms: policy.conditions.platforms,
clientAppTypes: policy.conditions.clientAppTypes
}
});
}
});
console.log("Conditional Access Policies targeting Azure DevOps: " + adoPolicies.length);
adoPolicies.forEach(function(p) {
console.log(" " + p.name + " [" + p.state + "]");
if (p.grantControls) {
(p.grantControls.builtInControls || []).forEach(function(control) {
console.log(" Requires: " + control);
});
}
});
// Check for required policies
var requiredPolicies = [
{ name: "MFA", check: function(p) { return (p.grantControls.builtInControls || []).indexOf("mfa") !== -1; } },
{ name: "Compliant Device", check: function(p) { return (p.grantControls.builtInControls || []).indexOf("compliantDevice") !== -1; } }
];
requiredPolicies.forEach(function(req) {
var found = adoPolicies.some(function(p) {
return p.state === "enabled" && p.grantControls && req.check(p);
});
console.log(" [" + (found ? "OK" : "MISSING") + "] " + req.name + " policy");
});
return adoPolicies;
});
}
function getGraphToken(tenantId, clientId, clientSecret) {
var data = "grant_type=client_credentials"
+ "&client_id=" + clientId
+ "&client_secret=" + encodeURIComponent(clientSecret)
+ "&scope=https://graph.microsoft.com/.default";
return new Promise(function(resolve, reject) {
var req = https.request({
hostname: "login.microsoftonline.com",
path: "/" + tenantId + "/oauth2/v2.0/token",
method: "POST",
headers: { "Content-Type": "application/x-www-form-urlencoded", "Content-Length": Buffer.byteLength(data) }
}, function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() { resolve(JSON.parse(body).access_token); });
});
req.on("error", reject);
req.write(data);
req.end();
});
}
function graphRequest(token, path, method, body) {
  // Defaults to GET; the PIM example later in this guide passes "POST" and a JSON body
  return new Promise(function(resolve, reject) {
    var headers = { "Authorization": "Bearer " + token };
    if (body) {
      headers["Content-Type"] = "application/json";
      headers["Content-Length"] = Buffer.byteLength(body);
    }
    var req = https.request({
      hostname: "graph.microsoft.com",
      path: path,
      method: method || "GET",
      headers: headers
    }, function(res) {
      var data = "";
      res.on("data", function(chunk) { data += chunk; });
      res.on("end", function() { resolve(JSON.parse(data)); });
    });
    req.on("error", reject);
    if (body) req.write(body);
    req.end();
  });
}
Continuous Access Evaluation
Traditional token-based authentication checks credentials once. Continuous Access Evaluation (CAE) re-evaluates in near real-time:
// Monitor for CAE events that affect Azure DevOps sessions
function monitorCaeEvents(tenantId, clientId, clientSecret) {
return getGraphToken(tenantId, clientId, clientSecret)
.then(function(token) {
// Query sign-in logs for CAE revocation events
var filter = "appId eq '499b84ac-1321-427f-aa17-267ca6975798'"
+ " and status/errorCode ne 0"
+ " and createdDateTime ge " + new Date(Date.now() - 86400000).toISOString();
return graphRequest(token, "/v1.0/auditLogs/signIns?$filter=" + encodeURIComponent(filter) + "&$top=50");
})
.then(function(response) {
var events = response.value || [];
var caeEvents = events.filter(function(e) {
return e.status && e.status.additionalDetails &&
e.status.additionalDetails.indexOf("CAE") !== -1;
});
console.log("CAE events in last 24 hours: " + caeEvents.length);
caeEvents.forEach(function(e) {
console.log(" User: " + e.userDisplayName);
console.log(" Reason: " + e.status.failureReason);
console.log(" Time: " + e.createdDateTime);
console.log(" IP: " + e.ipAddress);
});
return caeEvents;
});
}
Least Privilege: Permission Micro-Segmentation
Environment-Scoped Permissions
# azure-pipelines.yml - Environment-based access control
stages:
- stage: DeployDev
jobs:
- deployment: Deploy
environment: 'development'
# No approvals - developers deploy freely
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to dev"
- stage: DeployStaging
dependsOn: DeployDev
jobs:
- deployment: Deploy
environment: 'staging'
# Requires 1 approval from senior developers
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to staging"
- stage: DeployProduction
dependsOn: DeployStaging
jobs:
- deployment: Deploy
environment: 'production'
# Requires 2 approvals + business hours + security scan pass
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to production"
Just-In-Time Access with Azure PIM
var https = require("https");
// Request elevated access for emergency production operations
// Uses Azure AD Privileged Identity Management (PIM)
function requestElevatedAccess(tenantId, userId, roleDefinitionId, justification) {
return getGraphToken(tenantId, process.env.AZURE_CLIENT_ID, process.env.AZURE_CLIENT_SECRET)
.then(function(token) {
var requestData = JSON.stringify({
principalId: userId,
roleDefinitionId: roleDefinitionId,
directoryScopeId: "/",
action: "SelfActivate",
justification: justification,
scheduleInfo: {
startDateTime: new Date().toISOString(),
expiration: {
type: "AfterDuration",
duration: "PT4H" // 4-hour window
}
},
ticketInfo: {
ticketNumber: "INC-" + Date.now(),
ticketSystem: "AzureDevOps"
}
});
return graphRequest(token, "/v1.0/roleManagement/directory/roleAssignmentScheduleRequests", "POST", requestData);
})
.then(function(response) {
console.log("Elevated access granted:");
console.log(" Role: " + response.roleDefinitionId);
console.log(" Duration: 4 hours");
console.log(" Justification: " + justification);
console.log(" Expires: " + new Date(Date.now() + 4 * 3600000).toISOString());
return response;
});
}
Service Connection Scoping
var https = require("https");
// Audit service connection access across projects
// Zero Trust: each project should have its own scoped connections
function auditServiceConnections(organization, pat) {
var options = {
hostname: "dev.azure.com",
path: "/" + organization + "/_apis/serviceendpoint/endpoints?api-version=7.1",
method: "GET",
headers: {
"Authorization": "Basic " + Buffer.from(":" + pat).toString("base64")
}
};
return new Promise(function(resolve, reject) {
var req = https.request(options, function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() {
var data = JSON.parse(body);
resolve(data.value || []);
});
});
req.on("error", reject);
req.end();
}).then(function(endpoints) {
console.log("=== Service Connection Audit ===");
console.log("Total connections: " + endpoints.length);
var issues = [];
endpoints.forEach(function(ep) {
var projectRefs = ep.serviceEndpointProjectReferences || [];
var isShared = projectRefs.length > 1;
var hasAllPipelines = ep.data && ep.data.pipelineAuth === "true";
console.log("\n " + ep.name + " (" + ep.type + ")");
console.log(" Projects: " + projectRefs.length);
console.log(" All pipelines access: " + (hasAllPipelines ? "YES" : "no"));
if (isShared) {
issues.push({
connection: ep.name,
issue: "Shared across " + projectRefs.length + " projects",
recommendation: "Create separate connections per project"
});
}
if (hasAllPipelines) {
issues.push({
connection: ep.name,
issue: "Accessible by all pipelines",
recommendation: "Restrict to specific pipelines"
});
}
});
if (issues.length > 0) {
console.log("\n=== Zero Trust Violations ===");
issues.forEach(function(issue) {
console.log(" [ISSUE] " + issue.connection + ": " + issue.issue);
console.log(" Fix: " + issue.recommendation);
});
}
return { endpoints: endpoints, issues: issues };
});
}
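For connections flagged as "Accessible by all pipelines", the fix can be automated. The sketch below is a hedged example built on the Pipeline Permissions API; the route, payload shape, and api-version are assumptions to verify against your organization before running it.
var https = require("https");
// Revoke "all pipelines" access on a service connection and re-authorize only
// the pipelines listed in allowedPipelineIds. Payload shape is an assumption.
function restrictServiceConnection(organization, project, pat, endpointId, allowedPipelineIds) {
  var body = JSON.stringify({
    allPipelines: { authorized: false },      // revoke blanket access
    pipelines: (allowedPipelineIds || []).map(function(id) {
      return { id: id, authorized: true };    // re-authorize only named pipeline IDs
    })
  });
  return new Promise(function(resolve, reject) {
    var req = https.request({
      hostname: "dev.azure.com",
      path: "/" + organization + "/" + project
        + "/_apis/pipelines/pipelinePermissions/endpoint/" + endpointId
        + "?api-version=7.1-preview.1",
      method: "PATCH",
      headers: {
        "Authorization": "Basic " + Buffer.from(":" + pat).toString("base64"),
        "Content-Type": "application/json",
        "Content-Length": Buffer.byteLength(body)
      }
    }, function(res) {
      var text = "";
      res.on("data", function(chunk) { text += chunk; });
      res.on("end", function() { resolve(JSON.parse(text)); });
    });
    req.on("error", reject);
    req.write(body);
    req.end();
  });
}
Calling it with an empty allowedPipelineIds list simply revokes the open authorization, so each pipeline must be granted access explicitly the next time it runs.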
Network Micro-Segmentation
Private Build Agents in VNet
# Deploy self-hosted agents in a VNet with private connectivity
# Create VNet and subnets
az network vnet create \
--name devops-vnet \
--resource-group devops-agents-rg \
--address-prefix 10.0.0.0/16
az network vnet subnet create \
--vnet-name devops-vnet \
--name build-agents-subnet \
--resource-group devops-agents-rg \
--address-prefix 10.0.1.0/24 \
--service-endpoints Microsoft.KeyVault Microsoft.Sql Microsoft.Storage
az network vnet subnet create \
--vnet-name devops-vnet \
--name deployment-agents-subnet \
--resource-group devops-agents-rg \
--address-prefix 10.0.2.0/24 \
--service-endpoints Microsoft.KeyVault Microsoft.Sql Microsoft.Storage Microsoft.Web
# Create NSG rules - only allow required outbound traffic
az network nsg create \
--name build-agents-nsg \
--resource-group devops-agents-rg
# Allow Azure DevOps communication
az network nsg rule create \
--nsg-name build-agents-nsg \
--resource-group devops-agents-rg \
--name allow-azure-devops \
--priority 100 \
--direction Outbound \
--access Allow \
--protocol Tcp \
--destination-port-ranges 443 \
--destination-address-prefixes AzureCloud
# Block all other outbound internet traffic
az network nsg rule create \
--nsg-name build-agents-nsg \
--resource-group devops-agents-rg \
--name deny-internet \
--priority 4096 \
--direction Outbound \
--access Deny \
--protocol '*' \
--destination-port-ranges '*' \
--destination-address-prefixes Internet
Private Endpoints for Azure Services
# Create private endpoint for Azure Container Registry
az network private-endpoint create \
--name acr-private-endpoint \
--resource-group devops-agents-rg \
--vnet-name devops-vnet \
--subnet build-agents-subnet \
--private-connection-resource-id "/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.ContainerRegistry/registries/myregistry" \
--group-id registry \
--connection-name acr-connection
# Create private endpoint for Key Vault
az network private-endpoint create \
--name kv-private-endpoint \
--resource-group devops-agents-rg \
--vnet-name devops-vnet \
--subnet build-agents-subnet \
--private-connection-resource-id "/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.KeyVault/vaults/myapp-keyvault" \
--group-id vault \
--connection-name kv-connection
# Create private DNS zones for name resolution
az network private-dns zone create \
--resource-group devops-agents-rg \
--name privatelink.azurecr.io
az network private-dns zone create \
--resource-group devops-agents-rg \
--name privatelink.vaultcore.azure.net
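Once the private endpoints and DNS zones exist, prove that agents inside the VNet resolve private addresses rather than public ones. A minimal check to run from a build agent, assuming the example registry and vault names used above and the 10.0.0.0/16 address space:
var dns = require("dns");
// Confirm the registry and vault resolve via the privatelink zones to private IPs.
// Hostnames below match the example resources created earlier; substitute your own.
var checks = [
  { host: "myregistry.azurecr.io", expectPrefix: "10." },
  { host: "myapp-keyvault.vault.azure.net", expectPrefix: "10." }
];
checks.forEach(function(check) {
  dns.resolve4(check.host, function(err, addresses) {
    if (err) {
      console.log("[FAIL] " + check.host + ": " + err.code);
      return;
    }
    var allPrivate = addresses.every(function(ip) {
      return ip.indexOf(check.expectPrefix) === 0;
    });
    console.log((allPrivate ? "[OK]  " : "[FAIL]") + " " + check.host + " -> " + addresses.join(", "));
  });
});
If either hostname still resolves to a public IP, the private DNS zone is not linked to the agents' VNet.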
Pipeline Security: Continuous Validation
Pipeline Policy-as-Code
var fs = require("fs");
var yaml = require("js-yaml");
// Pipeline policy validator
// Enforces Zero Trust controls in pipeline definitions
var PIPELINE_POLICIES = [
{
name: "require-checkout-clean",
description: "Pipeline must use clean checkout",
check: function(pipeline) {
var steps = getAllSteps(pipeline);
var checkoutSteps = steps.filter(function(s) { return s.checkout; });
return checkoutSteps.length === 0 || checkoutSteps.every(function(s) { return s.checkout.clean === true; });
}
},
{
name: "no-inline-scripts-in-production",
description: "Production stages must not use inline scripts (use templates)",
check: function(pipeline) {
var prodStages = (pipeline.stages || []).filter(function(s) {
return s.stage && s.stage.toLowerCase().indexOf("prod") !== -1;
});
return prodStages.every(function(stage) {
var steps = getAllStepsFromStage(stage);
return !steps.some(function(s) { return s.script || s.bash || s.powershell; });
});
}
},
{
name: "require-security-scan-before-deploy",
description: "Security scanning stage must precede deployment",
check: function(pipeline) {
var stages = (pipeline.stages || []).map(function(s) { return s.stage; });
var hasSecurityStage = stages.some(function(s) { return /security|scan|sast|dast/i.test(s); });
var hasDeployStage = stages.some(function(s) { return /deploy|release/i.test(s); });
if (!hasDeployStage) return true;
return hasSecurityStage;
}
},
{
name: "require-environment-approvals",
description: "Production deployments must use environment with approvals",
check: function(pipeline) {
var prodJobs = [];
(pipeline.stages || []).forEach(function(stage) {
if (stage.stage && stage.stage.toLowerCase().indexOf("prod") !== -1) {
(stage.jobs || []).forEach(function(job) {
if (job.deployment) {
prodJobs.push(job);
}
});
}
});
return prodJobs.every(function(job) {
return job.environment !== undefined;
});
}
},
{
name: "no-wildcard-triggers",
description: "Pipeline must not trigger on all branches",
check: function(pipeline) {
var trigger = pipeline.trigger;
if (trigger === "none") return true; // CI triggers disabled
if (Array.isArray(trigger)) return trigger.indexOf("*") === -1;
if (trigger && trigger.branches && trigger.branches.include) {
  return trigger.branches.include.indexOf("*") === -1;
}
return true;
}
}
];
function getAllSteps(pipeline) {
var steps = [];
(pipeline.stages || []).forEach(function(stage) {
steps = steps.concat(getAllStepsFromStage(stage));
});
return steps;
}
function getAllStepsFromStage(stage) {
var steps = [];
(stage.jobs || []).forEach(function(job) {
(job.steps || []).forEach(function(step) {
steps.push(step);
});
});
return steps;
}
function validatePipeline(pipelineFile) {
var content = fs.readFileSync(pipelineFile, "utf8");
var pipeline = yaml.load(content);
console.log("=== Pipeline Policy Validation ===");
console.log("File: " + pipelineFile);
console.log("");
var passed = 0;
var failed = 0;
PIPELINE_POLICIES.forEach(function(policy) {
var result;
try {
result = policy.check(pipeline);
} catch (e) {
result = false;
}
var status = result ? "[PASS]" : "[FAIL]";
console.log(" " + status + " " + policy.name);
if (!result) {
console.log(" " + policy.description);
failed++;
} else {
passed++;
}
});
console.log("\nResults: " + passed + " passed, " + failed + " failed");
if (failed > 0) {
console.log("##vso[task.complete result=Failed;]Pipeline policy validation failed");
process.exit(1);
}
}
validatePipeline(process.argv[2] || "azure-pipelines.yml");
Output:
=== Pipeline Policy Validation ===
File: azure-pipelines.yml
[PASS] require-checkout-clean
[FAIL] no-inline-scripts-in-production
Production stages must not use inline scripts (use templates)
[PASS] require-security-scan-before-deploy
[PASS] require-environment-approvals
[PASS] no-wildcard-triggers
Results: 4 passed, 1 failed
Complete Working Example: Zero Trust Compliance Checker
var https = require("https");
var fs = require("fs");
// ============================================================
// Zero Trust Compliance Checker for Azure DevOps
// Validates all Zero Trust controls are in place
// ============================================================
var config = {
organization: process.env.ADO_ORG,
pat: process.env.ADO_PAT,
tenantId: process.env.AZURE_TENANT_ID,
clientId: process.env.AZURE_CLIENT_ID,
clientSecret: process.env.AZURE_CLIENT_SECRET
};
var checks = [];
function addCheck(pillar, name, status, details) {
checks.push({
pillar: pillar,
name: name,
status: status, // pass, fail, warn
details: details
});
}
// Check 1: Organization policies
function checkOrgPolicies() {
console.log("\n[Identity] Checking organization policies...");
return adoRequest("/_apis/organizationpolicy/policies?api-version=7.1-preview.1")
.then(function(policies) {
// Check if third-party OAuth is restricted
var oauthPolicy = (policies.value || []).find(function(p) {
return p.policy && p.policy.name === "Policy.DisallowOAuthAuthentication";
});
var oauthEnabled = oauthPolicy && oauthPolicy.policy.effectiveValue === true;
addCheck("identity", "Third-party OAuth restricted", oauthEnabled ? "pass" : "warn",
oauthEnabled ? "OAuth disabled" : "Third-party OAuth apps allowed");
// Check PAT conditional access enforcement
var patPolicy = (policies.value || []).find(function(p) {
return p.policy && p.policy.name === "Policy.EnforceAADConditionalAccess";
});
var patCaEnabled = patPolicy && patPolicy.policy.effectiveValue === true;
addCheck("identity", "PAT conditional access enforcement", patCaEnabled ? "pass" : "fail",
patCaEnabled ? "Enabled" : "PATs bypass conditional access");
return policies;
})
.catch(function() {
addCheck("identity", "Organization policies", "warn", "Could not read policies");
});
}
// Check 2: Service connections
function checkServiceConnections() {
console.log("[Network] Checking service connections...");
return adoRequest("/_apis/serviceendpoint/endpoints?api-version=7.1")
.then(function(data) {
var endpoints = data.value || [];
var sharedCount = 0;
var allPipelinesCount = 0;
endpoints.forEach(function(ep) {
if ((ep.serviceEndpointProjectReferences || []).length > 1) sharedCount++;
if (ep.data && ep.data.pipelineAuth === "true") allPipelinesCount++;
});
addCheck("network", "No shared service connections",
sharedCount === 0 ? "pass" : "fail",
sharedCount + " of " + endpoints.length + " connections shared across projects");
addCheck("network", "Service connections restricted to specific pipelines",
allPipelinesCount === 0 ? "pass" : "warn",
allPipelinesCount + " connections accessible by all pipelines");
});
}
// Check 3: Branch policies
function checkBranchPolicies(projectId, repoId) {
console.log("[Pipelines] Checking branch policies...");
return adoRequest("/" + projectId + "/_apis/policy/configurations?api-version=7.1")
.then(function(data) {
var policies = data.value || [];
var hasMinReviewers = policies.some(function(p) {
return p.type && p.type.displayName === "Minimum number of reviewers" && p.isEnabled;
});
var hasBuildValidation = policies.some(function(p) {
return p.type && p.type.displayName === "Build" && p.isEnabled;
});
addCheck("pipelines", "Minimum reviewer policy on main",
hasMinReviewers ? "pass" : "fail",
hasMinReviewers ? "Enabled" : "No minimum reviewer requirement");
addCheck("pipelines", "Build validation on main",
hasBuildValidation ? "pass" : "fail",
hasBuildValidation ? "Enabled" : "No build validation required");
});
}
// Check 4: Audit logging
function checkAuditLogging() {
console.log("[Monitoring] Checking audit logging...");
return adoRequest("/_apis/audit/streams?api-version=7.1-preview.1")
.then(function(data) {
var streams = data.value || [];
var activeStreams = streams.filter(function(s) { return String(s.status).toLowerCase() === "enabled"; });
addCheck("monitoring", "Audit log streaming configured",
activeStreams.length > 0 ? "pass" : "fail",
activeStreams.length + " active audit stream(s)");
})
.catch(function() {
addCheck("monitoring", "Audit log streaming", "warn", "Could not check audit streams");
});
}
function adoRequest(path) {
return new Promise(function(resolve, reject) {
var req = https.request({
hostname: "dev.azure.com",
path: "/" + config.organization + path,
method: "GET",
headers: {
"Authorization": "Basic " + Buffer.from(":" + config.pat).toString("base64")
}
}, function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() { resolve(JSON.parse(body)); });
});
req.on("error", reject);
req.end();
});
}
// Generate compliance report
function generateReport() {
console.log("\n=============================================");
console.log(" ZERO TRUST COMPLIANCE REPORT");
console.log(" Organization: " + config.organization);
console.log(" Date: " + new Date().toISOString());
console.log("=============================================\n");
var pillars = {};
checks.forEach(function(check) {
if (!pillars[check.pillar]) pillars[check.pillar] = [];
pillars[check.pillar].push(check);
});
var totalPass = 0;
var totalFail = 0;
var totalWarn = 0;
Object.keys(pillars).forEach(function(pillar) {
console.log(" " + pillar.toUpperCase());
pillars[pillar].forEach(function(check) {
var icon = check.status === "pass" ? "[OK]" : (check.status === "fail" ? "[XX]" : "[!!]");
console.log(" " + icon + " " + check.name);
console.log(" " + check.details);
if (check.status === "pass") totalPass++;
else if (check.status === "fail") totalFail++;
else totalWarn++;
});
console.log("");
});
var total = totalPass + totalFail + totalWarn;
var complianceScore = total > 0 ? Math.round(totalPass / total * 100) : 0;
console.log(" SUMMARY");
console.log(" Passed: " + totalPass);
console.log(" Failed: " + totalFail);
console.log(" Warnings: " + totalWarn);
console.log(" Compliance Score: " + complianceScore + "%");
console.log("\n=============================================");
var report = {
organization: config.organization,
timestamp: new Date().toISOString(),
checks: checks,
summary: { passed: totalPass, failed: totalFail, warnings: totalWarn, score: complianceScore }
};
fs.writeFileSync("zero-trust-report.json", JSON.stringify(report, null, 2));
if (totalFail > 0) {
console.log("\n##vso[task.logissue type=error]Zero Trust compliance: " + totalFail + " check(s) failed");
}
return report;
}
// Run all checks
checkOrgPolicies()
.then(function() { return checkServiceConnections(); })
.then(function() { return checkBranchPolicies("my-project", "my-repo"); })
.then(function() { return checkAuditLogging(); })
.then(function() { return generateReport(); })
.catch(function(err) {
console.error("Compliance check error: " + err.message);
process.exit(1);
});
Output:
=============================================
ZERO TRUST COMPLIANCE REPORT
Organization: my-org
Date: 2026-02-10T14:30:00.000Z
=============================================
IDENTITY
[!!] Third-party OAuth restricted
Third-party OAuth apps allowed
[XX] PAT conditional access enforcement
PATs bypass conditional access
NETWORK
[XX] No shared service connections
3 of 15 connections shared across projects
[!!] Service connections restricted to specific pipelines
5 connections accessible by all pipelines
PIPELINES
[OK] Minimum reviewer policy on main
Enabled
[OK] Build validation on main
Enabled
MONITORING
[OK] Audit log streaming configured
2 active audit stream(s)
SUMMARY
Passed: 3
Failed: 2
Warnings: 2
Compliance Score: 43%
=============================================
Common Issues & Troubleshooting
Conditional Access Blocks Automated Pipelines
Error: AADSTS53003: Access has been blocked by Conditional Access policies
Automated identities used by pipelines can be blocked by Conditional Access policies that target all users or all cloud apps. Fix: exclude the pipeline identity from the user-targeted policies and govern it with a dedicated policy for service principals instead:
Policy: "Service Principal Controls"
Assignments:
Workload identities: Include pipeline service principals
Access controls:
Grant: Require specific conditions (IP range of build agents)
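To confirm whether Conditional Access is what is blocking a pipeline identity, request an Azure DevOps token directly as that identity and look for AADSTS53003 in the response. A minimal sketch, assuming a client-credentials service principal and the standard Azure DevOps resource scope:
var https = require("https");
// Request a token for the Azure DevOps resource as the pipeline's service principal.
// AADSTS53003 in error_description means Conditional Access blocked the request.
function testPipelineIdentityAccess(tenantId, clientId, clientSecret) {
  var data = "grant_type=client_credentials"
    + "&client_id=" + clientId
    + "&client_secret=" + encodeURIComponent(clientSecret)
    + "&scope=499b84ac-1321-427f-aa17-267ca6975798/.default"; // Azure DevOps resource
  return new Promise(function(resolve, reject) {
    var req = https.request({
      hostname: "login.microsoftonline.com",
      path: "/" + tenantId + "/oauth2/v2.0/token",
      method: "POST",
      headers: {
        "Content-Type": "application/x-www-form-urlencoded",
        "Content-Length": Buffer.byteLength(data)
      }
    }, function(res) {
      var body = "";
      res.on("data", function(chunk) { body += chunk; });
      res.on("end", function() {
        var parsed = JSON.parse(body);
        if (parsed.access_token) {
          console.log("[OK] This identity can obtain an Azure DevOps token");
        } else if ((parsed.error_description || "").indexOf("AADSTS53003") !== -1) {
          console.log("[BLOCKED] Conditional Access is blocking this identity");
        } else {
          console.log("[ERROR] " + parsed.error + ": " + parsed.error_description);
        }
        resolve(parsed);
      });
    });
    req.on("error", reject);
    req.write(data);
    req.end();
  });
}
testPipelineIdentityAccess(process.env.AZURE_TENANT_ID, process.env.AZURE_CLIENT_ID, process.env.AZURE_CLIENT_SECRET);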
Private Agents Cannot Reach Azure DevOps
Error: Agent connection to Azure DevOps failed - network timeout
NSG rules may be too restrictive. Azure DevOps requires outbound HTTPS to multiple endpoints:
# Required outbound endpoints for Azure DevOps agents
# dev.azure.com
# *.visualstudio.com
# vsrm.dev.azure.com
# *.pkgs.visualstudio.com
# login.microsoftonline.com
# Use Azure DevOps service tag in NSG
az network nsg rule create \
--nsg-name agents-nsg \
--resource-group rg \
--name allow-azuredevops \
--priority 100 \
--direction Outbound \
--access Allow \
--protocol Tcp \
--destination-port-ranges 443 \
--destination-address-prefixes AzureDevOps
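A quick way to validate the rules from the agent itself is to probe outbound HTTPS to the endpoints listed above. This sketch tests only the concrete hostnames; for the wildcard domains, substitute your organization's actual hosts.
var https = require("https");
// Run on the agent: each endpoint should answer on 443. A timeout or connection
// error points at an NSG or firewall rule that is still too restrictive.
var endpoints = [
  "dev.azure.com",
  "vsrm.dev.azure.com",
  "login.microsoftonline.com"
];
endpoints.forEach(function(host) {
  var req = https.request({ hostname: host, path: "/", method: "HEAD", timeout: 5000 }, function(res) {
    console.log("[OK]   " + host + " (HTTP " + res.statusCode + ")");
    res.resume();
  });
  req.on("timeout", function() {
    req.destroy(new Error("timeout"));
  });
  req.on("error", function(err) {
    console.log("[FAIL] " + host + " (" + err.message + " - check NSG/firewall rules)");
  });
  req.end();
});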
PIM Activation Takes Too Long for Emergency Deployments
When JIT access through PIM takes 5-15 minutes and you need emergency production access:
Create a "break glass" procedure with pre-activated emergency accounts:
// Break glass account verification - run weekly
function verifyBreakGlassAccounts() {
var accounts = [
{ name: "[email protected]", lastVerified: null },
{ name: "[email protected]", lastVerified: null }
];
accounts.forEach(function(account) {
console.log("Verifying: " + account.name);
// These accounts should:
// 1. Be excluded from conditional access
// 2. Hold a permanent Project Collection Administrator role (not PIM-activated)
// 3. Use hardware security keys, not passwords
// 4. Be stored in a physical safe, not shared digitally
// 5. Usage should trigger immediate alerts
});
}
Device Compliance Policy Blocks Developer Machines
Error: Access denied - device is not compliant with organization policy
When developers' machines fail Intune compliance checks, they cannot access Azure DevOps. Common causes: outdated OS, disabled firewall, missing antivirus. Solution: Set up a compliance dashboard showing which machines are non-compliant and give developers a self-service remediation guide.
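A starting point for that dashboard is pulling the non-compliant device list from Intune via Microsoft Graph. The sketch below reuses the getGraphToken and graphRequest helpers from the Conditional Access section and assumes an app registration with DeviceManagementManagedDevices.Read.All; the $filter and $select clauses are assumptions to verify against your tenant.
// List managed devices that currently fail compliance, for a self-service dashboard.
function listNonCompliantDevices(tenantId, clientId, clientSecret) {
  return getGraphToken(tenantId, clientId, clientSecret)
    .then(function(token) {
      var query = "$filter=" + encodeURIComponent("complianceState eq 'noncompliant'")
        + "&$select=deviceName,userPrincipalName,operatingSystem,lastSyncDateTime";
      return graphRequest(token, "/v1.0/deviceManagement/managedDevices?" + query);
    })
    .then(function(response) {
      var devices = response.value || [];
      console.log("Non-compliant devices: " + devices.length);
      devices.forEach(function(d) {
        console.log("  " + d.deviceName + " (" + d.userPrincipalName + ")"
          + " - " + d.operatingSystem + ", last sync " + d.lastSyncDateTime);
      });
      return devices;
    });
}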
Best Practices
- Start with identity, then expand to network and data — MFA and conditional access give you the highest security impact with the least disruption. Network segmentation and private endpoints come next.
- Use managed identities instead of secrets wherever possible — Every secret you eliminate is one less thing to rotate, one less thing that can leak. Managed identities on Azure VMs, App Services, and Container Instances remove credentials entirely (see the token-acquisition sketch after this list).
- Implement environment isolation with separate service connections — Each environment (dev, staging, production) should have its own service principal with permissions scoped to that environment only. Never share production credentials with lower environments.
- Enable audit logging and stream to external SIEM — Azure DevOps audit logs are essential for Zero Trust monitoring. Stream them to Log Analytics, Splunk, or Sentinel for correlation with other security events.
- Enforce branch policies and require security scans before merge — Every pull request should pass security scanning. Block merges that introduce critical vulnerabilities. This is your first line of defense.
- Use private build agents in a VNet for production deployments — Microsoft-hosted agents share infrastructure with all Azure DevOps customers. For production deployments, use private agents in your own VNet with controlled network access.
- Run Zero Trust compliance checks continuously, not just once — Security configurations drift. Run automated compliance checks weekly and alert when any control degrades.
- Plan for break-glass scenarios — Zero Trust can lock everyone out during a crisis. Maintain emergency access accounts with documented procedures, stored physical tokens, and immediate usage alerting.
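As a concrete illustration of the managed identity practice above, the sketch below shows how a self-hosted agent running on an Azure VM can obtain a Key Vault token from the Instance Metadata Service without any stored secret. It only works on Azure compute with a managed identity assigned; swap the resource URI for whichever service the agent needs.
var http = require("http");
// Fetch an access token from the Azure Instance Metadata Service (IMDS).
// No client secret is stored anywhere; the VM's managed identity does the work.
function getManagedIdentityToken(resource) {
  var path = "/metadata/identity/oauth2/token?api-version=2018-02-01"
    + "&resource=" + encodeURIComponent(resource);
  return new Promise(function(resolve, reject) {
    var req = http.request({
      hostname: "169.254.169.254",     // IMDS is only reachable from inside the VM
      path: path,
      method: "GET",
      headers: { "Metadata": "true" }  // required header for IMDS requests
    }, function(res) {
      var body = "";
      res.on("data", function(chunk) { body += chunk; });
      res.on("end", function() { resolve(JSON.parse(body).access_token); });
    });
    req.on("error", reject);
    req.end();
  });
}
getManagedIdentityToken("https://vault.azure.net").then(function(token) {
  console.log("Token acquired via managed identity (no stored credential), length: " + token.length);
});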