Security Scanning Tools for Azure Pipelines
Integrate comprehensive security scanning into Azure Pipelines with SAST, DAST, container scanning, and dependency vulnerability detection
Security scanning in CI/CD is not optional anymore. If you are shipping code without automated vulnerability detection, you are gambling with your organization's reputation and your users' data. Azure Pipelines gives you the flexibility to integrate a wide range of security scanning tools — from static analysis and dependency auditing to container scanning and dynamic application testing — directly into your build and release workflows.
This article walks through the major security scanning tools you can wire into Azure Pipelines, how to configure each one, and how to aggregate results into a unified report that gives your team actionable findings.
Prerequisites
- An Azure DevOps organization with at least one pipeline
- A Node.js project (the examples target Node, but most tools are language-agnostic)
- Basic familiarity with YAML pipeline syntax
- Docker installed on your build agent (for container scanning)
- Agent pool running a Linux-based agent (Ubuntu recommended) for most tools
The Security Scanning Landscape
Security scanning breaks down into several categories, and you need coverage across all of them to have a real security posture:
| Category | What It Catches | Tools |
|---|---|---|
| SAST (Static Application Security Testing) | Code-level vulnerabilities, injection flaws, insecure patterns | SonarQube, Microsoft Security DevOps, ESLint security plugins |
| SCA (Software Composition Analysis) | Vulnerable dependencies, outdated packages | npm audit, Snyk, Trivy |
| Container Scanning | Vulnerable base images, misconfigured containers | Trivy, Grype, Microsoft Defender |
| Secret Detection | Hardcoded credentials, API keys in source | CredScan, gitleaks, truffleHog |
| DAST (Dynamic Application Security Testing) | Runtime vulnerabilities, XSS, injection via HTTP | OWASP ZAP, Burp Suite |
| License Compliance | Restrictive or incompatible licenses in dependencies | license-checker, FOSSA |
A mature pipeline implements tools from each category. You do not need every tool listed — pick one per category and do it well.
Microsoft Security DevOps Extension
Microsoft provides a first-party extension called Microsoft Security DevOps (MSDO) that bundles several scanners into a single pipeline task. It includes ESLint with security rules, Terrascan for IaC, Trivy, CredScan, and more.
Install the extension from the Azure DevOps Marketplace, then add it to your pipeline:
trigger:
- master
pool:
vmImage: 'ubuntu-latest'
steps:
- task: MicrosoftSecurityDevOps@1
displayName: 'Run Microsoft Security DevOps'
inputs:
categories: 'secrets,code,artifacts,IaC'
tools: 'eslint,trivy,credscan,terrascan'
break: true
The break: true input causes the task to fail the pipeline if any critical or high severity findings are detected. The categories input lets you scope which scanning categories to run. The tools input specifies the exact scanners.
Results automatically publish to the Advanced Security tab in Azure DevOps if you have GitHub Advanced Security for Azure DevOps enabled. Otherwise, results are available in the build logs and as SARIF artifacts.
The advantage of MSDO is simplicity — one task, multiple scanners. The downside is less control over individual tool configuration. For production pipelines, I prefer configuring each tool individually.
Trivy for Container and Filesystem Scanning
Trivy is one of the most versatile open-source scanners available. It handles container images, filesystems, Git repositories, and Kubernetes manifests. For Node.js projects, Trivy can scan both your node_modules directory and your Docker images.
Filesystem Scanning
Scan your project directory for vulnerable dependencies:
steps:
- script: |
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v0.50.0
displayName: 'Install Trivy'
- script: |
trivy filesystem . \
--severity HIGH,CRITICAL \
--format json \
--output $(Build.ArtifactStagingDirectory)/trivy-fs-results.json \
--exit-code 1
displayName: 'Trivy Filesystem Scan'
continueOnError: true
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)/trivy-fs-results.json'
artifactName: 'SecurityScanResults'
Container Image Scanning
After building your Docker image, scan it before pushing to a registry:
steps:
- task: Docker@2
displayName: 'Build Docker Image'
inputs:
command: 'build'
Dockerfile: 'Dockerfile'
tags: '$(Build.BuildId)'
repository: 'myapp'
- script: |
trivy image \
--severity HIGH,CRITICAL \
--format json \
--output $(Build.ArtifactStagingDirectory)/trivy-image-results.json \
--exit-code 1 \
myapp:$(Build.BuildId)
displayName: 'Trivy Image Scan'
continueOnError: true
Trivy output for a Node.js image typically looks like this:
myapp:latest (ubuntu 22.04)
============================
Total: 14 (HIGH: 11, CRITICAL: 3)
┌──────────────────┬────────────────┬──────────┬───────────────────┬───────────────┬──────────────────────────────────────┐
│ Library │ Vulnerability │ Severity │ Installed Version │ Fixed Version │ Title │
├──────────────────┼────────────────┼──────────┼───────────────────┼───────────────┼──────────────────────────────────────┤
│ libssl3 │ CVE-2024-5535 │ CRITICAL │ 3.0.2-0ubuntu1.14 │ 3.0.2-0ubuntu │ openssl: SSL_select_next_proto │
│ │ │ │ │ 1.16 │ buffer overread │
├──────────────────┼────────────────┼──────────┼───────────────────┼───────────────┼──────────────────────────────────────┤
│ express │ CVE-2024-29041 │ HIGH │ 4.18.2 │ 4.19.2 │ Express.js open redirect │
│ │ │ │ │ │ vulnerability │
└──────────────────┴────────────────┴──────────┴───────────────────┴───────────────┴──────────────────────────────────────┘
Node.js (package-lock.json)
============================
Total: 7 (HIGH: 5, CRITICAL: 2)
The --exit-code 1 flag is critical. Without it, Trivy exits 0 regardless of findings, and your pipeline will not catch vulnerabilities.
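One pattern worth adopting is to split reporting from enforcement: record findings at every severity for the report artifact, then run a second, cheap pass that fails only on critical findings. A minimal sketch (the file name is a placeholder, and the second run reuses the cached vulnerability database):
- script: |
    # Report everything; never fail this step
    trivy filesystem . \
      --severity LOW,MEDIUM,HIGH,CRITICAL \
      --format json \
      --output $(Build.ArtifactStagingDirectory)/trivy-fs-full.json \
      --exit-code 0
    # Gate only on critical findings
    trivy filesystem . \
      --severity CRITICAL \
      --exit-code 1
  displayName: 'Trivy Scan (Report All, Gate on Critical)'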
npm audit Integration
The simplest security check for Node.js projects is npm audit, and there is no excuse for not running it. It checks your dependency tree against the npm advisory database.
steps:
- task: NodeTool@0
inputs:
versionSpec: '20.x'
- script: npm ci
displayName: 'Install Dependencies'
- script: |
npm audit --json > $(Build.ArtifactStagingDirectory)/npm-audit-results.json || true
npm audit --audit-level=high
displayName: 'npm Audit'
The first npm audit --json captures full results for reporting. The second npm audit --audit-level=high will exit non-zero only for high and critical vulnerabilities — you do not want to break builds over low-severity advisories.
A common pattern is to maintain an .npmrc file with audit-level=high so that developers also see appropriate warnings locally:
audit-level=high
Handling Known Vulnerabilities
Sometimes npm audit flags vulnerabilities in dev dependencies or in deeply nested transitive dependencies that you cannot fix immediately. Use the overrides field in package.json to force patched versions:
{
"overrides": {
"semver": ">=7.5.2",
"tough-cookie": ">=4.1.3"
}
}
npm applies these overrides at install time, so the vulnerable transitive dependencies resolve to patched versions regardless of what intermediate packages request.
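It is worth adding a quick sanity check to the pipeline that the overrides actually took effect. A sketch using npm ls against the two packages from the example above:
- script: |
    # npm ls prints where each package resolves and exits non-zero
    # if the tree is broken or a requested package is missing
    npm ls semver tough-cookie
  displayName: 'Verify Dependency Overrides'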
Snyk Integration in Pipelines
Snyk provides deeper dependency analysis than npm audit, including reachability analysis (is the vulnerable code actually called?) and fix PRs. The free tier covers open-source scanning generously.
Install the Snyk extension from the Azure DevOps Marketplace, then configure:
steps:
- task: SnykSecurityScan@1
displayName: 'Snyk Security Scan'
inputs:
serviceConnectionEndpoint: 'SnykConnection'
testType: 'app'
monitorWhen: 'always'
failOnIssues: true
severityThreshold: 'high'
additionalArguments: '--json-file-output=$(Build.ArtifactStagingDirectory)/snyk-results.json'
If you prefer not to use the marketplace task, run the Snyk CLI directly:
steps:
- script: |
npm install -g snyk
snyk auth $(SNYK_TOKEN)
snyk test --severity-threshold=high --json > $(Build.ArtifactStagingDirectory)/snyk-results.json || true
snyk test --severity-threshold=high
displayName: 'Snyk Security Test'
env:
SNYK_TOKEN: $(SnykToken)
The SNYK_TOKEN should be stored as a secret variable in your pipeline. Never put tokens in YAML files.
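A common way to manage the token is a variable group in the Library, with the value marked secret and the group referenced from the pipeline. A sketch (the group name is hypothetical):
variables:
- group: security-scanner-secrets  # contains SnykToken, marked as secret in the Library UI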
Snyk also supports container scanning:
- script: |
snyk container test myapp:$(Build.BuildId) \
--severity-threshold=high \
--json > $(Build.ArtifactStagingDirectory)/snyk-container-results.json || true
displayName: 'Snyk Container Scan'
env:
SNYK_TOKEN: $(SnykToken)
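The marketplace task's monitorWhen: 'always' input has a CLI counterpart, snyk monitor, which snapshots the dependency tree to the Snyk dashboard so you are alerted when new vulnerabilities are published against versions you have already shipped. A sketch:
- script: |
    # Record the current dependency tree in the Snyk UI for ongoing monitoring
    snyk monitor --project-name=myapp
  displayName: 'Snyk Monitor'
  env:
    SNYK_TOKEN: $(SnykToken)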
Credential Scanning with CredScan and gitleaks
Leaked credentials in source code are one of the most common and most damaging security failures. Two tools worth integrating are CredScan (Microsoft's proprietary scanner) and gitleaks (open source).
CredScan
CredScan is available through the Microsoft Security DevOps extension or as a standalone task if your organization has the appropriate licensing:
steps:
- task: CredScan@3
displayName: 'Run CredScan'
inputs:
outputFormat: 'sarif'
scanFolder: '$(Build.SourcesDirectory)'
suppressionsFile: '.config/credScanSuppressions.json'
- task: PostAnalysis@2
displayName: 'Check CredScan Results'
inputs:
CredScan: true
ToolLogsNotFoundAction: 'Standard'
gitleaks
gitleaks is open source and catches hardcoded secrets, API keys, and tokens. It scans the entire Git history, not just the current commit:
steps:
- script: |
wget -q https://github.com/gitleaks/gitleaks/releases/download/v8.18.0/gitleaks_8.18.0_linux_x64.tar.gz
tar -xzf gitleaks_8.18.0_linux_x64.tar.gz
./gitleaks detect \
--source=$(Build.SourcesDirectory) \
--report-format=json \
--report-path=$(Build.ArtifactStagingDirectory)/gitleaks-results.json \
--exit-code=1
displayName: 'gitleaks Secret Scan'
continueOnError: true
You will want a .gitleaks.toml configuration file to suppress false positives:
[allowlist]
description = "Allowlisted patterns"
paths = [
'''package-lock\.json''',
'''\.test\.js$''',
'''fixtures/'''
]
regexes = [
'''EXAMPLE_KEY_[A-Z]+''',
'''test-api-key-\d+'''
]
Typical gitleaks output:
Finding: AKIAIOSFODNN7EXAMPLE
Secret: AKIAIOSFODNN7EXAMPLE
RuleID: aws-access-key-id
Entropy: 3.684
File: config/aws.js
Line: 14
Commit: a1b2c3d4e5f6
Author: [email protected]
Date: 2025-11-15T10:30:00Z
OWASP ZAP for DAST
Static analysis catches code-level issues. Dynamic Application Security Testing (DAST) catches runtime vulnerabilities by actually sending requests to your running application. OWASP ZAP is the industry standard open-source DAST tool.
Running ZAP in a pipeline requires your application to be deployed or running as a service. A common approach is to spin up the app in the pipeline, run ZAP against it, then tear it down:
steps:
- script: |
npm ci
npm start &
sleep 10
displayName: 'Start Application'
- script: |
docker run --rm --network host \
-v $(Build.ArtifactStagingDirectory):/zap/wrk \
ghcr.io/zaproxy/zaproxy:stable \
zap-baseline.py \
-t http://localhost:8080 \
-J zap-results.json \
-r zap-report.html \
-c zap-rules.conf \
-I
displayName: 'OWASP ZAP Baseline Scan'
continueOnError: true
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)/zap-report.html'
artifactName: 'ZapReport'
The -I flag tells ZAP not to return a failure exit code for warnings. For stricter enforcement, remove it and let the rules file passed with -c decide which findings fail the scan:
# zap-rules.conf (tab-separated: rule ID, action, description)
10010	FAIL	(Cookie No HttpOnly Flag)
10011	FAIL	(Cookie Without Secure Flag)
10015	WARN	(Incomplete or No Cache-control)
10020	FAIL	(Anti-clickjacking Header Missing)
10021	FAIL	(X-Content-Type-Options Header Missing)
10038	FAIL	(Content Security Policy Header Not Set)
40012	FAIL	(Cross Site Scripting (Reflected))
40014	FAIL	(Cross Site Scripting (Persistent))
90011	WARN	(Charset Mismatch)
ZAP has three scan modes: baseline (passive, fast), API scan (targets OpenAPI specs), and full scan (active, slow, thorough). Use baseline in CI and reserve full scans for nightly or release pipelines.
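For the API scan mode, zap-api-scan.py generates requests from an OpenAPI, SOAP, or GraphQL definition rather than spidering the site. A sketch assuming a spec checked in as openapi.json at the repository root:
- script: |
    docker run --rm --network host \
      -v $(Build.SourcesDirectory):/zap/wrk \
      ghcr.io/zaproxy/zaproxy:stable \
      zap-api-scan.py \
        -t /zap/wrk/openapi.json \
        -f openapi \
        -r zap-api-report.html
  displayName: 'OWASP ZAP API Scan'
  continueOnError: true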
SonarQube Security Rules
SonarQube is primarily a code quality tool, but its security rules are substantial. It detects SQL injection patterns, XSS vulnerabilities, insecure cryptography usage, and more.
steps:
- task: SonarQubePrepare@5
inputs:
SonarQube: 'SonarQubeConnection'
scannerMode: 'CLI'
configMode: 'manual'
cliProjectKey: 'my-nodejs-app'
cliSources: 'src'
extraProperties: |
sonar.javascript.lcov.reportPaths=coverage/lcov.info
sonar.qualitygate.wait=true
sonar.qualitygate.timeout=300
- script: npm test -- --coverage
displayName: 'Run Tests with Coverage'
- task: SonarQubeAnalyze@5
displayName: 'Run SonarQube Analysis'
- task: SonarQubePublish@5
displayName: 'Publish Quality Gate Result'
The key setting is sonar.qualitygate.wait=true. This blocks the pipeline until SonarQube finishes analysis and returns the quality gate status. If your quality gate includes security hotspot review thresholds, the pipeline fails when unreviewed hotspots exceed the limit.
For the security-specific quality gate, configure these conditions in SonarQube:
- Vulnerabilities: 0 new vulnerabilities on new code
- Security Hotspots Reviewed: 100% on new code
- Security Rating: A on new code
License Compliance Scanning
License compliance is a security concern that teams often overlook. Using a GPL-licensed dependency in a proprietary project can create legal liability. Scanning for license compliance belongs in your security pipeline.
steps:
- script: |
npm ci
npx license-checker --json --out $(Build.ArtifactStagingDirectory)/licenses.json
npx license-checker --failOn 'GPL-2.0;GPL-3.0;AGPL-1.0;AGPL-3.0' --summary
displayName: 'License Compliance Check'
The --failOn flag accepts a semicolon-separated list of licenses that should break the build. This gives you hard enforcement: no one can accidentally introduce a copyleft dependency.
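A denylist only blocks the licenses you thought to list. license-checker also supports the stricter inverse via --onlyAllow, which fails on anything outside an approved set; a sketch with a typical permissive allowlist:
- script: |
    # Fail on any license that is not explicitly approved
    npx license-checker --onlyAllow 'MIT;Apache-2.0;BSD-2-Clause;BSD-3-Clause;ISC' --summary
  displayName: 'License Allowlist Check'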
Scan Result Aggregation and Reporting
Running five different scanners produces five different report formats. You need a unified view. Here is a Node.js script that aggregates results from multiple tools into a single security report:
// aggregate-security-results.js
var fs = require("fs");
var path = require("path");
function loadJsonSafe(filePath) {
try {
var content = fs.readFileSync(filePath, "utf8");
return JSON.parse(content);
} catch (err) {
console.log("Warning: Could not load " + filePath + ": " + err.message);
return null;
}
}
function parseTrivyResults(data) {
var findings = [];
if (!data || !data.Results) return findings;
data.Results.forEach(function(result) {
if (!result.Vulnerabilities) return;
result.Vulnerabilities.forEach(function(vuln) {
findings.push({
tool: "trivy",
severity: vuln.Severity,
id: vuln.VulnerabilityID,
package: vuln.PkgName,
installed: vuln.InstalledVersion,
fixed: vuln.FixedVersion || "N/A",
title: vuln.Title || vuln.Description || "No description",
target: result.Target
});
});
});
return findings;
}
function parseNpmAuditResults(data) {
var findings = [];
if (!data || !data.vulnerabilities) return findings;
Object.keys(data.vulnerabilities).forEach(function(pkgName) {
var vuln = data.vulnerabilities[pkgName];
findings.push({
tool: "npm-audit",
severity: vuln.severity.toUpperCase(),
id: vuln.via && vuln.via[0] && vuln.via[0].url ? vuln.via[0].url : "N/A",
package: pkgName,
installed: vuln.range || "unknown",
fixed: vuln.fixAvailable ? "yes" : "no",
title: vuln.via && vuln.via[0] && vuln.via[0].title ? vuln.via[0].title : "Vulnerable dependency",
target: "package.json"
});
});
return findings;
}
function parseGitleaksResults(data) {
var findings = [];
if (!data || !Array.isArray(data)) return findings;
data.forEach(function(leak) {
findings.push({
tool: "gitleaks",
severity: "CRITICAL",
id: leak.RuleID,
package: "N/A",
installed: "N/A",
fixed: "Remove secret and rotate",
title: "Secret detected: " + leak.RuleID,
target: leak.File + ":" + leak.StartLine
});
});
return findings;
}
function parseSnykResults(data) {
var findings = [];
if (!data || !data.vulnerabilities) return findings;
data.vulnerabilities.forEach(function(vuln) {
findings.push({
tool: "snyk",
severity: vuln.severity.toUpperCase(),
id: vuln.id,
package: vuln.packageName,
installed: vuln.version,
fixed: vuln.fixedIn ? vuln.fixedIn.join(", ") : "N/A",
title: vuln.title,
target: vuln.from ? vuln.from.join(" > ") : "unknown"
});
});
return findings;
}
function generateReport(allFindings) {
var summary = { CRITICAL: 0, HIGH: 0, MEDIUM: 0, LOW: 0 };
allFindings.forEach(function(f) {
if (summary[f.severity] !== undefined) {
summary[f.severity]++;
}
});
var report = {
timestamp: new Date().toISOString(),
summary: summary,
totalFindings: allFindings.length,
criticalAndHigh: summary.CRITICAL + summary.HIGH,
findings: allFindings,
toolCoverage: {}
};
allFindings.forEach(function(f) {
if (!report.toolCoverage[f.tool]) {
report.toolCoverage[f.tool] = 0;
}
report.toolCoverage[f.tool]++;
});
return report;
}
function main() {
var resultsDir = process.argv[2] || "./security-results";
var allFindings = [];
var trivyData = loadJsonSafe(path.join(resultsDir, "trivy-fs-results.json"));
allFindings = allFindings.concat(parseTrivyResults(trivyData));
var npmData = loadJsonSafe(path.join(resultsDir, "npm-audit-results.json"));
allFindings = allFindings.concat(parseNpmAuditResults(npmData));
var gitleaksData = loadJsonSafe(path.join(resultsDir, "gitleaks-results.json"));
allFindings = allFindings.concat(parseGitleaksResults(gitleaksData));
var snykData = loadJsonSafe(path.join(resultsDir, "snyk-results.json"));
allFindings = allFindings.concat(parseSnykResults(snykData));
var report = generateReport(allFindings);
var outputPath = path.join(resultsDir, "unified-security-report.json");
fs.writeFileSync(outputPath, JSON.stringify(report, null, 2));
console.log("=== Security Scan Summary ===");
console.log("Total findings: " + report.totalFindings);
console.log("Critical: " + report.summary.CRITICAL);
console.log("High: " + report.summary.HIGH);
console.log("Medium: " + report.summary.MEDIUM);
console.log("Low: " + report.summary.LOW);
console.log("");
console.log("Tool coverage:");
Object.keys(report.toolCoverage).forEach(function(tool) {
console.log(" " + tool + ": " + report.toolCoverage[tool] + " findings");
});
console.log("");
console.log("Report written to: " + outputPath);
if (report.criticalAndHigh > 0) {
console.log("##vso[task.logissue type=error]Found " + report.criticalAndHigh + " critical/high severity findings");
process.exit(1);
}
console.log("##vso[task.complete result=Succeeded;]No critical or high findings");
}
main();
The ##vso[task.logissue] and ##vso[task.complete] lines are Azure DevOps logging commands. They surface errors directly in the pipeline UI rather than burying them in log output.
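Another logging command worth knowing is ##vso[task.uploadsummary], which attaches a Markdown file to the build summary page. A sketch that wraps the aggregator so the summary is published even when the gate fails:
- script: |
    AGG_EXIT=0
    node aggregate-security-results.js ./security-results/merged || AGG_EXIT=$?
    # Attach a short Markdown summary to the build summary tab
    echo "## Security Scan Summary" > summary.md
    echo "See the UnifiedSecurityReport artifact for full findings." >> summary.md
    echo "##vso[task.uploadsummary]$(System.DefaultWorkingDirectory)/summary.md"
    exit $AGG_EXIT
  displayName: 'Aggregate with Build Summary'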
Breaking Builds on Critical Findings
The aggregation script above exits with code 1 when critical or high findings exist. But you need more granular control. Here is a dedicated gate script:
// security-gate.js
var fs = require("fs");
var reportPath = process.argv[2] || "./security-results/unified-security-report.json";
var maxCritical = parseInt(process.argv[3] || "0", 10);
var maxHigh = parseInt(process.argv[4] || "0", 10);
var report = JSON.parse(fs.readFileSync(reportPath, "utf8"));
var passed = true;
if (report.summary.CRITICAL > maxCritical) {
console.log("##vso[task.logissue type=error]GATE FAILED: " +
report.summary.CRITICAL + " critical findings (max: " + maxCritical + ")");
passed = false;
}
if (report.summary.HIGH > maxHigh) {
console.log("##vso[task.logissue type=error]GATE FAILED: " +
report.summary.HIGH + " high findings (max: " + maxHigh + ")");
passed = false;
}
if (!passed) {
console.log("");
console.log("Critical and High findings:");
report.findings
.filter(function(f) { return f.severity === "CRITICAL" || f.severity === "HIGH"; })
.forEach(function(f) {
console.log(" [" + f.severity + "] " + f.tool + ": " + f.title + " (" + f.package + ")");
});
process.exit(1);
}
console.log("Security gate passed. " +
report.summary.CRITICAL + " critical, " +
report.summary.HIGH + " high findings within threshold.");
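From a pipeline step, the gate takes the report path, the maximum allowed critical count, and the maximum allowed high count as positional arguments (the same invocation appears in the complete example below):
- script: |
    # Allow zero critical and up to three high findings
    node security-gate.js ./security-results/unified-security-report.json 0 3
  displayName: 'Enforce Security Gate'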
Custom Security Gates
Beyond simple threshold checks, you can implement custom gates that account for business context. For example, allowing known vulnerabilities that have accepted risk tickets:
// custom-security-gate.js
var fs = require("fs");
var acceptedRisksPath = "./security/accepted-risks.json";
var reportPath = process.argv[2] || "./security-results/unified-security-report.json";
function loadAcceptedRisks() {
try {
return JSON.parse(fs.readFileSync(acceptedRisksPath, "utf8"));
} catch (err) {
return [];
}
}
function isAccepted(finding, acceptedRisks) {
return acceptedRisks.some(function(risk) {
var idMatch = risk.id === finding.id;
var notExpired = new Date(risk.expiresAt) > new Date();
return idMatch && notExpired;
});
}
var report = JSON.parse(fs.readFileSync(reportPath, "utf8"));
var acceptedRisks = loadAcceptedRisks();
var unacceptedCritical = report.findings.filter(function(f) {
return f.severity === "CRITICAL" && !isAccepted(f, acceptedRisks);
});
var unacceptedHigh = report.findings.filter(function(f) {
return f.severity === "HIGH" && !isAccepted(f, acceptedRisks);
});
console.log("Total findings: " + report.totalFindings);
console.log("Accepted risks: " + acceptedRisks.length);
console.log("Unaccepted critical: " + unacceptedCritical.length);
console.log("Unaccepted high: " + unacceptedHigh.length);
if (unacceptedCritical.length > 0 || unacceptedHigh.length > 0) {
console.log("##vso[task.logissue type=error]Unaccepted critical/high findings detected");
unacceptedCritical.concat(unacceptedHigh).forEach(function(f) {
console.log(" [" + f.severity + "] " + f.id + " - " + f.title);
});
process.exit(1);
}
console.log("##vso[task.complete result=Succeeded;]All findings accepted or below threshold");
The accepted-risks.json file looks like this:
[
{
"id": "CVE-2024-29041",
"reason": "Express route not exposed externally. Risk accepted per JIRA-1234",
"acceptedBy": "[email protected]",
"acceptedAt": "2025-12-01T00:00:00Z",
"expiresAt": "2026-06-01T00:00:00Z"
}
]
Notice the expiresAt field. Accepted risks should never be permanent. Force re-evaluation on a schedule.
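One way to force that re-evaluation is a scheduled run, so a risk acceptance that expires during a quiet period still fails a build within a day rather than on the next unrelated push. A sketch of the trigger block:
schedules:
- cron: '0 3 * * *'          # every night at 03:00 UTC
  displayName: 'Nightly security re-scan'
  branches:
    include:
    - master
  always: true               # run even if nothing has changed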
Complete Working Example
Here is a comprehensive pipeline YAML that ties everything together:
trigger:
- master
pool:
vmImage: 'ubuntu-latest'
variables:
nodeVersion: '20.x'
trivyVersion: '0.50.0'
gitleaksVersion: '8.18.0'
stages:
- stage: SecurityScans
displayName: 'Security Scanning'
jobs:
- job: DependencyScanning
displayName: 'Dependency & License Scanning'
steps:
- task: NodeTool@0
inputs:
versionSpec: '$(nodeVersion)'
- script: npm ci
displayName: 'Install Dependencies'
- script: |
npm audit --json > $(Build.ArtifactStagingDirectory)/npm-audit-results.json || true
displayName: 'npm Audit (JSON)'
- script: |
npx license-checker --json --out $(Build.ArtifactStagingDirectory)/licenses.json
npx license-checker --failOn 'GPL-2.0;GPL-3.0;AGPL-1.0;AGPL-3.0' --summary
displayName: 'License Compliance'
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: 'DependencyResults'
- job: StaticAnalysis
displayName: 'SAST & Secret Scanning'
steps:
- script: |
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v$(trivyVersion)
displayName: 'Install Trivy'
- script: |
trivy filesystem . \
--severity HIGH,CRITICAL \
--format json \
--output $(Build.ArtifactStagingDirectory)/trivy-fs-results.json \
--exit-code 0
displayName: 'Trivy Filesystem Scan'
- script: |
wget -q https://github.com/gitleaks/gitleaks/releases/download/v$(gitleaksVersion)/gitleaks_$(gitleaksVersion)_linux_x64.tar.gz
tar -xzf gitleaks_$(gitleaksVersion)_linux_x64.tar.gz
./gitleaks detect \
--source=$(Build.SourcesDirectory) \
--report-format=json \
--report-path=$(Build.ArtifactStagingDirectory)/gitleaks-results.json \
--exit-code 0 \
--verbose
displayName: 'gitleaks Secret Scan'
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: 'StaticAnalysisResults'
- job: ContainerScanning
displayName: 'Container Image Scanning'
steps:
- task: Docker@2
displayName: 'Build Image'
inputs:
command: 'build'
Dockerfile: 'Dockerfile'
tags: '$(Build.BuildId)'
repository: 'myapp'
- script: |
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v$(trivyVersion)
trivy image \
--severity HIGH,CRITICAL \
--format json \
--output $(Build.ArtifactStagingDirectory)/trivy-image-results.json \
--exit-code 0 \
myapp:$(Build.BuildId)
displayName: 'Trivy Image Scan'
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: 'ContainerResults'
- stage: SecurityGate
displayName: 'Security Gate'
dependsOn: SecurityScans
jobs:
- job: AggregateAndGate
displayName: 'Aggregate Results & Enforce Gate'
steps:
- task: NodeTool@0
inputs:
versionSpec: '$(nodeVersion)'
- task: DownloadBuildArtifacts@1
inputs:
buildType: 'current'
downloadType: 'specific'
itemPattern: '**/*.json'
downloadPath: '$(Build.SourcesDirectory)/security-results'
- script: |
mkdir -p ./security-results/merged
find ./security-results -name "*.json" -exec cp {} ./security-results/merged/ \;
displayName: 'Merge Result Files'
- script: |
node aggregate-security-results.js ./security-results/merged
displayName: 'Aggregate Security Results'
- script: |
node security-gate.js ./security-results/merged/unified-security-report.json 0 3
displayName: 'Enforce Security Gate'
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.SourcesDirectory)/security-results/merged/unified-security-report.json'
artifactName: 'UnifiedSecurityReport'
condition: always()
- stage: DAST
displayName: 'Dynamic Security Testing'
dependsOn: SecurityGate
jobs:
- job: ZapScan
displayName: 'OWASP ZAP Baseline Scan'
steps:
- task: NodeTool@0
inputs:
versionSpec: '$(nodeVersion)'
- script: |
npm ci
npm start &
sleep 15
displayName: 'Start Application'
- script: |
docker run --rm --network host \
-v $(Build.ArtifactStagingDirectory):/zap/wrk \
ghcr.io/zaproxy/zaproxy:stable \
zap-baseline.py \
-t http://localhost:8080 \
-J zap-results.json \
-r zap-report.html \
-I
displayName: 'OWASP ZAP Scan'
continueOnError: true
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)'
artifactName: 'DastResults'
condition: always()
This pipeline runs dependency scanning, static analysis, and container scanning in parallel during the first stage. Results are collected and evaluated in the security gate stage. Only if the gate passes does DAST run against the live application in the final stage.
Common Issues and Troubleshooting
1. Trivy failing with "db open error"
FATAL db open error: unable to open DB: open /root/.cache/trivy/db/trivy.db: no such file or directory
This happens when Trivy cannot download its vulnerability database. On self-hosted agents behind a corporate proxy, the database download gets blocked. Fix it by pre-caching the database or configuring proxy settings:
- script: |
export HTTP_PROXY=http://proxy.corp.com:8080
export HTTPS_PROXY=http://proxy.corp.com:8080
trivy filesystem --download-db-only
trivy filesystem . --skip-db-update --format json --output results.json
displayName: 'Trivy with Proxy'
2. npm audit returning exit code 1 and breaking unrelated steps
npm ERR! code EAUDITNOLOCK
npm ERR! audit Neither compatible lockfile nor package.json found.
This occurs when npm audit runs before npm install or npm ci. Always install dependencies first. Also, if you are using npm audit --json and piping output, the non-zero exit code from audit itself will fail the step. Use || true to capture results without breaking:
- script: |
npm audit --json > results.json || true
displayName: 'Capture Audit Results'
3. gitleaks scanning the entire Git history and timing out
time="2025-12-01T10:00:00Z" level=info msg="scanning 14,532 commits..."
##[error]The job running on agent Hosted Agent ran longer than the maximum allowed time of 60 minutes.
For large repositories, scanning the full history takes too long. Limit the scan to the current commit or the diff from the target branch:
- script: |
./gitleaks detect \
--source=$(Build.SourcesDirectory) \
--log-opts="--since='30 days ago'" \
--report-format=json \
--report-path=gitleaks-results.json
displayName: 'gitleaks - Last 30 Days Only'
4. OWASP ZAP cannot reach localhost from Docker
ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded
When ZAP runs in a Docker container on a Linux agent, localhost refers to the container's own network namespace, not the host. Use --network host in the Docker run command, or replace localhost with the host's IP. On Azure DevOps hosted agents, --network host works reliably:
- script: |
docker run --rm --network host \
ghcr.io/zaproxy/zaproxy:stable \
zap-baseline.py -t http://localhost:8080 -I
If that still fails, check that the application is actually listening. Add a health check wait loop:
- script: |
for i in $(seq 1 30); do
curl -s http://localhost:8080/health && break
echo "Waiting for app... attempt $i"
sleep 2
done
displayName: 'Wait for Application'
5. Snyk CLI authentication failing in pipeline
Error: Auth failed! Please check your API token and try again.
MissingApiTokenError: `snyk` requires an authenticated account.
The SNYK_TOKEN must be available as an environment variable. Secret pipeline variables are not mapped into the environment automatically, so store the token as a secret variable or in a variable group and map it explicitly:
- script: |
snyk auth $SNYK_TOKEN
snyk test --severity-threshold=high
displayName: 'Snyk Test'
env:
SNYK_TOKEN: $(SnykToken)
The env block maps the secret variable to an environment variable. Without this mapping, the value is not available to the script.
Best Practices
Scan early, scan often. Run dependency and secret scanning on every pull request. Reserve DAST and full container scans for merge-to-main or nightly pipelines. The faster developers get feedback, the cheaper the fix.
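One way to implement that split inside a single pipeline is to gate the expensive stages on Build.Reason, so pull request validation builds skip DAST while scheduled and manual runs execute it. A sketch against the staged pipeline above:
- stage: DAST
  displayName: 'Dynamic Security Testing'
  dependsOn: SecurityGate
  # Skip DAST on PR validation builds; run it for CI, scheduled, and manual runs
  condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))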
Never suppress findings permanently. Use accepted risk files with expiration dates. Every suppression should reference a ticket number and an owner. Review accepted risks quarterly.
Pin scanner versions. Using trivy:latest or gitleaks:latest means your pipeline behavior changes without any code change. Pin to specific versions and update deliberately.
Separate scanning from gating. Run all scanners with continueOnError: true or --exit-code 0, collect all results, then make a single pass/fail decision in an aggregation step. This way you get the complete picture, not just the first failure.
Store results as pipeline artifacts. Every scan result should be published as a build artifact. This creates an audit trail and lets you trend vulnerabilities over time. Feed these into a dashboard or a tool like DefectDojo for tracking.
Use SARIF format when available. SARIF (Static Analysis Results Interchange Format) is the standard format that Azure DevOps, GitHub, and most security tools understand. Trivy, gitleaks, and SonarQube all support SARIF output. Using a common format simplifies aggregation.
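As a sketch, here is Trivy and gitleaks emitting SARIF and publishing it under an artifact named CodeAnalysisLogs, the artifact name the marketplace SARIF SAST Scans Tab extension reads (installing that extension is an assumption, not something covered above):
- script: |
    trivy filesystem . --format sarif \
      --output $(Build.ArtifactStagingDirectory)/trivy.sarif
    ./gitleaks detect --source=. --report-format sarif \
      --report-path $(Build.ArtifactStagingDirectory)/gitleaks.sarif --exit-code 0
  displayName: 'Emit SARIF Reports'
- task: PublishBuildArtifacts@1
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'CodeAnalysisLogs'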
Run DAST in a staging environment, not production. ZAP's active scanning sends malicious payloads. You do not want SQL injection test strings hitting your production database. Always target a disposable or staging environment.
Implement graduated severity gates. Start by failing only on critical findings. Once the team has cleaned those up, tighten the gate to include high severity. Trying to enforce zero vulnerabilities on day one will result in the security gate being disabled entirely.
Monitor scanner coverage, not just findings. Track which scanners ran successfully. A pipeline that skips the secret scanner due to an installation failure is worse than one with known findings, because you have a blind spot you do not know about.
Keep security configuration in the repository. The .gitleaks.toml, zap-rules.conf, accepted-risks.json, and gate scripts should all live in version control alongside the application code. Security configuration is code.
References
- Trivy Documentation - Comprehensive vulnerability scanner
- gitleaks GitHub Repository - Secret detection tool
- OWASP ZAP Documentation - Dynamic application security testing
- Microsoft Security DevOps Extension - Azure DevOps marketplace
- Snyk CLI Documentation - Dependency vulnerability scanning
- SonarQube Security Rules - JavaScript security rules
- SARIF Specification - Static Analysis Results Interchange Format
- Azure Pipelines Security Best Practices - Microsoft's official guidance
- npm audit Documentation - Built-in Node.js dependency auditing