SAST and DAST Integration in CI/CD
Implement comprehensive SAST and DAST security testing in Azure DevOps CI/CD pipelines with vulnerability correlation and automated gating
Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) are complementary approaches to finding vulnerabilities in your applications before attackers do. SAST analyzes your source code without running it, catching issues like SQL injection patterns, hardcoded secrets, and insecure dependencies at build time. DAST attacks your running application from the outside, finding runtime vulnerabilities like authentication bypasses, misconfigured headers, and actual exploitable injection points. When you integrate both into your Azure DevOps CI/CD pipeline, you get comprehensive coverage that neither approach delivers alone.
Prerequisites
- Azure DevOps organization with Pipelines enabled
- Node.js 18+ installed on build agents
- Basic understanding of YAML pipeline syntax
- Docker available on agents (for DAST tooling)
- A deployable test environment for DAST scanning
- Familiarity with npm security tooling
SAST vs DAST: Fundamentals
Before wiring anything into your pipeline, you need to understand what each approach actually catches and where it falls short.
SAST reads your source code and applies pattern matching, data flow analysis, and taint tracking to find potential vulnerabilities. It runs fast, catches issues early, and works without deploying anything. The downside is false positives. SAST tools flag code patterns that look dangerous even when runtime context makes them safe. SAST also cannot find configuration issues, broken authentication flows, or vulnerabilities that only manifest at runtime.
DAST sends real HTTP requests against a running application. It discovers what an attacker would find: exposed endpoints, missing security headers, injection vulnerabilities that actually work, broken access controls. The downside is that DAST requires a running environment, takes longer to execute, and cannot tell you which line of code caused the vulnerability.
Here is when to use each:
| Aspect | SAST | DAST |
|---|---|---|
| When it runs | Build time | After deployment |
| What it needs | Source code | Running application URL |
| Speed | Fast (minutes) | Slower (10-60+ minutes) |
| False positive rate | High | Low to moderate |
| Coverage | All code paths | Only reachable endpoints |
| Fix guidance | Points to exact code line | Points to vulnerable endpoint |
| Best for | Injection patterns, secrets, unsafe APIs | Auth issues, header misconfig, runtime bugs |
The key insight is that SAST and DAST findings overlap on some vulnerabilities (like SQL injection), but each catches things the other misses entirely. You need both.
SAST Tools for Node.js
ESLint Security Plugins
The fastest way to add SAST to a Node.js project is through ESLint security plugins. These integrate with your existing linting setup and catch common vulnerability patterns.
Install the security plugins:
npm install --save-dev eslint-plugin-security eslint-plugin-no-unsanitized
Create or update your .eslintrc.json:
{
"plugins": ["security", "no-unsanitized"],
"extends": [
"plugin:security/recommended-legacy"
],
"rules": {
"security/detect-object-injection": "warn",
"security/detect-non-literal-regexp": "error",
"security/detect-non-literal-require": "error",
"security/detect-non-literal-fs-filename": "warn",
"security/detect-eval-with-expression": "error",
"security/detect-child-process": "warn",
"security/detect-possible-timing-attacks": "warn",
"no-unsanitized/method": "error",
"no-unsanitized/property": "error"
}
}
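To give developers the same checks locally that the pipeline runs, you can wrap the scan in npm scripts (the script names here are illustrative):
{
  "scripts": {
    "lint:security": "eslint . --ext .js",
    "lint:security:report": "eslint . --ext .js --format json --output-file eslint-security.json"
  }
}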
ESLint security plugins are lightweight and fast, but they only catch surface-level patterns. For deeper analysis, you need Semgrep or CodeQL.
Semgrep
Semgrep is a pattern-matching SAST tool that understands code structure, not just text patterns. It supports custom rules and ships with thousands of community rules for Node.js.
Create a .semgrep.yml in your project root with custom rules:
rules:
- id: express-sql-injection
patterns:
- pattern: |
$DB.query($ARG + ...)
- pattern-not: |
$DB.query($ARG + $PARAM, ...)
message: "Possible SQL injection via string concatenation in query"
severity: ERROR
languages: [javascript]
- id: express-open-redirect
patterns:
- pattern: |
res.redirect($REQ.$PROP.$FIELD)
message: "Open redirect vulnerability - user input flows to redirect"
severity: WARNING
languages: [javascript]
- id: hardcoded-jwt-secret
pattern: |
jwt.sign($PAYLOAD, "...")
message: "JWT signed with hardcoded secret"
severity: ERROR
languages: [javascript]
Run Semgrep locally to validate your rules:
npx semgrep --config .semgrep.yml --config "p/nodejs" --json --output semgrep-results.json .
CodeQL
CodeQL is GitHub's deep SAST engine. It builds a queryable database of your code and runs data flow analysis. Azure DevOps integrates with CodeQL through GitHub Advanced Security for Azure DevOps, or you can run the CodeQL CLI directly on your agents.
Create a CodeQL query for Node.js taint tracking in queries/express-injection.ql:
/**
 * @name User input flows to dangerous sink
 * @id js/express-user-input-to-dangerous-sink
 * @kind path-problem
 * @problem.severity error
 */
import javascript
import DataFlow::PathGraph

class ExpressSource extends DataFlow::Node {
  ExpressSource() {
    exists(DataFlow::ParameterNode param |
      param.getName() = "req" and
      this = param.getAPropertyRead+()
    )
  }
}

class DangerousSink extends DataFlow::Node {
  DangerousSink() {
    exists(DataFlow::CallNode call |
      call.getCalleeName() = ["exec", "execSync", "query"] and
      this = call.getArgument(0)
    )
  }
}

class ExpressInjectionConfig extends TaintTracking::Configuration {
  ExpressInjectionConfig() { this = "ExpressInjectionConfig" }

  override predicate isSource(DataFlow::Node source) { source instanceof ExpressSource }

  override predicate isSink(DataFlow::Node sink) { sink instanceof DangerousSink }
}

from ExpressInjectionConfig cfg, DataFlow::PathNode source, DataFlow::PathNode sink
where cfg.hasFlowPath(source, sink)
select sink.getNode(), source, sink, "User input from an Express request reaches a dangerous sink."
CodeQL is the most thorough SAST option but also the slowest. I recommend running it on nightly builds or pull request pipelines, not on every commit.
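If you run CodeQL directly on an agent rather than through Advanced Security, the flow is two CLI calls: build a database, then analyze it with your queries. Here is a minimal sketch for a nightly job, assuming the CodeQL CLI bundle is already installed on the agent and available on PATH:
- script: |
    # Build a CodeQL database from the JavaScript sources in the repo
    codeql database create codeql-db --language=javascript --source-root=.
    # Run the custom query from above and write SARIF for later publishing
    codeql database analyze codeql-db queries/express-injection.ql \
      --format=sarif-latest \
      --output=$(Build.ArtifactStagingDirectory)/codeql-results.sarif
  displayName: "CodeQL analysis (nightly)"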
Configuring SAST in Azure Pipelines
Here is a SAST stage that runs ESLint security checks and Semgrep in parallel:
stages:
- stage: SAST
displayName: "Static Security Analysis"
jobs:
- job: ESLintSecurity
displayName: "ESLint Security Scan"
pool:
vmImage: "ubuntu-latest"
steps:
- task: NodeTool@0
inputs:
versionSpec: "18.x"
- script: npm ci
displayName: "Install dependencies"
- script: |
npx eslint . \
--plugin security \
--plugin no-unsanitized \
--format json \
--output-file $(Build.ArtifactStagingDirectory)/eslint-security.json || true
displayName: "Run ESLint security scan"
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: "$(Build.ArtifactStagingDirectory)/eslint-security.json"
artifactName: "sast-eslint"
- job: SemgrepScan
displayName: "Semgrep Scan"
pool:
vmImage: "ubuntu-latest"
steps:
- script: |
python3 -m pip install semgrep
semgrep --config .semgrep.yml \
--config "p/nodejs" \
--config "p/owasp-top-ten" \
--json \
--output $(Build.ArtifactStagingDirectory)/semgrep-results.json \
. || true
displayName: "Run Semgrep scan"
- script: |
node -e "
var fs = require('fs');
var results = JSON.parse(fs.readFileSync('$(Build.ArtifactStagingDirectory)/semgrep-results.json', 'utf8'));
var errors = results.results.filter(function(r) { return r.extra.severity === 'ERROR'; });
console.log('Total findings: ' + results.results.length);
console.log('Critical/Error findings: ' + errors.length);
if (errors.length > 0) {
console.log('##vso[task.logissue type=error]Found ' + errors.length + ' critical SAST findings');
process.exit(1);
}
"
displayName: "Evaluate Semgrep results"
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: "$(Build.ArtifactStagingDirectory)/semgrep-results.json"
artifactName: "sast-semgrep"
condition: always()
The key detail here is the || true after the scan commands. SAST tools often exit with non-zero codes when they find issues, but you want to evaluate the results yourself in a separate step where you can apply severity thresholds.
DAST Tools for Node.js APIs
OWASP ZAP
OWASP ZAP is the gold standard for open-source DAST. It runs as a proxy, spider, and active scanner. For CI/CD, you use ZAP's Docker image and its automation framework.
Create a zap-automation.yaml configuration:
env:
contexts:
- name: "NodeAPI"
urls:
- "http://test-app:3000"
includePaths:
- "http://test-app:3000/api/.*"
excludePaths:
- "http://test-app:3000/health"
parameters:
failOnError: true
failOnWarning: false
progressToStdout: true
jobs:
- type: openapi
parameters:
apiUrl: "http://test-app:3000/api/docs/swagger.json"
targetUrl: "http://test-app:3000"
- type: spider
parameters:
context: "NodeAPI"
maxDuration: 5
maxDepth: 5
- type: activeScan
parameters:
context: "NodeAPI"
maxRuleDurationInMins: 2
maxScanDurationInMins: 30
policyDefinition:
rules:
- id: 40012 # Cross Site Scripting (Reflected)
strength: high
threshold: medium
- id: 40014 # Cross Site Scripting (Persistent)
strength: high
threshold: medium
- id: 40018 # SQL Injection
strength: insane
threshold: low
- id: 40032 # Server Side Request Forgery
strength: high
threshold: medium
- type: report
parameters:
template: "traditional-json"
reportDir: "/zap/reports"
reportFile: "zap-report.json"
Nuclei
Nuclei is a fast template-based scanner that complements ZAP. It excels at finding known misconfigurations, exposed admin panels, and technology-specific vulnerabilities.
# Run Nuclei against your test environment
docker run --rm \
-v $(pwd)/nuclei-results:/output \
projectdiscovery/nuclei:latest \
-u http://test-app:3000 \
-t technologies/ \
-t misconfiguration/ \
-t exposures/ \
-t vulnerabilities/ \
-severity critical,high,medium \
-json-export /output/nuclei-results.json
Configuring DAST in Azure Pipelines
DAST requires a running application, so you need to deploy to a test environment first. Here is a DAST stage that deploys, scans, and tears down:
- stage: DAST
displayName: "Dynamic Security Analysis"
dependsOn: SAST
jobs:
- job: DeployAndScan
displayName: "Deploy Test Environment & DAST Scan"
pool:
vmImage: "ubuntu-latest"
services:
postgres:
image: postgres:15
ports:
- 5432:5432
env:
POSTGRES_PASSWORD: testpassword
POSTGRES_DB: testdb
steps:
- task: NodeTool@0
inputs:
versionSpec: "18.x"
- script: npm ci
displayName: "Install dependencies"
- script: |
export DATABASE_URL="postgres://postgres:testpassword@localhost:5432/testdb"
export NODE_ENV=test
export PORT=3000
npm start &
sleep 10
curl -f http://localhost:3000/health || exit 1
displayName: "Start application"
- script: |
docker run --rm --network host \
-v $(pwd)/zap-automation.yaml:/zap/wrk/automation.yaml \
-v $(Build.ArtifactStagingDirectory):/zap/reports \
ghcr.io/zaproxy/zaproxy:stable \
zap.sh -cmd \
-autorun /zap/wrk/automation.yaml \
-config api.disablekey=true || true
displayName: "Run OWASP ZAP scan"
- script: |
docker run --rm --network host \
-v $(Build.ArtifactStagingDirectory):/output \
projectdiscovery/nuclei:latest \
-u http://localhost:3000 \
-t misconfiguration/ \
-t exposures/ \
-severity critical,high \
-json-export /output/nuclei-results.json || true
displayName: "Run Nuclei scan"
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: "$(Build.ArtifactStagingDirectory)"
artifactName: "dast-results"
condition: always()
The --network host flag is critical here. ZAP and Nuclei run in Docker containers but need to reach the application running on the host network. Without it, localhost:3000 resolves to the container itself, not your app.
IAST: Runtime Analysis Overview
Interactive Application Security Testing (IAST) instruments your application at runtime, watching data flow through actual code paths during testing. IAST agents hook into your Node.js runtime and observe:
- Which user inputs reach which sinks (database queries, file operations, shell commands)
- Whether sanitization functions are applied before dangerous operations
- Runtime type information that SAST cannot determine statically
For Node.js, tools like Contrast Security and Hdiv provide IAST agents. You install them as npm packages and load them before your application starts:
// At the very top of app.js, before any other require
if (process.env.IAST_ENABLED === 'true') {
require('@contrast/agent');
}
var express = require('express');
var app = express();
// ... rest of application
IAST provides the lowest false positive rate of the three application security testing approaches because it observes actual runtime behavior. The trade-off is performance overhead (typically 5-15% slower) and the requirement to exercise code paths through testing. If your test suite has low coverage, IAST will miss vulnerabilities in untested code.
I recommend running IAST during your integration test suite in CI/CD, not in production. The performance impact is acceptable in test environments, and your tests provide the code path coverage IAST needs.
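In the pipeline, that can be as simple as setting the flag for the integration test job so the guard in app.js loads the agent (the npm script name here is illustrative):
- script: |
    # The IAST_ENABLED guard at the top of app.js loads the agent only when this is set
    export IAST_ENABLED=true
    npm run test:integration
  displayName: "Integration tests with IAST agent"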
Correlating SAST and DAST Findings
The biggest challenge with running both SAST and DAST is dealing with duplicate findings. Both tools might flag the same SQL injection vulnerability, SAST from the code side and DAST from the network side. Without correlation, your team wastes time triaging the same issue twice.
Here is a Node.js correlation tool that maps DAST findings back to SAST findings:
// correlate-findings.js
var fs = require('fs');
var path = require('path');
var crypto = require('crypto');
function loadSemgrepResults(filePath) {
var raw = JSON.parse(fs.readFileSync(filePath, 'utf8'));
return raw.results.map(function(r) {
return {
source: 'semgrep',
id: r.check_id,
severity: normalizeSeverity(r.extra.severity),
file: r.path,
line: r.start.line,
message: r.extra.message,
category: mapToCategory(r.check_id),
fingerprint: generateFingerprint('semgrep', r.check_id, r.path, r.start.line)
};
});
}
function loadZapResults(filePath) {
var raw = JSON.parse(fs.readFileSync(filePath, 'utf8'));
var findings = [];
(raw.site || []).forEach(function(site) {
(site.alerts || []).forEach(function(alert) {
(alert.instances || []).forEach(function(instance) {
findings.push({
source: 'zap',
id: 'zap-' + alert.pluginid,
severity: normalizeSeverity(alert.riskdesc.split(' ')[0]),
url: instance.uri,
method: instance.method,
parameter: instance.param,
message: alert.name,
category: mapZapCategory(alert.pluginid),
fingerprint: generateFingerprint('zap', alert.pluginid, instance.uri, instance.param)
});
});
});
});
return findings;
}
function normalizeSeverity(raw) {
var upper = (raw || '').toUpperCase();
if (upper === 'ERROR' || upper === 'CRITICAL' || upper === 'HIGH') return 'HIGH';
if (upper === 'WARNING' || upper === 'MEDIUM') return 'MEDIUM';
return 'LOW';
}
function mapToCategory(checkId) {
if (checkId.match(/sql/i)) return 'injection';
if (checkId.match(/xss|cross.site/i)) return 'xss';
if (checkId.match(/redirect/i)) return 'redirect';
if (checkId.match(/auth|jwt|session/i)) return 'authentication';
if (checkId.match(/ssrf/i)) return 'ssrf';
if (checkId.match(/path|traversal/i)) return 'path-traversal';
return 'other';
}
function mapZapCategory(pluginId) {
var categories = {
'40012': 'xss', '40014': 'xss', '40016': 'xss',
'40018': 'injection', '40019': 'injection', '40020': 'injection',
'40032': 'ssrf',
'10202': 'redirect',
'10010': 'authentication'
};
return categories[String(pluginId)] || 'other';
}
function generateFingerprint(source, id, location, detail) {
var data = [source, id, location, detail].join(':');
return crypto.createHash('sha256').update(data).digest('hex').substring(0, 16);
}
function correlateFindings(sastFindings, dastFindings) {
var correlated = [];
var used = {};
// Group by category for correlation
var sastByCategory = {};
var dastByCategory = {};
sastFindings.forEach(function(f) {
if (!sastByCategory[f.category]) sastByCategory[f.category] = [];
sastByCategory[f.category].push(f);
});
dastFindings.forEach(function(f) {
if (!dastByCategory[f.category]) dastByCategory[f.category] = [];
dastByCategory[f.category].push(f);
});
// Find overlapping categories - these are confirmed vulnerabilities
Object.keys(sastByCategory).forEach(function(category) {
if (dastByCategory[category]) {
var sastItems = sastByCategory[category];
var dastItems = dastByCategory[category];
sastItems.forEach(function(sast) {
dastItems.forEach(function(dast) {
var correlationId = crypto.randomUUID();
correlated.push({
correlationId: correlationId,
confidence: 'CONFIRMED',
category: category,
severity: 'CRITICAL',
sast: sast,
dast: dast,
recommendation: 'Confirmed by both SAST and DAST. Prioritize immediate fix.'
});
used[sast.fingerprint] = true;
used[dast.fingerprint] = true;
});
});
}
});
// Remaining SAST-only findings
sastFindings.forEach(function(f) {
if (!used[f.fingerprint]) {
correlated.push({
correlationId: crypto.randomUUID(),
confidence: 'PROBABLE',
category: f.category,
severity: f.severity,
sast: f,
dast: null,
recommendation: 'Found by SAST only. Review for false positive.'
});
}
});
// Remaining DAST-only findings
dastFindings.forEach(function(f) {
if (!used[f.fingerprint]) {
correlated.push({
correlationId: crypto.randomUUID(),
confidence: 'CONFIRMED',
category: f.category,
severity: f.severity,
sast: null,
dast: f,
recommendation: 'Confirmed exploitable by DAST. Fix immediately.'
});
}
});
// Sort by priority: CONFIRMED+CRITICAL first
correlated.sort(function(a, b) {
var confOrder = { CONFIRMED: 0, PROBABLE: 1 };
var sevOrder = { CRITICAL: 0, HIGH: 1, MEDIUM: 2, LOW: 3 };
var confDiff = (confOrder[a.confidence] || 2) - (confOrder[b.confidence] || 2);
if (confDiff !== 0) return confDiff;
return (sevOrder[a.severity] || 4) - (sevOrder[b.severity] || 4);
});
return correlated;
}
// Main execution
var semgrepFile = process.argv[2] || 'semgrep-results.json';
var zapFile = process.argv[3] || 'zap-report.json';
var sastFindings = loadSemgrepResults(semgrepFile);
var dastFindings = loadZapResults(zapFile);
console.log('SAST findings: ' + sastFindings.length);
console.log('DAST findings: ' + dastFindings.length);
var results = correlateFindings(sastFindings, dastFindings);
console.log('\n=== Correlated Security Findings ===\n');
console.log('Confirmed (SAST+DAST): ' + results.filter(function(r) { return r.sast && r.dast; }).length);
console.log('SAST only: ' + results.filter(function(r) { return r.sast && !r.dast; }).length);
console.log('DAST only: ' + results.filter(function(r) { return !r.sast && r.dast; }).length);
console.log('Total unique findings: ' + results.length);
results.forEach(function(r, i) {
console.log('\n--- Finding #' + (i + 1) + ' ---');
console.log('Confidence: ' + r.confidence);
console.log('Severity: ' + r.severity);
console.log('Category: ' + r.category);
if (r.sast) console.log('Code: ' + r.sast.file + ':' + r.sast.line);
if (r.dast) console.log('Endpoint: ' + r.dast.method + ' ' + r.dast.url);
console.log('Action: ' + r.recommendation);
});
// Write correlated results
var outputFile = process.argv[4] || 'correlated-findings.json';
fs.writeFileSync(outputFile, JSON.stringify({
summary: {
total: results.length,
confirmed: results.filter(function(r) { return r.confidence === 'CONFIRMED'; }).length,
probable: results.filter(function(r) { return r.confidence === 'PROBABLE'; }).length,
bySeverity: {
critical: results.filter(function(r) { return r.severity === 'CRITICAL'; }).length,
high: results.filter(function(r) { return r.severity === 'HIGH'; }).length,
medium: results.filter(function(r) { return r.severity === 'MEDIUM'; }).length,
low: results.filter(function(r) { return r.severity === 'LOW'; }).length
}
},
findings: results,
generatedAt: new Date().toISOString()
}, null, 2));
console.log('\nCorrelated results written to: ' + outputFile);
Sample output from the correlation tool:
SAST findings: 14
DAST findings: 7
=== Correlated Security Findings ===
Confirmed (SAST+DAST): 3
SAST only: 11
DAST only: 4
Total unique findings: 18
--- Finding #1 ---
Confidence: CONFIRMED
Severity: CRITICAL
Category: injection
Code: src/routes/users.js:47
Endpoint: POST http://localhost:3000/api/users/search
Action: Confirmed by both SAST and DAST. Prioritize immediate fix.
--- Finding #2 ---
Confidence: CONFIRMED
Severity: CRITICAL
Category: xss
Code: src/views/profile.js:23
Endpoint: GET http://localhost:3000/profile?name=<script>alert(1)</script>
Action: Confirmed by both SAST and DAST. Prioritize immediate fix.
--- Finding #3 ---
Confidence: CONFIRMED
Severity: HIGH
Category: authentication
Endpoint: GET http://localhost:3000/api/admin/users
Action: Confirmed exploitable by DAST. Fix immediately.
False Positive Management
False positives are the biggest threat to your security scanning program. If developers start ignoring findings because half of them are noise, you have lost the battle. Here is a structured approach to managing false positives.
Create a .security-suppressions.json file tracked in source control:
{
"suppressions": [
{
"id": "suppress-001",
"tool": "semgrep",
"ruleId": "javascript.express.security.audit.express-open-redirect",
"file": "src/routes/auth.js",
"line": 45,
"reason": "Redirect target is validated against allowlist in middleware",
"approvedBy": "shane.larson",
"approvedDate": "2026-01-15",
"expiresDate": "2026-07-15"
},
{
"id": "suppress-002",
"tool": "zap",
"ruleId": "zap-10038",
"url": "/api/docs/*",
"reason": "Swagger UI content security policy is handled by CDN",
"approvedBy": "shane.larson",
"approvedDate": "2026-01-20",
"expiresDate": "2026-04-20"
}
]
}
Build a suppression filter into your pipeline:
// filter-suppressions.js
var fs = require('fs');

function filterSuppressions(findings, suppressionsFile) {
  var suppressions = JSON.parse(fs.readFileSync(suppressionsFile, 'utf8')).suppressions;
  var now = new Date();
  // Remove expired suppressions
  var active = suppressions.filter(function(s) {
    return new Date(s.expiresDate) > now;
  });
  var expired = suppressions.length - active.length;
  if (expired > 0) {
    console.log('WARNING: ' + expired + ' suppression(s) have expired and need review');
  }
  var filtered = findings.filter(function(finding) {
    var suppressed = active.some(function(s) {
      if (s.tool !== finding.source) return false;
      if (s.ruleId !== finding.id) return false;
      if (s.file && finding.file && s.file !== finding.file) return false;
      if (s.url && finding.url && !finding.url.match(new RegExp(s.url.replace(/\*/g, '.*')))) return false;
      return true;
    });
    if (suppressed) {
      console.log('Suppressed: [' + finding.source + '] ' + finding.id + ' in ' + (finding.file || finding.url));
    }
    return !suppressed;
  });
  console.log('Findings before suppression: ' + findings.length);
  console.log('Active suppressions applied: ' + active.length);
  console.log('Findings after suppression: ' + filtered.length);
  return filtered;
}

// CLI entry point so the pipeline's "Apply suppressions" step can run this directly
// against correlated-findings.json
if (require.main === module) {
  var correlatedFile = process.argv[2] || 'correlated-findings.json';
  var suppressionsFile = process.argv[3] || '.security-suppressions.json';
  var data = JSON.parse(fs.readFileSync(correlatedFile, 'utf8'));
  // Filter the flattened SAST/DAST parts once, then keep any correlated finding
  // that still has at least one unsuppressed part
  var allParts = [];
  data.findings.forEach(function(f) {
    if (f.sast) allParts.push(f.sast);
    if (f.dast) allParts.push(f.dast);
  });
  var keptParts = filterSuppressions(allParts, suppressionsFile);
  data.findings = data.findings.filter(function(f) {
    return (f.sast && keptParts.indexOf(f.sast) !== -1) || (f.dast && keptParts.indexOf(f.dast) !== -1);
  });
  data.summary.total = data.findings.length;
  data.summary.bySeverity = {
    critical: data.findings.filter(function(f) { return f.severity === 'CRITICAL'; }).length,
    high: data.findings.filter(function(f) { return f.severity === 'HIGH'; }).length,
    medium: data.findings.filter(function(f) { return f.severity === 'MEDIUM'; }).length,
    low: data.findings.filter(function(f) { return f.severity === 'LOW'; }).length
  };
  fs.writeFileSync(correlatedFile, JSON.stringify(data, null, 2));
  console.log('Suppression-filtered findings written back to ' + correlatedFile);
}

module.exports = { filterSuppressions: filterSuppressions };
Important rules for suppression management:
- Every suppression requires a written justification and an expiration date
- Suppressions expire after 6 months maximum and must be re-reviewed
- Suppressions are tracked in source control so changes go through code review
- Never suppress an entire rule globally; suppress specific file+rule combinations
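A small validation step can enforce these rules mechanically on every pull request. The following is an illustrative helper (validate-suppressions.js is not part of the tooling above):
// validate-suppressions.js - fail the build if any suppression breaks the rules above
var fs = require('fs');
var suppressions = JSON.parse(fs.readFileSync('.security-suppressions.json', 'utf8')).suppressions;
var problems = [];
suppressions.forEach(function(s) {
  // Every suppression needs a justification, an owner, and an expiration date
  ['id', 'tool', 'ruleId', 'reason', 'approvedBy', 'approvedDate', 'expiresDate'].forEach(function(field) {
    if (!s[field]) problems.push(s.id + ': missing required field "' + field + '"');
  });
  // No global suppressions: scope to a specific file or URL
  if (!s.file && !s.url) problems.push(s.id + ': must be scoped to a file or URL, not an entire rule');
  if (s.expiresDate && s.approvedDate) {
    var maxExpiry = new Date(s.approvedDate);
    maxExpiry.setMonth(maxExpiry.getMonth() + 6);
    // Six-month maximum lifetime, and expired entries must be re-reviewed
    if (new Date(s.expiresDate) > maxExpiry) problems.push(s.id + ': expiration exceeds the six-month maximum');
    if (new Date(s.expiresDate) < new Date()) problems.push(s.id + ': suppression has expired and needs re-review');
  }
});
if (problems.length > 0) {
  problems.forEach(function(p) { console.log('##vso[task.logissue type=error]' + p); });
  process.exit(1);
}
console.log('All ' + suppressions.length + ' suppression(s) pass policy checks');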
Severity-Based Pipeline Gating
Not every finding should break the build. You need a gating strategy that blocks deployments for critical issues while letting teams triage lower-severity findings on their own schedule.
// gate-evaluation.js
var fs = require('fs');
var GATE_POLICY = {
// Block the pipeline if any of these thresholds are exceeded
maxCritical: 0,
maxHigh: 0,
maxMedium: 10,
maxLow: 50,
// Confirmed DAST findings always block regardless of severity
blockOnConfirmedDast: true
};
function evaluateGate(correlatedFile, policy) {
var data = JSON.parse(fs.readFileSync(correlatedFile, 'utf8'));
var findings = data.findings;
var violations = [];
// Check severity thresholds
var counts = data.summary.bySeverity;
if (counts.critical > policy.maxCritical) {
violations.push('CRITICAL findings: ' + counts.critical + ' (max: ' + policy.maxCritical + ')');
}
if (counts.high > policy.maxHigh) {
violations.push('HIGH findings: ' + counts.high + ' (max: ' + policy.maxHigh + ')');
}
  if (counts.medium > policy.maxMedium) {
    violations.push('MEDIUM findings: ' + counts.medium + ' (max: ' + policy.maxMedium + ')');
  }
  if (counts.low > policy.maxLow) {
    violations.push('LOW findings: ' + counts.low + ' (max: ' + policy.maxLow + ')');
  }
// Check for confirmed DAST findings
if (policy.blockOnConfirmedDast) {
var confirmedDast = findings.filter(function(f) {
return f.dast && f.confidence === 'CONFIRMED';
});
if (confirmedDast.length > 0) {
violations.push('Confirmed DAST vulnerabilities: ' + confirmedDast.length);
}
}
if (violations.length > 0) {
console.log('##vso[task.logissue type=error]Security gate FAILED');
violations.forEach(function(v) {
console.log('##vso[task.logissue type=error] - ' + v);
});
console.log('\nBlocked findings:');
findings.filter(function(f) {
return f.severity === 'CRITICAL' || f.severity === 'HIGH' || (f.dast && f.confidence === 'CONFIRMED');
}).forEach(function(f) {
console.log(' [' + f.severity + '] ' + f.category + ': ' + (f.sast ? f.sast.file + ':' + f.sast.line : f.dast.url));
});
process.exit(1);
}
console.log('Security gate PASSED');
console.log('Findings within acceptable thresholds');
}
evaluateGate(process.argv[2] || 'correlated-findings.json', GATE_POLICY);
Scan Result Tracking Over Time
Tracking findings over time is essential for measuring your security posture and demonstrating improvement. Store scan summaries in your database or as pipeline artifacts that can be queried.
// track-results.js
var fs = require('fs');
var path = require('path');
function trackResults(correlatedFile, historyFile) {
var data = JSON.parse(fs.readFileSync(correlatedFile, 'utf8'));
var history = [];
if (fs.existsSync(historyFile)) {
history = JSON.parse(fs.readFileSync(historyFile, 'utf8'));
}
var entry = {
timestamp: new Date().toISOString(),
buildId: process.env.BUILD_BUILDID || 'local',
branch: process.env.BUILD_SOURCEBRANCH || 'unknown',
commit: process.env.BUILD_SOURCEVERSION || 'unknown',
summary: data.summary,
newFindings: 0,
resolvedFindings: 0
};
// Compare with previous scan to find new and resolved findings
if (history.length > 0) {
var previous = history[history.length - 1];
var prevFingerprints = {};
var currFingerprints = {};
if (previous.fingerprints) {
previous.fingerprints.forEach(function(fp) { prevFingerprints[fp] = true; });
}
var currentFingerprints = data.findings.map(function(f) {
return f.sast ? f.sast.fingerprint : f.dast.fingerprint;
});
currentFingerprints.forEach(function(fp) { currFingerprints[fp] = true; });
entry.newFindings = currentFingerprints.filter(function(fp) { return !prevFingerprints[fp]; }).length;
entry.resolvedFindings = Object.keys(prevFingerprints).filter(function(fp) { return !currFingerprints[fp]; }).length;
entry.fingerprints = currentFingerprints;
} else {
entry.fingerprints = data.findings.map(function(f) {
return f.sast ? f.sast.fingerprint : f.dast.fingerprint;
});
entry.newFindings = entry.fingerprints.length;
}
history.push(entry);
fs.writeFileSync(historyFile, JSON.stringify(history, null, 2));
console.log('Scan tracked. New: ' + entry.newFindings + ', Resolved: ' + entry.resolvedFindings);
console.log('Total scans in history: ' + history.length);
// Trend analysis
if (history.length >= 5) {
var recent = history.slice(-5);
var trend = recent.map(function(h) { return h.summary.total; });
var direction = trend[trend.length - 1] < trend[0] ? 'IMPROVING' : 'DEGRADING';
console.log('5-scan trend: ' + trend.join(' -> ') + ' (' + direction + ')');
}
}
trackResults(
process.argv[2] || 'correlated-findings.json',
process.argv[3] || 'scan-history.json'
);
Integrating with Azure Boards
When your pipeline finds vulnerabilities, automatically create Azure Boards work items so nothing falls through the cracks:
- stage: TrackFindings
displayName: "Create Work Items for Findings"
dependsOn: SecurityGate
condition: always()
jobs:
- job: CreateWorkItems
pool:
vmImage: "ubuntu-latest"
steps:
- task: DownloadBuildArtifacts@0
inputs:
artifactName: "security-correlated"
- script: |
node create-work-items.js \
"$(System.AccessToken)" \
"$(System.CollectionUri)" \
"$(System.TeamProject)" \
"$(Build.ArtifactStagingDirectory)/security-correlated/correlated-findings.json"
displayName: "Create Azure Boards work items"
// create-work-items.js
var https = require('https');
var fs = require('fs');
var url = require('url');
var token = process.argv[2];
var orgUrl = process.argv[3];
var project = process.argv[4];
var findingsFile = process.argv[5];
var data = JSON.parse(fs.readFileSync(findingsFile, 'utf8'));
function createWorkItem(finding, callback) {
var title = '[Security] ' + finding.severity + ' - ' + finding.category;
if (finding.sast) {
title += ' in ' + finding.sast.file;
}
var description = '<h3>Security Finding</h3>';
description += '<p><strong>Confidence:</strong> ' + finding.confidence + '</p>';
description += '<p><strong>Severity:</strong> ' + finding.severity + '</p>';
description += '<p><strong>Category:</strong> ' + finding.category + '</p>';
if (finding.sast) {
description += '<p><strong>File:</strong> ' + finding.sast.file + ':' + finding.sast.line + '</p>';
description += '<p><strong>SAST Rule:</strong> ' + finding.sast.id + '</p>';
}
if (finding.dast) {
description += '<p><strong>Endpoint:</strong> ' + finding.dast.method + ' ' + finding.dast.url + '</p>';
description += '<p><strong>Parameter:</strong> ' + (finding.dast.parameter || 'N/A') + '</p>';
}
description += '<p><strong>Recommendation:</strong> ' + finding.recommendation + '</p>';
var patchDoc = [
{ op: 'add', path: '/fields/System.Title', value: title },
{ op: 'add', path: '/fields/System.Description', value: description },
{ op: 'add', path: '/fields/Microsoft.VSTS.Common.Priority', value: finding.severity === 'CRITICAL' ? 1 : 2 },
{ op: 'add', path: '/fields/System.Tags', value: 'security;automated;' + finding.category }
];
var parsed = url.parse(orgUrl);
var options = {
hostname: parsed.hostname,
    // System.CollectionUri already contains the organization segment (e.g. https://dev.azure.com/{org}/),
    // so build the path from its pathname rather than from the site root
    path: parsed.pathname.replace(/\/?$/, '/') + encodeURIComponent(project) + '/_apis/wit/workitems/$Bug?api-version=7.0',
method: 'POST',
headers: {
'Content-Type': 'application/json-patch+json',
'Authorization': 'Basic ' + Buffer.from(':' + token).toString('base64')
}
};
var req = https.request(options, function(res) {
var body = '';
res.on('data', function(chunk) { body += chunk; });
res.on('end', function() {
if (res.statusCode === 200) {
var item = JSON.parse(body);
console.log('Created work item #' + item.id + ': ' + title);
} else {
console.error('Failed to create work item: ' + res.statusCode + ' ' + body);
}
callback();
});
});
req.write(JSON.stringify(patchDoc));
req.end();
}
// Only create work items for HIGH and CRITICAL findings
var actionable = data.findings.filter(function(f) {
return f.severity === 'CRITICAL' || f.severity === 'HIGH';
});
console.log('Creating ' + actionable.length + ' work items for high-severity findings');
var index = 0;
function next() {
if (index < actionable.length) {
createWorkItem(actionable[index], function() {
index++;
next();
});
}
}
next();
Shift-Left Security Culture
Tooling only works if your team actually uses it. Here are the practices that make security scanning stick:
Pre-commit hooks. Run the fastest SAST checks before code even reaches the pipeline. Install eslint-plugin-security as a pre-commit hook through Husky:
{
"husky": {
"hooks": {
"pre-commit": "npx eslint --plugin security --rule 'security/detect-eval-with-expression: error' --rule 'security/detect-child-process: error' $(git diff --cached --name-only --diff-filter=ACM | grep '.js$')"
}
}
}
IDE integration. Semgrep has VS Code and IntelliJ plugins. When developers see security findings in their editor as they type, they fix issues before committing. This is cheaper than fixing them in the pipeline.
Security champions. Designate one developer per team who receives security training, reviews suppressions, and triages findings. This person is not a security engineer; they are a developer who bridges the gap.
Blameless triage. Never attach a developer's name to a vulnerability count. Track findings by component, not by author. The goal is to fix the code, not shame the developer.
Complete Working Example: Full Pipeline
Here is the complete Azure DevOps pipeline combining everything above:
trigger:
branches:
include:
- main
- release/*
pr:
branches:
include:
- main
variables:
nodeVersion: "18.x"
testAppPort: 3000
stages:
# Stage 1: Build and Unit Test
- stage: Build
jobs:
- job: BuildAndTest
pool:
vmImage: "ubuntu-latest"
steps:
- task: NodeTool@0
inputs:
versionSpec: $(nodeVersion)
- script: npm ci
- script: npm test
- script: npm audit --audit-level=high || true
displayName: "npm audit"
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: "$(System.DefaultWorkingDirectory)"
artifactName: "app"
# Stage 2: SAST
- stage: SAST
dependsOn: Build
jobs:
- job: ESLintSecurity
pool:
vmImage: "ubuntu-latest"
steps:
- task: DownloadBuildArtifacts@0
inputs:
artifactName: "app"
- script: |
cd $(Build.ArtifactStagingDirectory)/app
npx eslint . --plugin security --format json \
--output-file $(Build.ArtifactStagingDirectory)/eslint-security.json || true
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: "$(Build.ArtifactStagingDirectory)/eslint-security.json"
artifactName: "sast-eslint"
- job: SemgrepScan
pool:
vmImage: "ubuntu-latest"
steps:
- script: |
pip install semgrep
semgrep --config .semgrep.yml --config "p/nodejs" --config "p/owasp-top-ten" \
--json --output $(Build.ArtifactStagingDirectory)/semgrep-results.json . || true
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: "$(Build.ArtifactStagingDirectory)/semgrep-results.json"
artifactName: "sast-semgrep"
condition: always()
# Stage 3: DAST
- stage: DAST
dependsOn: Build
jobs:
- job: DynamicScan
pool:
vmImage: "ubuntu-latest"
steps:
- task: DownloadBuildArtifacts@0
inputs:
artifactName: "app"
- script: |
cd $(Build.ArtifactStagingDirectory)/app
npm ci
NODE_ENV=test PORT=$(testAppPort) npm start &
sleep 15
curl -f http://localhost:$(testAppPort)/health
displayName: "Deploy test environment"
- script: |
docker run --rm --network host \
-v $(pwd)/zap-automation.yaml:/zap/wrk/automation.yaml \
-v $(Build.ArtifactStagingDirectory):/zap/reports \
ghcr.io/zaproxy/zaproxy:stable \
zap.sh -cmd -autorun /zap/wrk/automation.yaml || true
displayName: "OWASP ZAP scan"
- script: |
docker run --rm --network host \
-v $(Build.ArtifactStagingDirectory):/output \
projectdiscovery/nuclei:latest \
-u http://localhost:$(testAppPort) \
-t misconfiguration/ -t exposures/ \
-severity critical,high \
-json-export /output/nuclei-results.json || true
displayName: "Nuclei scan"
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: "$(Build.ArtifactStagingDirectory)"
artifactName: "dast-results"
condition: always()
# Stage 4: Correlate and Gate
- stage: SecurityGate
dependsOn:
- SAST
- DAST
jobs:
- job: CorrelateAndGate
pool:
vmImage: "ubuntu-latest"
steps:
- task: DownloadBuildArtifacts@0
inputs:
artifactName: "sast-semgrep"
- task: DownloadBuildArtifacts@0
inputs:
artifactName: "dast-results"
- script: |
node correlate-findings.js \
$(Build.ArtifactStagingDirectory)/sast-semgrep/semgrep-results.json \
$(Build.ArtifactStagingDirectory)/dast-results/zap-report.json \
$(Build.ArtifactStagingDirectory)/correlated-findings.json
displayName: "Correlate SAST and DAST findings"
- script: |
node filter-suppressions.js \
$(Build.ArtifactStagingDirectory)/correlated-findings.json
displayName: "Apply suppressions"
- script: |
node gate-evaluation.js \
$(Build.ArtifactStagingDirectory)/correlated-findings.json
displayName: "Evaluate security gate"
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: "$(Build.ArtifactStagingDirectory)/correlated-findings.json"
artifactName: "security-correlated"
condition: always()
# Stage 5: Deploy (only if gate passes)
- stage: Deploy
dependsOn: SecurityGate
condition: succeeded()
jobs:
- job: DeployProduction
pool:
vmImage: "ubuntu-latest"
steps:
- script: echo "Deploying to production..."
displayName: "Deploy"
Common Issues and Troubleshooting
1. ZAP Container Cannot Reach Application
ERROR: Failed to connect to localhost:3000
HttpHostConnectException: Connect to localhost:3000 refused
This happens when ZAP runs in a Docker container and tries to reach localhost, which resolves to the container itself. On Linux agents (including the Microsoft-hosted ubuntu images), fix it with --network host. Docker Desktop on Mac and Windows does not support host networking, so point the scan at host.docker.internal instead. On Linux without host networking, you can substitute the Docker bridge IP:
- script: |
DOCKER_HOST_IP=$(ip route | grep docker0 | awk '{print $9}')
sed -i "s/localhost/$DOCKER_HOST_IP/g" zap-automation.yaml
docker run --rm -v $(pwd)/zap-automation.yaml:/zap/wrk/automation.yaml ...
2. Semgrep Out of Memory on Large Codebases
semgrep: error: Segmentation fault (core dumped)
# or
MemoryError: Unable to allocate memory
Semgrep loads the entire AST into memory. For large Node.js monorepos, exclude node_modules and test fixtures:
semgrep --config "p/nodejs" \
--exclude "node_modules" \
--exclude "*.test.js" \
--exclude "*.spec.js" \
--exclude "fixtures" \
--max-memory 4096 \
--timeout 300 \
.
3. ESLint Security Plugin False Positives on Array Bracket Notation
warning Generic Object Injection Sink security/detect-object-injection
var value = config[key];
The detect-object-injection rule flags every bracket notation access, which produces massive noise. Either disable it or use inline suppression for verified-safe cases:
// eslint-disable-next-line security/detect-object-injection
var value = config[key];
For a project-wide fix, set it to off in .eslintrc.json and rely on Semgrep's more precise taint tracking to catch actual injection vulnerabilities.
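The project-wide version is a one-line change in the rules block from earlier:
{
  "rules": {
    "security/detect-object-injection": "off"
  }
}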
4. ZAP Active Scan Takes Too Long in Pipeline
##vso[task.logissue type=warning]Task exceeded timeout (60 minutes)
ZAP's active scanner can run for hours on large applications. Constrain it by limiting the scan policy, reducing attack strength, and setting hard time limits:
# In zap-automation.yaml
- type: activeScan
parameters:
maxScanDurationInMins: 15
maxRuleDurationInMins: 2
threadPerHost: 2
policyDefinition:
defaultStrength: medium
defaultThreshold: medium
For pull request pipelines, use ZAP's baseline scan instead of the full active scan. The baseline scan only checks for passive findings and completes in under 5 minutes:
docker run --rm --network host ghcr.io/zaproxy/zaproxy:stable \
zap-baseline.py -t http://localhost:3000 -J baseline-report.json
5. Pipeline Fails with Azure Boards 403 on Work Item Creation
Failed to create work item: 403 {"message":"The current user does not have permissions to create work items in this project."}
The identity behind $(System.AccessToken) is the project build service account, and a 403 means it lacks work item permissions. In Azure DevOps, go to Project Settings > Permissions, locate the "{Project} Build Service" identity, and grant it work item access (for example, add it to the Contributors group or grant "Edit work items in this node" on the relevant area path). If "Limit job authorization scope to current project" is enabled, confirm you granted the permission to the identity the pipeline actually runs as. Finally, make sure the script receives the token: passing $(System.AccessToken) as an argument works with macro syntax, or you can map it into the environment explicitly:
jobs:
- job: CreateWorkItems
  pool:
    vmImage: "ubuntu-latest"
  steps:
  - script: node create-work-items.js "$SYSTEM_ACCESSTOKEN" "$(System.CollectionUri)" "$(System.TeamProject)" correlated-findings.json
    displayName: "Create Azure Boards work items"
    env:
      # Secret values are not exposed as environment variables unless mapped explicitly
      SYSTEM_ACCESSTOKEN: $(System.AccessToken)
For classic (non-YAML) pipelines, also enable "Allow scripts to access the OAuth token" in the agent job options.
Best Practices
Run SAST on every commit, DAST on every merge to main. SAST is fast enough for every push. DAST takes longer and benefits from a stable codebase. Running DAST on feature branches adds 20-30 minutes to every pipeline without proportional value.
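One way to express that split in the full pipeline above is to condition the DAST stage on the target branch (a sketch; adjust the branch filter to your setup):
- stage: DAST
  dependsOn: Build
  # SAST still runs for every build; DAST only for builds of main
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))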
Never gate on SAST alone for blocking deployments. SAST false positive rates make it unreliable as a hard gate. Use SAST findings as warnings and only gate on confirmed DAST findings or SAST findings that are corroborated by DAST.
Treat your suppression file as security-critical code. Require at least two reviewers for any changes to .security-suppressions.json. A casual suppression can hide a real vulnerability for months.
Set suppression expiration dates aggressively. Three months is reasonable for most suppressions. If a finding is still suppressed after two renewals, either fix the underlying issue or document why it is permanently acceptable.
Separate scan tooling from scan policy. Keep your ZAP and Semgrep configurations in the repository, but define gating thresholds in pipeline variables. This lets security teams adjust thresholds without modifying application code.
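A minimal sketch of that separation: keep the thresholds as pipeline variables and hand them to the gate step through the environment, so gate-evaluation.js would read process.env values instead of the hardcoded GATE_POLICY (variable names here are illustrative):
variables:
  maxCriticalFindings: 0
  maxHighFindings: 0
  maxMediumFindings: 10

steps:
- script: node gate-evaluation.js $(Build.ArtifactStagingDirectory)/correlated-findings.json
  displayName: "Evaluate security gate"
  env:
    MAX_CRITICAL: $(maxCriticalFindings)
    MAX_HIGH: $(maxHighFindings)
    MAX_MEDIUM: $(maxMediumFindings)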
Baseline your findings before enforcing gates. When adding security scanning to an existing project, start with all gates in warning-only mode. Triage existing findings over 2-4 sprints, suppress known false positives, fix genuine issues, and then enable blocking gates. Turning on hard gates day one generates developer backlash and suppression abuse.
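In Azure Pipelines, warning-only mode is a single property on the gate step; the step reports "succeeded with issues" instead of failing the stage:
- script: node gate-evaluation.js $(Build.ArtifactStagingDirectory)/correlated-findings.json
  displayName: "Evaluate security gate (warning-only during baseline period)"
  # Flip this to false (or remove it) once the existing findings are triaged
  continueOnError: true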
Run DAST against a production-like environment, not a mock. DAST against a test environment with stubbed services misses configuration-level vulnerabilities. Use a staging environment with real infrastructure but synthetic data.
Track mean-time-to-remediate (MTTR) by severity. Your target should be under 24 hours for critical, under 1 week for high, and under 1 sprint for medium. If MTTR is climbing, you either have too many findings (tune your tools) or not enough developer time allocated to security fixes.
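A rough way to compute this is from closed security work items exported as JSON. The sketch below assumes a file with severity, createdDate, and closedDate fields; neither the file nor mttr.js is part of the tooling above:
// mttr.js - illustrative only; reads exported closed security work items
var fs = require('fs');
var items = JSON.parse(fs.readFileSync(process.argv[2] || 'closed-security-items.json', 'utf8'));
var bySeverity = {};
items.forEach(function(item) {
  // Hours between creation and closure
  var hours = (new Date(item.closedDate) - new Date(item.createdDate)) / 36e5;
  if (!bySeverity[item.severity]) bySeverity[item.severity] = [];
  bySeverity[item.severity].push(hours);
});
Object.keys(bySeverity).forEach(function(sev) {
  var list = bySeverity[sev];
  var mean = list.reduce(function(a, b) { return a + b; }, 0) / list.length;
  console.log(sev + ': MTTR ' + mean.toFixed(1) + ' hours across ' + list.length + ' finding(s)');
});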