Dependency Vulnerability Scanning
Complete guide to dependency vulnerability scanning in Azure DevOps pipelines, covering npm audit, Snyk, OWASP Dependency-Check, license compliance, automated remediation workflows, and building vulnerability management dashboards.
Overview
Every modern application depends on hundreds of open source packages, and each one of those packages is a potential entry point for attackers. Dependency vulnerability scanning catches known vulnerabilities in your supply chain before they reach production. I have watched teams discover critical CVEs in their dependency trees only after a security incident — not because scanners did not exist, but because nobody bothered to wire them into the pipeline.
Prerequisites
- Azure DevOps project with Azure Pipelines configured
- Node.js 16 or later for build agents
- NPM, Yarn, or pnpm as your package manager
- Snyk account (free tier available) for advanced scanning
- Basic understanding of CVE scoring (CVSS) and vulnerability databases
- Personal Access Token for Azure DevOps API integration
Understanding Dependency Vulnerabilities
The Supply Chain Problem
A typical Express.js application installs 300-500 packages through transitive dependencies. Running npm ls --all | wc -l reveals the true scope:
npm ls --all 2>/dev/null | wc -l
# Output: 487
Each of those 487 packages has its own maintainers, release cadence, and security history. When one of them has a vulnerability, you inherit it.
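That fan-out can be quantified without any external tooling. The sketch below walks an `npm ls --json` style tree (a small hand-built sample here, not real audit output) and splits the count into direct and transitive dependencies:

```javascript
// Count direct and transitive dependencies in an `npm ls --json` style tree.
function countDeps(tree) {
  var direct = Object.keys(tree.dependencies || {}).length;
  var total = 0;
  (function walk(node) {
    Object.keys(node.dependencies || {}).forEach(function(name) {
      total++;
      walk(node.dependencies[name]);
    });
  })(tree);
  return { direct: direct, transitive: total - direct };
}

// Tiny hand-built sample: 2 direct dependencies, 3 transitive.
var sample = {
  dependencies: {
    express: { dependencies: { accepts: { dependencies: { "mime-types": {} } } } },
    debug: { dependencies: { ms: {} } }
  }
};
console.log(countDeps(sample)); // { direct: 2, transitive: 3 }
```

On a real project, pipe `npm ls --all --json` into this function and the transitive count will usually dwarf the direct one.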
CVSS Scoring and Severity Levels
Critical (9.0-10.0): Remote code execution, no authentication required
High (7.0-8.9): Significant impact, may require some conditions
Medium (4.0-6.9): Limited impact or requires specific conditions
Low (0.1-3.9): Minimal impact, hard to exploit
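These bands are simple to encode. A small helper (purely illustrative, using the npm-style severity labels this guide works with elsewhere) maps a CVSS base score to a label:

```javascript
// Map a CVSS base score to npm-style severity labels.
function severityFromCvss(score) {
  if (score >= 9.0) return "critical";
  if (score >= 7.0) return "high";
  if (score >= 4.0) return "moderate";
  if (score > 0) return "low";
  return "info";
}

console.log(severityFromCvss(9.8)); // critical
console.log(severityFromCvss(5.3)); // moderate
```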
Types of Dependency Vulnerabilities
// Common vulnerability types in Node.js dependencies:
// 1. Prototype Pollution - attacker modifies Object.prototype
// Example: lodash < 4.17.12
var payload = JSON.parse('{"__proto__": {"admin": true}}');
// 2. Regular Expression Denial of Service (ReDoS)
// Example: validator < 13.7.0
var evilInput = "a".repeat(100000) + "!";
// 3. Path Traversal - accessing files outside intended directory
// Example: serve-static < 1.14.2
var maliciousPath = "../../../../etc/passwd";
// 4. Cross-Site Scripting (XSS) through template injection
// Example: pug < 3.0.1
var userInput = "#{global.process.mainModule.require('child_process').execSync('whoami')}";
// 5. SQL Injection through ORM bypass
// Example: sequelize < 6.28.0
var query = { where: { id: { toString: function() { return "1; DROP TABLE users;--"; } } } };
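To make the first of those concrete, here is a minimal unsafe deep merge of the kind the affected lodash versions shipped. This is an illustrative sketch, not lodash's actual code:

```javascript
// UNSAFE deep merge: recursing into a "__proto__" key walks up to
// Object.prototype and writes attacker-controlled properties onto it.
function unsafeMerge(target, source) {
  for (var key in source) {
    if (typeof source[key] === "object" && source[key] !== null) {
      if (!target[key]) target[key] = {};
      unsafeMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

var payload = JSON.parse('{"__proto__": {"admin": true}}');
unsafeMerge({}, payload);

// Every object in the process now inherits admin: true.
console.log({}.admin); // true
```

A patched merge skips `__proto__`, `constructor`, and `prototype` keys, which is essentially what the lodash fix does.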
npm audit: Built-In Scanning
npm audit is the simplest starting point. It is built into npm and requires no additional tools.
Basic Pipeline Integration
# azure-pipelines.yml
stages:
- stage: SecurityScan
jobs:
- job: NpmAudit
pool:
vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
inputs:
versionSpec: '20.x'
- script: npm ci --ignore-scripts
displayName: 'Install dependencies (skip scripts for safety)'
- script: |
npm audit --json > $(Build.ArtifactStagingDirectory)/npm-audit.json || true
displayName: 'Run npm audit'
- script: |
node -e "
var fs = require('fs');
var report = JSON.parse(fs.readFileSync('$(Build.ArtifactStagingDirectory)/npm-audit.json', 'utf8'));
var vulns = report.vulnerabilities || {};
var counts = { critical: 0, high: 0, moderate: 0, low: 0 };
var details = [];
Object.keys(vulns).forEach(function(pkg) {
var v = vulns[pkg];
counts[v.severity] = (counts[v.severity] || 0) + 1;
if (v.severity === 'critical' || v.severity === 'high') {
details.push({
package: pkg,
severity: v.severity,
title: v.via && v.via[0] && v.via[0].title ? v.via[0].title : 'Unknown',
range: v.range || 'unknown',
fixAvailable: v.fixAvailable ? true : false
});
}
});
console.log('npm audit results:');
console.log(' Critical: ' + counts.critical);
console.log(' High: ' + counts.high);
console.log(' Moderate: ' + counts.moderate);
console.log(' Low: ' + counts.low);
if (details.length > 0) {
console.log('\\nCritical/High findings:');
details.forEach(function(d) {
console.log(' [' + d.severity.toUpperCase() + '] ' + d.package + ': ' + d.title);
console.log(' Affected: ' + d.range + ' | Fix available: ' + d.fixAvailable);
});
}
if (counts.critical > 0) {
console.log('##vso[task.complete result=Failed;]Critical vulnerabilities found');
process.exit(1);
}
"
displayName: 'Evaluate audit results'
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Build.ArtifactStagingDirectory)/npm-audit.json'
artifactName: 'npm-audit-report'
condition: always()
Production-Only Scanning
Dev dependencies do not ship to production. Scanning only production dependencies reduces noise:
var exec = require("child_process").execSync;
// Run npm audit with the given flags and return the vulnerabilities map
function runAudit(flags) {
  var output;
  try {
    output = exec("npm audit " + flags + " --json", { encoding: "utf8" });
  } catch (e) {
    output = e.stdout; // npm audit exits non-zero when vulns are found
  }
  return JSON.parse(output).vulnerabilities || {};
}
// --omit=dev restricts the audit to the production dependency graph,
// so every finding it reports affects what actually ships. Comparing
// against a full audit shows how much noise comes from dev tooling.
var prodVulns = runAudit("--omit=dev");
var allVulns = runAudit("");
console.log("Production vulnerabilities: " + Object.keys(prodVulns).length);
console.log("Dev-only vulnerabilities: " + (Object.keys(allVulns).length - Object.keys(prodVulns).length));
Object.keys(prodVulns).forEach(function(pkg) {
  console.log("  " + pkg + " (" + prodVulns[pkg].severity + "): " + prodVulns[pkg].range);
});
Snyk: Advanced Dependency Analysis
Snyk provides deeper analysis than npm audit — reachability analysis, fix PRs, license compliance, and a vulnerability database that often has entries before the npm advisory database.
Pipeline Integration
steps:
- script: npm install -g snyk
displayName: 'Install Snyk CLI'
- script: snyk auth $(SNYK_TOKEN)
displayName: 'Authenticate Snyk'
env:
SNYK_TOKEN: $(SNYK_TOKEN)
- script: |
snyk test \
--json \
--severity-threshold=medium \
--project-name="$(Build.Repository.Name)" \
> $(Build.ArtifactStagingDirectory)/snyk-results.json || true
displayName: 'Run Snyk test'
env:
SNYK_TOKEN: $(SNYK_TOKEN)
- script: |
snyk monitor \
--project-name="$(Build.Repository.Name)" \
--org=$(SNYK_ORG)
displayName: 'Upload to Snyk dashboard'
env:
SNYK_TOKEN: $(SNYK_TOKEN)
Snyk Results Processing
var fs = require("fs");
function processSnykResults(resultsFile) {
var results = JSON.parse(fs.readFileSync(resultsFile, "utf8"));
if (!results.vulnerabilities) {
console.log("No vulnerabilities found");
return { pass: true, total: 0 };
}
var byType = {};
var fixable = 0;
var unfixable = 0;
var exploitable = [];
results.vulnerabilities.forEach(function(v) {
byType[v.severity] = (byType[v.severity] || 0) + 1;
if (v.isUpgradable || v.isPatchable) {
fixable++;
} else {
unfixable++;
}
// Check for known exploits
if (v.exploit && v.exploit !== "Not Defined") {
exploitable.push({
package: v.packageName,
version: v.version,
severity: v.severity,
exploit: v.exploit,
title: v.title
});
}
});
console.log("Snyk Vulnerability Report:");
console.log(" Total: " + results.vulnerabilities.length);
Object.keys(byType).forEach(function(sev) {
console.log(" " + sev + ": " + byType[sev]);
});
console.log(" Fixable: " + fixable);
console.log(" No fix available: " + unfixable);
if (exploitable.length > 0) {
console.log("\n EXPLOITABLE VULNERABILITIES:");
exploitable.forEach(function(e) {
console.log(" [" + e.severity.toUpperCase() + "] " + e.package + "@" + e.version);
console.log(" " + e.title);
console.log(" Exploit maturity: " + e.exploit);
});
}
// Show upgrade paths for fixable issues
var upgrades = {};
results.vulnerabilities.forEach(function(v) {
if (v.isUpgradable && v.upgradePath && v.upgradePath.length > 1) {
var directDep = v.upgradePath[1];
if (!upgrades[directDep]) {
upgrades[directDep] = [];
}
upgrades[directDep].push(v.title);
}
});
if (Object.keys(upgrades).length > 0) {
console.log("\n Recommended Upgrades:");
Object.keys(upgrades).forEach(function(pkg) {
console.log(" " + pkg + " (fixes " + upgrades[pkg].length + " issue(s))");
});
}
var critical = byType.critical || 0;
var high = byType.high || 0;
return {
pass: critical === 0 && high <= 3,
total: results.vulnerabilities.length,
critical: critical,
high: high,
exploitable: exploitable.length
};
}
var result = processSnykResults(process.argv[2] || "snyk-results.json");
if (!result.pass) {
console.log("\n##vso[task.complete result=Failed;]Vulnerability threshold exceeded");
process.exit(1);
}
OWASP Dependency-Check
OWASP Dependency-Check is language-agnostic and works with Java, .NET, Node.js, Python, and more. It uses the National Vulnerability Database (NVD) directly.
Azure Pipelines Task
steps:
- task: dependency-check-build-task@6
displayName: 'OWASP Dependency-Check'
inputs:
projectName: '$(Build.Repository.Name)'
scanPath: '$(Build.SourcesDirectory)'
format: 'HTML,JSON,SARIF'
failOnCVSS: '7'
suppressionPath: '$(Build.SourcesDirectory)/.dependency-check-suppression.xml'
enableRetired: true
enableExperimental: true
- task: PublishBuildArtifacts@1
inputs:
pathToPublish: '$(Common.TestResultsDirectory)/dependency-check'
artifactName: 'owasp-dependency-check'
condition: always()
Suppression File for False Positives
<?xml version="1.0" encoding="UTF-8"?>
<suppressions xmlns="https://jeremylong.github.io/DependencyCheck/dependency-suppression.1.3.xsd">
<!-- Test-only dependency, not deployed to production -->
<suppress>
<notes>jest is dev-only, CVE does not apply to production</notes>
<packageUrl regex="true">^pkg:npm/jest@.*$</packageUrl>
<cve>CVE-2024-55555</cve>
</suppress>
<!-- False positive: CVE applies to Java implementation, not JavaScript -->
<suppress>
<notes>This CVE targets the Java XML parser, not the JS package of the same name</notes>
<packageUrl regex="true">^pkg:npm/xml-parser@.*$</packageUrl>
<vulnerabilityName regex="true">^CVE-2024-\d+$</vulnerabilityName>
</suppress>
<!-- Accepted risk with mitigation in place -->
<suppress until="2026-06-01Z">
<notes>No fix available. Mitigated by input validation middleware. Review by June 2026.</notes>
<packageUrl regex="true">^pkg:npm/legacy-parser@1\.2\.\d+$</packageUrl>
<cve>CVE-2025-12345</cve>
</suppress>
</suppressions>
License Compliance Scanning
Dependency vulnerabilities are not the only risk. License incompatibilities can create legal liability.
var exec = require("child_process").execSync;
var fs = require("fs");
// License compliance checker
var ALLOWED_LICENSES = [
"MIT", "ISC", "BSD-2-Clause", "BSD-3-Clause", "Apache-2.0",
"0BSD", "CC0-1.0", "Unlicense", "BlueOak-1.0.0"
];
var RESTRICTED_LICENSES = [
"GPL-2.0", "GPL-3.0", "AGPL-3.0", "LGPL-2.1", "LGPL-3.0",
"SSPL-1.0", "BSL-1.1", "EUPL-1.1", "MPL-2.0"
];
function checkLicenses() {
var output = exec("npm ls --all --json 2>/dev/null", { encoding: "utf8" });
var tree = JSON.parse(output);
var violations = [];
var warnings = [];
var unknown = [];
var seen = {}; // avoid re-checking packages reachable through multiple paths
function checkPackage(name, info, depth) {
if (!info || depth > 10) return;
var key = name + "@" + (info.version || "");
if (seen[key]) return;
seen[key] = true;
// Read package.json for license info
var pkgPath = "node_modules/" + name + "/package.json";
try {
var pkg = JSON.parse(fs.readFileSync(pkgPath, "utf8"));
var license = pkg.license || "UNKNOWN";
if (typeof license === "object") {
license = license.type || "UNKNOWN";
}
if (RESTRICTED_LICENSES.indexOf(license) !== -1) {
violations.push({ package: name, version: info.version, license: license });
} else if (ALLOWED_LICENSES.indexOf(license) === -1 && license !== "UNKNOWN") {
warnings.push({ package: name, version: info.version, license: license });
} else if (license === "UNKNOWN") {
unknown.push({ package: name, version: info.version });
}
} catch (e) {
// Package might be hoisted or not installed
}
// Check sub-dependencies
if (info.dependencies) {
Object.keys(info.dependencies).forEach(function(dep) {
checkPackage(dep, info.dependencies[dep], depth + 1);
});
}
}
if (tree.dependencies) {
Object.keys(tree.dependencies).forEach(function(dep) {
checkPackage(dep, tree.dependencies[dep], 0);
});
}
console.log("License Compliance Report:");
console.log(" Violations (restricted): " + violations.length);
console.log(" Warnings (unknown allowed status): " + warnings.length);
console.log(" Unknown license: " + unknown.length);
if (violations.length > 0) {
console.log("\nRestricted License Violations:");
violations.forEach(function(v) {
console.log(" " + v.package + "@" + v.version + " [" + v.license + "]");
});
}
if (warnings.length > 0) {
console.log("\nLicense Warnings (review required):");
warnings.forEach(function(w) {
console.log(" " + w.package + "@" + w.version + " [" + w.license + "]");
});
}
return { violations: violations, warnings: warnings, unknown: unknown };
}
var results = checkLicenses();
if (results.violations.length > 0) {
console.log("##vso[task.complete result=Failed;]License compliance violations found");
process.exit(1);
}
Automated Remediation Workflows
Scanning finds problems. Automation fixes them.
Auto-Fix Pull Requests
var exec = require("child_process").execSync;
var https = require("https");
var fs = require("fs");
// Automated dependency update and PR creation
function createRemediationPR(organization, project, repository, pat) {
// Run npm audit fix in dry-run mode first
var dryRun;
try {
dryRun = exec("npm audit fix --dry-run --json", { encoding: "utf8" });
} catch (e) {
dryRun = e.stdout;
}
var fixResult = JSON.parse(dryRun);
if (!fixResult.added && !fixResult.removed && !fixResult.changed) {
console.log("No automatic fixes available");
return Promise.resolve(null);
}
console.log("Available fixes:");
console.log(" Added: " + (fixResult.added || 0));
console.log(" Removed: " + (fixResult.removed || 0));
console.log(" Changed: " + (fixResult.changed || 0));
// Apply the fixes
exec("npm audit fix");
// Check what changed
var diff = exec("git diff package-lock.json", { encoding: "utf8" });
if (!diff) {
console.log("No changes to commit");
return Promise.resolve(null);
}
var branchName = "security/dependency-update-" + new Date().toISOString().slice(0, 10);
// Create branch and commit
exec("git checkout -b " + branchName);
exec("git add package.json package-lock.json");
exec('git commit -m "fix: update dependencies to resolve vulnerabilities"');
exec("git push origin " + branchName);
// Create PR via Azure DevOps API
var prData = JSON.stringify({
sourceRefName: "refs/heads/" + branchName,
targetRefName: "refs/heads/main",
title: "Security: Automated dependency vulnerability fixes",
description: "## Automated Security Update\n\n"
+ "This PR was created automatically by the dependency scanning pipeline.\n\n"
+ "### Changes\n"
+ "- Updated vulnerable dependencies via `npm audit fix`\n"
+ "- Added: " + (fixResult.added || 0) + " packages\n"
+ "- Removed: " + (fixResult.removed || 0) + " packages\n"
+ "- Changed: " + (fixResult.changed || 0) + " packages\n\n"
+ "### Review Checklist\n"
+ "- [ ] Verify tests pass\n"
+ "- [ ] Check for breaking changes in updated packages\n"
+ "- [ ] Confirm no new vulnerabilities introduced\n"
});
var options = {
hostname: "dev.azure.com",
path: "/" + organization + "/" + project + "/_apis/git/repositories/" + repository + "/pullrequests?api-version=7.1",
method: "POST",
headers: {
"Authorization": "Basic " + Buffer.from(":" + pat).toString("base64"),
"Content-Type": "application/json",
"Content-Length": Buffer.byteLength(prData)
}
};
return new Promise(function(resolve, reject) {
var req = https.request(options, function(res) {
var body = "";
res.on("data", function(chunk) { body += chunk; });
res.on("end", function() {
var pr = JSON.parse(body);
console.log("Created PR #" + pr.pullRequestId + ": " + pr.title);
resolve(pr);
});
});
req.on("error", reject);
req.write(prData);
req.end();
});
}
Scheduled Vulnerability Monitoring
# Scheduled pipeline for continuous vulnerability monitoring
trigger: none
schedules:
- cron: '0 6 * * 1' # Every Monday at 6 AM UTC
displayName: 'Weekly vulnerability scan'
branches:
include: [main]
always: true
pool:
vmImage: 'ubuntu-latest'
steps:
- task: NodeTool@0
inputs:
versionSpec: '20.x'
- script: npm ci --ignore-scripts
displayName: 'Install dependencies'
- script: |
npm audit --json > audit.json || true
node scripts/vulnerability-report.js audit.json
displayName: 'Generate vulnerability report'
- script: |
node scripts/send-vulnerability-alert.js
displayName: 'Send alerts for new vulnerabilities'
condition: and(succeeded(), eq(variables['hasNewVulns'], 'true'))
env:
WEBHOOK_URL: $(TEAMS_WEBHOOK_URL)
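The send-vulnerability-alert.js script referenced above is not shown in full; its core is a message builder plus an HTTPS POST to the webhook. The builder half might look like this sketch (the payload shape follows the legacy Teams MessageCard webhook format; the field choices and function name are assumptions):

```javascript
// Build a Teams MessageCard summarizing new vulnerabilities.
function buildAlert(newVulns) {
  var hasCritical = newVulns.some(function(v) { return v.severity === "critical"; });
  return {
    "@type": "MessageCard",
    "@context": "https://schema.org/extensions",
    summary: newVulns.length + " new vulnerabilities",
    themeColor: hasCritical ? "dc3545" : "fd7e14",
    title: "Dependency scan: " + newVulns.length + " new finding(s)",
    text: newVulns.map(function(v) {
      return "- **" + v.package + "** (" + v.severity + "): " + v.title;
    }).join("\n")
  };
}

var card = buildAlert([
  { package: "axios", severity: "critical", title: "Server-Side Request Forgery" }
]);
console.log(JSON.stringify(card, null, 2));
```

The remaining half is a standard `https.request` POST of the serialized card to `process.env.WEBHOOK_URL`, much like the PR-creation request shown earlier.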
Complete Working Example: Vulnerability Management Dashboard
var fs = require("fs");
var https = require("https");
var exec = require("child_process").execSync;
// ============================================================
// Dependency Vulnerability Management Tool
// Scans, reports, tracks, and helps remediate vulnerabilities
// ============================================================
var config = {
organization: process.env.ADO_ORG,
project: process.env.ADO_PROJECT,
pat: process.env.ADO_PAT,
thresholds: {
critical: 0,
high: 2,
medium: 10
}
};
// Run npm audit and parse results
function runAudit() {
var output;
try {
output = exec("npm audit --json", { encoding: "utf8" });
} catch (e) {
output = e.stdout;
}
var report = JSON.parse(output);
var vulns = report.vulnerabilities || {};
var results = [];
Object.keys(vulns).forEach(function(pkg) {
var v = vulns[pkg];
var advisory = v.via && v.via[0];
results.push({
package: pkg,
severity: v.severity,
range: v.range,
fixAvailable: !!v.fixAvailable,
isDirect: v.isDirect || false,
title: advisory && advisory.title ? advisory.title : "Unknown",
url: advisory && advisory.url ? advisory.url : null,
cwe: advisory && advisory.cwe ? advisory.cwe : [],
cvss: advisory && advisory.cvss ? advisory.cvss.score : null
});
});
// Sort by severity then CVSS score
// 1-based ranks so the || fallback never collides with a real severity
var severityOrder = { critical: 1, high: 2, moderate: 3, low: 4, info: 5 };
results.sort(function(a, b) {
var sevDiff = (severityOrder[a.severity] || 6) - (severityOrder[b.severity] || 6);
if (sevDiff !== 0) return sevDiff;
return (b.cvss || 0) - (a.cvss || 0);
});
return results;
}
// Compare with previous scan to find new vulnerabilities
function findNewVulnerabilities(current, previousFile) {
if (!fs.existsSync(previousFile)) {
return current; // First scan, all are new
}
var previous = JSON.parse(fs.readFileSync(previousFile, "utf8"));
var previousKeys = {};
previous.forEach(function(v) {
previousKeys[v.package + "@" + v.range] = true;
});
return current.filter(function(v) {
return !previousKeys[v.package + "@" + v.range];
});
}
// Generate HTML dashboard
function generateDashboard(vulnerabilities, outputFile) {
var counts = { critical: 0, high: 0, moderate: 0, low: 0 };
var fixableCount = 0;
var directCount = 0;
vulnerabilities.forEach(function(v) {
counts[v.severity] = (counts[v.severity] || 0) + 1;
if (v.fixAvailable) fixableCount++;
if (v.isDirect) directCount++;
});
var html = '<!DOCTYPE html>\n<html>\n<head>\n'
+ '<title>Dependency Vulnerability Dashboard</title>\n'
+ '<style>\n'
+ 'body { font-family: -apple-system, sans-serif; max-width: 1000px; margin: 0 auto; padding: 20px; background: #f5f5f5; }\n'
+ '.cards { display: grid; grid-template-columns: repeat(4, 1fr); gap: 15px; margin: 20px 0; }\n'
+ '.card { padding: 20px; border-radius: 8px; text-align: center; color: white; }\n'
+ '.card .count { font-size: 36px; font-weight: bold; }\n'
+ '.card .label { font-size: 14px; opacity: 0.9; }\n'
+ '.critical { background: #dc3545; } .high { background: #fd7e14; }\n'
+ '.moderate { background: #ffc107; color: #333; } .low { background: #28a745; }\n'
+ 'table { width: 100%; border-collapse: collapse; background: white; border-radius: 8px; overflow: hidden; }\n'
+ 'th { background: #343a40; color: white; padding: 12px; text-align: left; }\n'
+ 'td { padding: 10px 12px; border-bottom: 1px solid #eee; }\n'
+ '.badge { padding: 3px 8px; border-radius: 4px; font-size: 12px; color: white; }\n'
+ '.fix-yes { color: #28a745; font-weight: bold; } .fix-no { color: #dc3545; }\n'
+ '.summary { background: white; padding: 20px; border-radius: 8px; margin: 20px 0; }\n'
+ '</style>\n</head>\n<body>\n'
+ '<h1>Dependency Vulnerability Dashboard</h1>\n'
+ '<p>Generated: ' + new Date().toISOString() + ' | Total packages scanned: '
+ (function() {
// Count every node in the npm ls dependency tree, excluding the root project
var out = exec("npm ls --all --json 2>/dev/null || true", { encoding: "utf8" });
var tree = out ? JSON.parse(out) : {};
var c = 0;
(function walk(d) { c++; Object.keys(d.dependencies || {}).forEach(function(k) { walk(d.dependencies[k]); }); })(tree);
return c - 1;
})()
+ '</p>\n'
+ '<div class="cards">\n'
+ '<div class="card critical"><div class="count">' + counts.critical + '</div><div class="label">Critical</div></div>\n'
+ '<div class="card high"><div class="count">' + counts.high + '</div><div class="label">High</div></div>\n'
+ '<div class="card moderate"><div class="count">' + counts.moderate + '</div><div class="label">Moderate</div></div>\n'
+ '<div class="card low"><div class="count">' + counts.low + '</div><div class="label">Low</div></div>\n'
+ '</div>\n'
+ '<div class="summary">\n'
+ '<strong>Summary:</strong> ' + vulnerabilities.length + ' vulnerabilities found | '
+ fixableCount + ' fixable | ' + directCount + ' in direct dependencies\n'
+ '</div>\n'
+ '<table>\n'
+ '<tr><th>Package</th><th>Severity</th><th>CVSS</th><th>Title</th><th>Fix</th><th>Direct</th></tr>\n';
vulnerabilities.forEach(function(v) {
html += '<tr>'
+ '<td><strong>' + v.package + '</strong><br><small>' + v.range + '</small></td>'
+ '<td><span class="badge ' + v.severity + '">' + v.severity.toUpperCase() + '</span></td>'
+ '<td>' + (v.cvss || 'N/A') + '</td>'
+ '<td>' + (v.url ? '<a href="' + v.url + '">' + v.title + '</a>' : v.title) + '</td>'
+ '<td class="' + (v.fixAvailable ? 'fix-yes' : 'fix-no') + '">' + (v.fixAvailable ? 'Yes' : 'No') + '</td>'
+ '<td>' + (v.isDirect ? 'Direct' : 'Transitive') + '</td>'
+ '</tr>\n';
});
html += '</table>\n</body>\n</html>';
fs.writeFileSync(outputFile, html);
console.log("Dashboard written to " + outputFile);
}
// Evaluate against thresholds
function evaluateThresholds(vulnerabilities) {
var counts = { critical: 0, high: 0, moderate: 0, low: 0 };
vulnerabilities.forEach(function(v) {
counts[v.severity] = (counts[v.severity] || 0) + 1;
});
var passed = true;
var messages = [];
if (counts.critical > config.thresholds.critical) {
passed = false;
messages.push("Critical: " + counts.critical + " (max " + config.thresholds.critical + ")");
}
if (counts.high > config.thresholds.high) {
passed = false;
messages.push("High: " + counts.high + " (max " + config.thresholds.high + ")");
}
if (counts.moderate > config.thresholds.medium) {
passed = false;
messages.push("Moderate: " + counts.moderate + " (max " + config.thresholds.medium + ")");
}
console.log("\n=== Threshold Evaluation ===");
console.log("Status: " + (passed ? "PASSED" : "FAILED"));
if (messages.length > 0) {
console.log("Exceeded thresholds:");
messages.forEach(function(m) { console.log(" " + m); });
}
return passed;
}
// Main execution
console.log("=== Dependency Vulnerability Scan ===\n");
var vulnerabilities = runAudit();
console.log("Found " + vulnerabilities.length + " vulnerabilities\n");
var newVulns = findNewVulnerabilities(vulnerabilities, "previous-scan.json");
if (newVulns.length > 0) {
console.log("NEW since last scan: " + newVulns.length);
newVulns.forEach(function(v) {
console.log(" [NEW] " + v.package + " (" + v.severity + "): " + v.title);
});
}
// Save current scan for next comparison
fs.writeFileSync("previous-scan.json", JSON.stringify(vulnerabilities, null, 2));
// Generate dashboard
generateDashboard(vulnerabilities, "vulnerability-dashboard.html");
// Evaluate thresholds
var passed = evaluateThresholds(vulnerabilities);
if (!passed) {
console.log("\n##vso[task.complete result=Failed;]Vulnerability thresholds exceeded");
process.exit(1);
}
Output:
=== Dependency Vulnerability Scan ===
Found 14 vulnerabilities
NEW since last scan: 2
[NEW] axios (high): Server-Side Request Forgery
[NEW] jsonwebtoken (moderate): Improper verification of cryptographic signature
Dashboard written to vulnerability-dashboard.html
=== Threshold Evaluation ===
Status: PASSED
Common Issues & Troubleshooting
npm audit Returns Different Results Locally vs Pipeline
Local: 2 critical, 5 high
Pipeline: 0 critical, 3 high
This happens when package-lock.json is out of sync. The pipeline runs npm ci which uses the lockfile exactly, while local npm install may resolve differently:
# Fix: regenerate lockfile and commit
rm -rf node_modules package-lock.json
npm install
git add package-lock.json
git commit -m "fix: regenerate lockfile for consistent audit results"
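You can also detect drift before it causes confusing audit diffs. This sketch compares package.json against the lockfile's root entry (it assumes npm lockfileVersion 2 or 3, where the root project lives under `packages[""]`) and uses hand-built sample objects; the function name is hypothetical:

```javascript
// Compare declared deps in package.json with the lockfile's root entry.
// Any mismatch means `npm install` locally and `npm ci` in the pipeline
// can resolve different versions, and therefore audit differently.
function lockfileDrift(pkg, lock) {
  var root = (lock.packages && lock.packages[""]) || {};
  var drift = [];
  ["dependencies", "devDependencies"].forEach(function(section) {
    var declared = pkg[section] || {};
    var locked = root[section] || {};
    Object.keys(declared).forEach(function(name) {
      if (locked[name] !== declared[name]) {
        drift.push(section + "/" + name + ": declared " + declared[name]
          + ", locked " + (locked[name] || "missing"));
      }
    });
  });
  return drift;
}

// Hypothetical sample: package.json was bumped but the lockfile was not
var pkg = { dependencies: { express: "^4.19.0" } };
var lock = { lockfileVersion: 3, packages: { "": { dependencies: { express: "^4.18.0" } } } };
console.log(lockfileDrift(pkg, lock));
// [ 'dependencies/express: declared ^4.19.0, locked ^4.18.0' ]
```

In a pipeline, read the real files with `fs.readFileSync` and fail the step when the returned array is non-empty.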
"npm audit" Exits with Code 1 Even for Low Vulnerabilities
npm audit exits non-zero when any vulnerabilities exist. Your pipeline script must handle this:
# Wrong: pipeline fails even for informational findings
npm audit
# Right: capture exit code and evaluate separately
npm audit --json > audit.json || true
node evaluate-audit.js audit.json
Snyk Reports Vulnerability in Package You Do Not Use
Transitive dependencies often include optional or platform-specific packages that are never loaded:
# Check if the vulnerable package is actually loaded at runtime
node -e "
try {
require.resolve('vulnerable-package');
console.log('Package IS resolvable at runtime');
} catch (e) {
console.log('Package is NOT loaded at runtime - likely optional');
}
"
If the package is not loaded, add it to your .snyk policy file:
# .snyk
ignore:
'SNYK-JS-VULN-12345':
- '*':
reason: 'Optional dependency, not loaded at runtime'
expires: '2026-06-01T00:00:00.000Z'
OWASP Dependency-Check Takes Too Long
The first run downloads the full NVD database, which can take 20+ minutes:
# Cache the NVD database between runs
steps:
- task: Cache@2
inputs:
key: 'owasp-dc | "$(Agent.OS)"'
path: '$(Agent.TempDirectory)/dependency-check-data'
- task: dependency-check-build-task@6
inputs:
dataDirectory: '$(Agent.TempDirectory)/dependency-check-data'
Vulnerability Has No Fix Available
When a critical vulnerability has no patch:
// Document accepted risk with compensating controls
var acceptedRisks = [
{
package: "legacy-xml-parser",
vulnerability: "CVE-2025-99999",
severity: "high",
reason: "No fix available from maintainer",
compensatingControls: [
"Input validation middleware strips XML before parsing",
"WAF rule blocks payloads matching exploit pattern",
"Package is isolated in sandboxed worker process"
],
reviewDate: "2026-04-01",
owner: "[email protected]"
}
];
// Write to audit trail
fs.writeFileSync("accepted-risks.json", JSON.stringify(acceptedRisks, null, 2));
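To keep accepted risks from going stale silently, a small check can fail the build once a review date passes. The `expiredRisks` function and sample data below are hypothetical, mirroring the structure of the audit-trail entries above:

```javascript
// Filter accepted risks whose review date has passed; the pipeline can
// fail (or alert) when this returns a non-empty list.
function expiredRisks(risks, now) {
  return risks.filter(function(r) {
    return new Date(r.reviewDate).getTime() <= now.getTime();
  });
}

var risks = [
  { package: "legacy-xml-parser", vulnerability: "CVE-2025-99999", reviewDate: "2026-04-01" },
  { package: "other-lib", vulnerability: "CVE-2025-11111", reviewDate: "2027-01-15" }
];
console.log(expiredRisks(risks, new Date("2026-05-01")).length); // 1
```

Run it against accepted-risks.json in the same stage as the audit so an expired acceptance blocks the build just like a fresh vulnerability would.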
Best Practices
- Scan on every PR, not just main — Vulnerabilities introduced in feature branches should be caught before merge. Run audit on every pull request build.
- Separate production and dev dependency scanning — Failing a build because a test utility has a moderate vulnerability wastes developer time. Use `npm audit --omit=dev` for production gate decisions.
- Set graduated thresholds, not zero tolerance — Zero critical, up to 3 high, up to 10 moderate is a reasonable starting point. Zero tolerance on all severities leads to pipeline fatigue and ignored results.
- Automate fix PRs for simple upgrades — When `npm audit fix` can resolve a vulnerability without breaking changes, create the PR automatically. Developers review and merge faster than they triage and fix manually.
- Track vulnerability trends over time — A dashboard showing total vulnerabilities per week tells you if your dependency hygiene is improving or degrading. Store scan results as pipeline artifacts.
- Use lockfiles and pin versions — `package-lock.json` ensures reproducible installs. Pin critical dependencies to exact versions and use Dependabot or Renovate for controlled updates.
- Run license compliance alongside vulnerability scanning — A clean vulnerability scan means nothing if a dependency's license is incompatible with your product. Check licenses in the same pipeline stage.
- Set up weekly scheduled scans against main — New CVEs are published daily. A passing build from last week may have new critical vulnerabilities today. Scheduled scans catch these between code changes.