Webhook Automation with Azure DevOps
Build webhook-driven automation with Azure DevOps service hooks for event routing, auto-assignment, and notification triggers
Azure DevOps service hooks let you push real-time event notifications to external systems via webhooks. Instead of polling APIs on a timer and hoping you catch changes, webhooks deliver events the moment they happen — a build fails, a pull request opens, a work item transitions. This article walks through building production-grade webhook receivers in Node.js that handle Azure DevOps events reliably at scale.
Prerequisites
- An Azure DevOps organization with a project
- Node.js v16 or later
- Basic familiarity with Express.js
- A publicly accessible endpoint (or ngrok for local development)
- A Personal Access Token (PAT) with appropriate scopes for API management
Service Hooks Overview
Azure DevOps service hooks are the event distribution system built into the platform. When something happens in your project — a commit is pushed, a build completes, a work item changes state — the service hooks infrastructure captures that event and forwards it to any registered consumers.
Webhooks are one consumer type among several. Azure DevOps also supports pushing events to Slack, Teams, Jenkins, Trello, and other services natively. But webhooks give you the most flexibility because you control the receiver entirely. You parse the payload, decide what to do with it, and execute whatever logic your workflow demands.
The flow is straightforward:
- You register a subscription in Azure DevOps, specifying which event type you care about and where to send it.
- When that event fires, Azure DevOps serializes the event data into a JSON payload.
- Azure DevOps POSTs that payload to your webhook URL.
- Your server processes the event and returns a 200 response.
If your server does not respond with a 2xx status within 20 seconds, Azure DevOps marks the delivery as failed and queues it for retry.
Supported Event Types
Azure DevOps organizes events into categories. Here are the ones you will use most often:
Build Events
- build.complete — A build finishes (succeeded, partially succeeded, or failed)
Code Events
- git.push — Code is pushed to a repository
- git.pullrequest.created — A new pull request is opened
- git.pullrequest.updated — A pull request is updated (new commits, status change)
- git.pullrequest.merged — A pull request is merged
- tfvc.checkin — A TFVC changeset is checked in
Work Item Events
- workitem.created — A new work item is created
- workitem.updated — A work item field changes
- workitem.deleted — A work item is deleted
- workitem.restored — A deleted work item is restored
- workitem.commented — A comment is added to a work item
Release Events
- ms.vss-release.release-created-event — A release is created
- ms.vss-release.release-abandoned-event — A release is abandoned
- ms.vss-release.deployment-completed-event — A deployment finishes
Pipeline Events
- ms.vss-pipelines.run-state-changed-event — A pipeline run changes state
- ms.vss-pipelines.stage-state-changed-event — A pipeline stage changes state
Each event type has a specific payload structure, but they all share a common envelope format.
Configuring Webhook Subscriptions
You can set up webhook subscriptions through the Azure DevOps UI or the REST API. The UI approach is fine for a handful of subscriptions, but the API is what you want for managing subscriptions at scale.
Through the UI:
- Navigate to Project Settings > Service hooks.
- Click "Create subscription."
- Select "Web Hooks" as the service.
- Choose the event type (e.g., "Build completed").
- Configure filters (specific build definition, branch, etc.).
- Enter your webhook URL.
- Test the connection and save.
Through the REST API:
var https = require("https");
function createSubscription(orgUrl, project, pat, config) {
var authHeader = "Basic " + Buffer.from(":" + pat).toString("base64");
var body = JSON.stringify({
publisherId: config.publisherId,
eventType: config.eventType,
resourceVersion: "1.0",
consumerId: "webHooks",
consumerActionId: "httpRequest",
publisherInputs: config.filters || {},
consumerInputs: {
url: config.webhookUrl,
httpHeaders: "X-Webhook-Secret:" + config.secret,
resourceDetailsToSend: "all",
messagesToSend: "all",
detailedMessagesToSend: "all"
}
});
var url = new URL(orgUrl + "/_apis/hooks/subscriptions?api-version=7.1");
var options = {
hostname: url.hostname,
path: url.pathname + url.search,
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": authHeader,
"Content-Length": Buffer.byteLength(body)
}
};
var req = https.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() {
if (res.statusCode >= 200 && res.statusCode < 300) {
console.log("Subscription created:", JSON.parse(data).id);
} else {
console.error("Subscription failed: HTTP " + res.statusCode, data);
}
});
});
req.on("error", function(err) {
console.error("Request error:", err.message);
});
req.write(body);
req.end();
}
// Create a build.complete subscription
createSubscription(
"https://dev.azure.com/myorg",
"MyProject",
process.env.AZURE_PAT,
{
publisherId: "tfs",
eventType: "build.complete",
filters: {
projectId: "your-project-id",
definitionName: "CI-Pipeline"
},
webhookUrl: "https://hooks.example.com/azure-devops",
secret: process.env.WEBHOOK_SECRET
}
);
Webhook Payload Structure
Every webhook payload from Azure DevOps follows the same envelope structure. Understanding this format is critical for building a reliable handler.
{
"subscriptionId": "00000000-0000-0000-0000-000000000000",
"notificationId": 42,
"id": "unique-event-id",
"eventType": "build.complete",
"publisherId": "tfs",
"message": {
"text": "Build 20260213.1 succeeded",
"html": "<a href=\"...\">Build 20260213.1</a> succeeded",
"markdown": "[Build 20260213.1](...) succeeded"
},
"detailedMessage": {
"text": "Build 20260213.1 succeeded\nTriggered by: Shane Larson\nBranch: refs/heads/main",
"html": "...",
"markdown": "..."
},
"resource": {
// Event-specific data lives here
},
"resourceVersion": "1.0",
"resourceContainers": {
"collection": { "id": "...", "baseUrl": "..." },
"account": { "id": "...", "baseUrl": "..." },
"project": { "id": "...", "baseUrl": "..." }
},
"createdDate": "2026-02-13T14:30:00.000Z"
}
The eventType field tells you what happened. The resource object contains the actual event data — a build result, a pull request object, a work item, etc. The resourceContainers section gives you the org, project, and collection context so your handler knows where to make follow-up API calls.
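Since every delivery shares this envelope, a handler can normalize the fields it cares about up front. A minimal sketch (field names taken from the envelope above; the helper name is ours):

```javascript
// Pull the envelope fields a handler typically needs into one object.
// Falls back to null/empty values when optional blocks are absent.
function describeEvent(payload) {
  var containers = payload.resourceContainers || {};
  return {
    type: payload.eventType,
    notificationId: payload.notificationId,
    projectId: containers.project ? containers.project.id : null,
    summary: payload.message ? payload.message.text : ""
  };
}
```

Logging `describeEvent(payload)` on arrival gives a compact one-line record per delivery before any event-specific processing runs.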
Building a Webhook Receiver with Express.js
Here is a minimal but production-ready webhook receiver. It validates incoming requests, routes events to the correct handler, and returns appropriate status codes.
var express = require("express");
var crypto = require("crypto");
var app = express();
app.use(express.json({ limit: "1mb" }));
var WEBHOOK_SECRET = process.env.WEBHOOK_SECRET || "";
function verifySignature(req) {
if (!WEBHOOK_SECRET) return true; // Skip verification if no secret configured
var signature = req.headers["x-webhook-secret"];
if (!signature) return false;
var provided = Buffer.from(signature, "utf8");
var expected = Buffer.from(WEBHOOK_SECRET, "utf8");
// timingSafeEqual throws if the buffers differ in length, so check that first
if (provided.length !== expected.length) return false;
return crypto.timingSafeEqual(provided, expected);
}
app.post("/webhooks/azure-devops", function(req, res) {
if (!verifySignature(req)) {
console.error("Invalid webhook signature");
return res.status(401).json({ error: "Invalid signature" });
}
var payload = req.body;
var eventType = payload.eventType;
if (!eventType) {
return res.status(400).json({ error: "Missing eventType" });
}
console.log("Received event:", eventType, "notification:", payload.notificationId);
// Acknowledge immediately, process asynchronously
res.status(200).json({ received: true });
// Process after responding
processEvent(eventType, payload).catch(function(err) {
console.error("Error processing event:", eventType, err.message);
});
});
function processEvent(eventType, payload) {
var handler = eventHandlers[eventType];
if (!handler) {
console.log("No handler for event type:", eventType);
return Promise.resolve();
}
return handler(payload);
}
var eventHandlers = {};
app.listen(process.env.PORT || 3000, function() {
console.log("Webhook receiver listening on port", process.env.PORT || 3000);
});
The key design decision here is responding with 200 before processing. Azure DevOps gives you 20 seconds to respond. If your handler logic takes longer — making API calls, writing to databases, sending notifications — you risk timeouts and unnecessary retries. Acknowledge receipt immediately, then process in the background.
Payload Verification and Security
Webhook security matters. Without verification, anyone who discovers your endpoint URL can send fabricated events and trigger your automation incorrectly.
Azure DevOps supports custom HTTP headers on webhook deliveries, but it sends the header values verbatim; it does not compute a signature over the payload for you. The simplest pattern is therefore a shared secret in a header, compared with a constant-time check as above. If your own sender or a signing proxy sits in front of the receiver, you can strengthen this to an HMAC over the request body:
function createHmacSignature(body, secret) {
return crypto
.createHmac("sha256", secret)
.update(JSON.stringify(body))
.digest("hex");
}
function verifyHmac(req, secret) {
var expected = req.headers["x-hub-signature"];
if (!expected) return false;
// Caveat: signing JSON.stringify(req.body) only matches if the sender
// serializes identically; for robustness, compute the HMAC over the raw
// request body (capture it with the "verify" option of express.json).
var computed = createHmacSignature(req.body, secret);
var a = Buffer.from(expected, "utf8");
var b = Buffer.from(computed, "utf8");
if (a.length !== b.length) return false; // timingSafeEqual throws on length mismatch
return crypto.timingSafeEqual(a, b);
}
Beyond the shared secret, add these layers:
- IP allowlisting: Azure DevOps publishes its IP ranges. Restrict your endpoint to those ranges at the network level.
- Rate limiting: Cap the number of requests your endpoint accepts per minute. A misconfigured subscription can flood your server.
- Request validation: Verify the publisherId, eventType, and resourceContainers fields match your expected project. Drop anything that does not match.
var rateLimit = require("express-rate-limit");
var webhookLimiter = rateLimit({
windowMs: 60 * 1000,
max: 100,
message: { error: "Too many requests" }
});
app.use("/webhooks", webhookLimiter);
function validatePayloadOrigin(payload, expectedProjectId) {
if (!payload.resourceContainers) return false;
if (!payload.resourceContainers.project) return false;
return payload.resourceContainers.project.id === expectedProjectId;
}
Event Filtering Strategies
Not every event needs a response. Azure DevOps lets you filter at the subscription level — only trigger for specific build definitions, specific branches, or specific work item types. But sometimes you need finer-grained filtering in your handler.
var eventFilters = {
"build.complete": function(payload) {
var build = payload.resource;
// Only process failed builds on main branch
if (build.result !== "failed") return false;
if (build.sourceBranch !== "refs/heads/main") return false;
return true;
},
"git.pullrequest.created": function(payload) {
var pr = payload.resource;
// Skip draft PRs
if (pr.isDraft) return false;
// Only PRs targeting main
if (pr.targetRefName !== "refs/heads/main") return false;
return true;
},
"workitem.updated": function(payload) {
var item = payload.resource;
var fields = item.revision && item.revision.fields;
if (!fields) return false;
// Only process state changes
var changedFields = payload.resource.fields || {};
return changedFields["System.State"] !== undefined;
}
};
function shouldProcess(eventType, payload) {
var filter = eventFilters[eventType];
if (!filter) return true; // No filter means process everything
return filter(payload);
}
Apply filters before dispatching to handlers. This keeps your handler code focused on business logic rather than input validation.
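For illustration, here is a self-contained version of the filter-then-dispatch wiring, using local names (filters, passes, dispatchFiltered) so it can run standalone alongside the code above:

```javascript
// Filters mirror eventFilters above: return true to process, false to drop.
var filters = {
  "build.complete": function(payload) {
    var build = payload.resource || {};
    return build.result === "failed" && build.sourceBranch === "refs/heads/main";
  }
};

// No filter registered means process everything, as in shouldProcess above.
function passes(eventType, payload) {
  var filter = filters[eventType];
  return filter ? filter(payload) : true;
}

// Check the filter before looking up a handler, so handler code never
// needs to repeat the input checks.
function dispatchFiltered(eventType, payload, handlers) {
  if (!passes(eventType, payload)) return Promise.resolve("filtered");
  var handler = handlers[eventType];
  return handler ? handler(payload) : Promise.resolve("unhandled");
}
```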
Handling Specific Events
Build Completed
The build completed event fires when any build finishes, regardless of result. The resource object contains the full build record.
eventHandlers["build.complete"] = function(payload) {
var build = payload.resource;
var result = build.result; // succeeded, partiallySucceeded, failed, canceled
var definition = build.definition.name;
var branch = build.sourceBranch;
var requestedFor = build.requestedFor.displayName;
var buildUrl = build._links.web.href;
console.log("Build", definition, "on", branch, "result:", result);
if (result === "failed") {
return notifyBuildFailure({
pipeline: definition,
branch: branch,
user: requestedFor,
url: buildUrl,
reason: build.result
});
}
if (result === "succeeded" && branch === "refs/heads/main") {
return triggerDeployment(definition, build.id);
}
return Promise.resolve();
};
Pull Request Created
When a PR is created, you often want to auto-assign reviewers, run checks, or post comments.
eventHandlers["git.pullrequest.created"] = function(payload) {
var pr = payload.resource;
var repoName = pr.repository.name;
var authorId = pr.createdBy.id;
var targetBranch = pr.targetRefName;
var prId = pr.pullRequestId;
console.log("PR #" + prId + " created in", repoName, "by", pr.createdBy.displayName);
var tasks = [];
// Auto-assign reviewers based on repository
var reviewers = getReviewersForRepo(repoName, authorId);
if (reviewers.length > 0) {
tasks.push(assignReviewers(pr, reviewers));
}
// Add required labels
if (targetBranch === "refs/heads/main") {
tasks.push(addPrLabel(pr, "needs-review"));
}
return Promise.all(tasks);
};
function getReviewersForRepo(repoName, authorId) {
var reviewerMap = {
"api-service": ["user-id-1", "user-id-2", "user-id-3"],
"web-frontend": ["user-id-4", "user-id-5"],
"infrastructure": ["user-id-6", "user-id-7"]
};
var candidates = reviewerMap[repoName] || [];
// Do not assign the PR author as a reviewer
return candidates.filter(function(id) {
return id !== authorId;
});
}
Work Item Updated
Work item update events include both the current state and the changed fields. Use this to build workflow automation.
eventHandlers["workitem.updated"] = function(payload) {
var resource = payload.resource;
var workItemId = resource.workItemId || resource.id;
var fields = resource.revision.fields;
var changedFields = resource.fields || {};
// Detect state transition
if (changedFields["System.State"]) {
var oldState = changedFields["System.State"].oldValue;
var newState = changedFields["System.State"].newValue;
console.log("Work item", workItemId, "moved from", oldState, "to", newState);
if (newState === "Resolved") {
return onWorkItemResolved(workItemId, fields);
}
if (newState === "Closed") {
return onWorkItemClosed(workItemId, fields);
}
}
return Promise.resolve();
};
function onWorkItemResolved(workItemId, fields) {
var assignedTo = fields["System.AssignedTo"];
var title = fields["System.Title"];
// Notify the reporter that work is ready for verification
return sendNotification({
type: "work-item-resolved",
workItemId: workItemId,
title: title,
assignedTo: assignedTo
});
}
Webhook Retry Behavior
Azure DevOps retries failed webhook deliveries using an exponential backoff strategy. Understanding this behavior is important for building idempotent handlers.
The retry schedule is roughly:
- First retry: 1 minute after failure
- Second retry: 2 minutes after first retry
- Third retry: 4 minutes after second retry
- Continues with exponential backoff up to a maximum of about 8 retries
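That doubling schedule can be sketched as a simple series. The numbers are illustrative, not an official contract:

```javascript
// Approximate the retry delays described above: the wait doubles each
// attempt, starting at one minute.
function retryDelaysMinutes(maxRetries) {
  var delays = [];
  for (var i = 0; i < maxRetries; i++) {
    delays.push(Math.pow(2, i));
  }
  return delays;
}
// retryDelaysMinutes(4) -> [1, 2, 4, 8]
```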
A delivery is considered failed if:
- Your server returns a non-2xx status code
- The connection times out (20-second timeout)
- The connection is refused or unreachable
This means your handlers must be idempotent. If you receive the same notification twice, the second processing should not cause duplicate side effects. Note that notificationId is a per-subscription counter, not a globally unique value, so if you receive events from more than one subscription, deduplicate on the subscriptionId and notificationId pair.
var processedNotifications = new Map();
var DEDUP_WINDOW_MS = 10 * 60 * 1000; // 10 minutes
function isDuplicate(notificationId) {
if (processedNotifications.has(notificationId)) {
return true;
}
processedNotifications.set(notificationId, Date.now());
return false;
}
// Clean up old entries periodically
setInterval(function() {
var cutoff = Date.now() - DEDUP_WINDOW_MS;
processedNotifications.forEach(function(timestamp, key) {
if (timestamp < cutoff) {
processedNotifications.delete(key);
}
});
}, 60 * 1000);
app.post("/webhooks/azure-devops", function(req, res) {
var payload = req.body;
if (isDuplicate(payload.notificationId)) {
console.log("Duplicate notification:", payload.notificationId);
return res.status(200).json({ received: true, duplicate: true });
}
// ... rest of handler
});
For production systems, replace the in-memory Map with Redis or another shared store so deduplication works across multiple server instances.
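A Redis-backed version can make the check-and-record step atomic with SET using the NX and EX options, assuming an ioredis-style client. The key prefix and TTL here are arbitrary choices, and the key includes the subscription id since notificationId is only unique per subscription:

```javascript
// Atomic dedup: SET ... NX only succeeds when the key does not already
// exist, and EX 600 expires it after the same 10-minute window as above.
// Expects an ioredis-compatible client whose set() resolves to "OK" or null.
function isDuplicateShared(redis, subscriptionId, notificationId) {
  var key = "webhook:dedup:" + subscriptionId + ":" + notificationId;
  return redis.set(key, "1", "EX", 600, "NX").then(function(result) {
    return result === null; // null means the key already existed
  });
}
```

Because the existence check and the insert happen in one Redis command, two receiver instances handling the same retry cannot both decide the event is new.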
Building an Event Bus with Node.js
When your webhook handling grows beyond a few if-else branches, you need a proper event bus. This decouples event reception from event processing and makes it easy to add new handlers without modifying the core receiver.
var EventEmitter = require("events");
function WebhookEventBus() {
this.emitter = new EventEmitter();
this.emitter.setMaxListeners(50);
this.middleware = [];
}
WebhookEventBus.prototype.use = function(fn) {
this.middleware.push(fn);
};
WebhookEventBus.prototype.on = function(eventType, handler) {
this.emitter.on(eventType, handler);
};
WebhookEventBus.prototype.dispatch = function(payload) {
var self = this;
var eventType = payload.eventType;
// Run middleware chain
var context = {
payload: payload,
eventType: eventType,
metadata: {},
skip: false
};
return self.middleware
.reduce(function(chain, fn) {
return chain.then(function() {
if (context.skip) return;
return fn(context);
});
}, Promise.resolve())
.then(function() {
if (context.skip) {
console.log("Event skipped by middleware:", eventType);
return;
}
self.emitter.emit(eventType, context);
self.emitter.emit("*", context); // Wildcard for logging/metrics
});
};
// Usage
var bus = new WebhookEventBus();
// Middleware: enrich payload with project info
bus.use(function(context) {
var containers = context.payload.resourceContainers;
if (containers && containers.project) {
context.metadata.projectId = containers.project.id;
context.metadata.orgUrl = containers.account.baseUrl;
}
});
// Middleware: filter events
bus.use(function(context) {
if (!shouldProcess(context.eventType, context.payload)) {
context.skip = true;
}
});
// Handler: build failures
bus.on("build.complete", function(context) {
var build = context.payload.resource;
if (build.result === "failed") {
notifyBuildFailure(build);
}
});
// Handler: metrics (wildcard)
bus.on("*", function(context) {
trackMetric("webhook.received", {
eventType: context.eventType,
project: context.metadata.projectId
});
});
Webhook Management via REST API
Once you have more than a few subscriptions, managing them through the UI is painful. The REST API gives you full programmatic control.
var https = require("https");
function AzureDevOpsClient(orgUrl, pat) {
this.orgUrl = orgUrl;
this.authHeader = "Basic " + Buffer.from(":" + pat).toString("base64");
}
AzureDevOpsClient.prototype.request = function(method, path, body) {
var self = this;
var url = new URL(self.orgUrl + path);
return new Promise(function(resolve, reject) {
var options = {
hostname: url.hostname,
path: url.pathname + url.search,
method: method,
headers: {
"Content-Type": "application/json",
"Authorization": self.authHeader
}
};
var req = https.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() {
if (res.statusCode >= 200 && res.statusCode < 300) {
resolve(JSON.parse(data));
} else {
reject(new Error("HTTP " + res.statusCode + ": " + data));
}
});
});
req.on("error", reject);
if (body) {
req.write(JSON.stringify(body));
}
req.end();
});
};
AzureDevOpsClient.prototype.listSubscriptions = function() {
return this.request("GET", "/_apis/hooks/subscriptions?api-version=7.1");
};
AzureDevOpsClient.prototype.deleteSubscription = function(subscriptionId) {
return this.request("DELETE", "/_apis/hooks/subscriptions/" + subscriptionId + "?api-version=7.1");
};
AzureDevOpsClient.prototype.getSubscriptionStatus = function(subscriptionId) {
return this.request("GET", "/_apis/hooks/subscriptions/" + subscriptionId + "?api-version=7.1")
.then(function(sub) {
return {
id: sub.id,
eventType: sub.eventType,
status: sub.status,
lastProbationRetryDate: sub.lastProbationRetryDate,
probationRetries: sub.probationRetries
};
});
};
// Audit all subscriptions
var client = new AzureDevOpsClient("https://dev.azure.com/myorg", process.env.AZURE_PAT);
client.listSubscriptions().then(function(result) {
result.value.forEach(function(sub) {
console.log(sub.id, sub.eventType, sub.status, sub.consumerInputs.url);
});
});
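Audits pair naturally with cleanup. Keeping the filtering step pure makes it easy to test; the retired hostname below is a hypothetical placeholder:

```javascript
// Find subscriptions whose webhook URL points at a host you no longer run.
// "retiredHost" would be something like "old-hooks.example.com".
function findStaleSubscriptions(subscriptions, retiredHost) {
  return subscriptions.filter(function(sub) {
    return sub.consumerInputs &&
      typeof sub.consumerInputs.url === "string" &&
      sub.consumerInputs.url.indexOf(retiredHost) !== -1;
  });
}
```

The result can then be fed to client.deleteSubscription from the class above.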
Monitoring Webhook Health
Webhook subscriptions can go stale. Azure DevOps puts a subscription on probation when deliveries fail repeatedly. Probation lifts automatically on the next successful delivery, but if failures continue the platform disables the subscription and events stop flowing entirely until you fix the endpoint and restore it.
Build a health check endpoint that monitors your subscriptions:
var webhookHealth = {
received: 0,
processed: 0,
errors: 0,
lastEvent: null,
eventCounts: {}
};
bus.on("*", function(context) {
webhookHealth.received++;
webhookHealth.lastEvent = new Date().toISOString();
var type = context.eventType;
webhookHealth.eventCounts[type] = (webhookHealth.eventCounts[type] || 0) + 1;
});
app.get("/webhooks/health", function(req, res) {
var now = Date.now();
var lastEventTime = webhookHealth.lastEvent ? new Date(webhookHealth.lastEvent).getTime() : 0;
var minutesSinceLastEvent = lastEventTime ? Math.floor((now - lastEventTime) / 60000) : -1;
var status = "healthy";
if (minutesSinceLastEvent > 60) status = "degraded";
if (minutesSinceLastEvent > 240 || minutesSinceLastEvent === -1) status = "unhealthy";
res.json({
status: status,
uptime: process.uptime(),
received: webhookHealth.received,
processed: webhookHealth.processed,
errors: webhookHealth.errors,
lastEvent: webhookHealth.lastEvent,
minutesSinceLastEvent: minutesSinceLastEvent,
eventCounts: webhookHealth.eventCounts
});
});
Additionally, set up a cron job or monitoring script that queries the Azure DevOps API to check subscription statuses. Alert if any subscription enters probation.
function checkSubscriptionHealth(client, subscriptionIds) {
var checks = subscriptionIds.map(function(id) {
return client.getSubscriptionStatus(id);
});
return Promise.all(checks).then(function(results) {
var unhealthy = results.filter(function(sub) {
return sub.status !== "enabled";
});
if (unhealthy.length > 0) {
console.error("Unhealthy subscriptions:", unhealthy);
sendAlert("Webhook subscriptions in probation: " + unhealthy.length);
}
return results;
});
}
Scaling Webhook Handlers
When event volume grows, a single Express server will not cut it. Here are the patterns that work:
Queue-based processing: Accept the webhook, push the payload to a message queue (Redis, RabbitMQ, SQS), and return 200 immediately. Worker processes consume from the queue at their own pace.
var Redis = require("ioredis");
var redis = new Redis(process.env.REDIS_URL);
app.post("/webhooks/azure-devops", function(req, res) {
if (!verifySignature(req)) {
return res.status(401).json({ error: "Invalid signature" });
}
var payload = req.body;
redis.lpush("webhook-events", JSON.stringify({
receivedAt: new Date().toISOString(),
payload: payload
})).then(function() {
res.status(200).json({ received: true, queued: true });
}).catch(function(err) {
console.error("Failed to queue event:", err.message);
res.status(500).json({ error: "Queue unavailable" });
});
});
// Worker process (separate file)
function startWorker() {
console.log("Worker started, waiting for events...");
function processNext() {
redis.brpop("webhook-events", 30).then(function(result) {
if (result) {
var event = JSON.parse(result[1]);
return processEvent(event.payload.eventType, event.payload);
}
}).then(function() {
setImmediate(processNext);
}).catch(function(err) {
console.error("Worker error:", err.message);
setTimeout(processNext, 5000);
});
}
processNext();
}
Horizontal scaling: Run multiple receiver instances behind a load balancer. Since each request is independent, this scales linearly. Use the Redis-backed deduplication mentioned earlier to handle duplicate deliveries across instances.
Partitioning by event type: Route different event types to different handler services. Build events go to the CI/CD automation service, PR events go to the code review bot, work item events go to the project management integration. Use the webhook URL itself to partition — register different URLs for different event types.
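If you partition inside a single receiver instead, a prefix-based routing table is enough. The downstream service URLs here are hypothetical:

```javascript
// Map event-type prefixes to downstream services. First match wins, so
// list more specific prefixes before broader ones. URLs are placeholders.
var partitions = [
  { prefix: "git.pullrequest.", target: "https://review-bot.example.com/events" },
  { prefix: "git.",             target: "https://repo-audit.example.com/events" },
  { prefix: "build.",           target: "https://ci-bot.example.com/events" },
  { prefix: "workitem.",        target: "https://pm-sync.example.com/events" }
];

function routeFor(eventType) {
  for (var i = 0; i < partitions.length; i++) {
    if (eventType.indexOf(partitions[i].prefix) === 0) {
      return partitions[i].target;
    }
  }
  return null; // unrouted events can be logged and dropped
}
```

The receiver then forwards each payload to routeFor(payload.eventType), keeping every downstream service focused on one event family.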
Complete Working Example
Here is a full webhook server that ties everything together. It handles Azure DevOps events, routes them to specific handlers, auto-assigns reviewers on PRs, posts Slack-style notifications on build failures, and tracks work item transitions.
var express = require("express");
var crypto = require("crypto");
var https = require("https");
var EventEmitter = require("events");
// ---- Configuration ----
var CONFIG = {
port: process.env.PORT || 3000,
webhookSecret: process.env.WEBHOOK_SECRET || "",
azurePat: process.env.AZURE_PAT || "",
azureOrg: process.env.AZURE_ORG || "",
slackWebhookUrl: process.env.SLACK_WEBHOOK_URL || "",
expectedProjectId: process.env.AZURE_PROJECT_ID || ""
};
// ---- Event Bus ----
function EventBus() {
this.emitter = new EventEmitter();
this.emitter.setMaxListeners(50);
}
EventBus.prototype.on = function(event, handler) {
this.emitter.on(event, handler);
};
EventBus.prototype.emit = function(event, data) {
this.emitter.emit(event, data);
};
var bus = new EventBus();
// ---- Deduplication ----
var processedEvents = new Map();
function isDuplicate(notificationId) {
if (!notificationId) return false;
if (processedEvents.has(notificationId)) return true;
processedEvents.set(notificationId, Date.now());
return false;
}
setInterval(function() {
var cutoff = Date.now() - 600000;
processedEvents.forEach(function(ts, key) {
if (ts < cutoff) processedEvents.delete(key);
});
}, 60000);
// ---- Security ----
function verifySignature(req) {
if (!CONFIG.webhookSecret) return true;
var sig = req.headers["x-webhook-secret"];
if (!sig) return false;
var a = Buffer.from(sig, "utf8");
var b = Buffer.from(CONFIG.webhookSecret, "utf8");
if (a.length !== b.length) return false; // timingSafeEqual throws on length mismatch
return crypto.timingSafeEqual(a, b);
}
// ---- Azure DevOps API Helper ----
function azureApi(method, path, body) {
var url = new URL("https://dev.azure.com/" + CONFIG.azureOrg + path);
var auth = "Basic " + Buffer.from(":" + CONFIG.azurePat).toString("base64");
return new Promise(function(resolve, reject) {
var options = {
hostname: url.hostname,
path: url.pathname + url.search,
method: method,
headers: {
"Content-Type": "application/json",
"Authorization": auth
}
};
var req = https.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() {
if (res.statusCode >= 200 && res.statusCode < 300) {
resolve(data ? JSON.parse(data) : null);
} else {
reject(new Error("Azure API " + res.statusCode + ": " + data));
}
});
});
req.on("error", reject);
if (body) req.write(JSON.stringify(body));
req.end();
});
}
// ---- Notification Helper ----
function sendSlackNotification(message) {
if (!CONFIG.slackWebhookUrl) {
console.log("[NOTIFICATION]", message.text);
return Promise.resolve();
}
return new Promise(function(resolve, reject) {
var url = new URL(CONFIG.slackWebhookUrl);
var body = JSON.stringify(message);
var options = {
hostname: url.hostname,
path: url.pathname,
method: "POST",
headers: {
"Content-Type": "application/json",
"Content-Length": Buffer.byteLength(body)
}
};
var req = https.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() { resolve(data); });
});
req.on("error", reject);
req.write(body);
req.end();
});
}
// ---- Metrics ----
var metrics = {
received: 0,
processed: 0,
errors: 0,
lastEvent: null,
byType: {}
};
// ---- Event Handlers ----
// Auto-assign reviewers when a PR is created
bus.on("git.pullrequest.created", function(payload) {
var pr = payload.resource;
var repo = pr.repository.name;
var authorId = pr.createdBy.id;
var projectName = pr.repository.project.name;
var prId = pr.pullRequestId;
var repoId = pr.repository.id;
var reviewerPool = {
"api-service": ["reviewer-guid-1", "reviewer-guid-2", "reviewer-guid-3"],
"web-app": ["reviewer-guid-4", "reviewer-guid-5"],
"infrastructure": ["reviewer-guid-6", "reviewer-guid-7"]
};
var candidates = reviewerPool[repo] || [];
var reviewers = candidates.filter(function(id) { return id !== authorId; });
if (reviewers.length === 0) {
console.log("No reviewers configured for repo:", repo);
return;
}
// Pick two reviewers at random. A Fisher-Yates shuffle avoids the bias
// of sorting with a random comparator.
var pool = reviewers.slice();
for (var i = pool.length - 1; i > 0; i--) {
var j = Math.floor(Math.random() * (i + 1));
var tmp = pool[i]; pool[i] = pool[j]; pool[j] = tmp;
}
var selected = pool.slice(0, 2);
var assignments = selected.map(function(reviewerId) {
var path = "/" + projectName + "/_apis/git/repositories/" + repoId +
"/pullRequests/" + prId + "/reviewers/" + reviewerId + "?api-version=7.1";
return azureApi("PUT", path, { vote: 0 });
});
Promise.all(assignments)
.then(function() {
console.log("Assigned", selected.length, "reviewers to PR #" + prId);
})
.catch(function(err) {
console.error("Failed to assign reviewers:", err.message);
});
});
// Notify on build failures
bus.on("build.complete", function(payload) {
var build = payload.resource;
if (build.result !== "failed") return;
var definition = build.definition.name;
var branch = build.sourceBranch.replace("refs/heads/", "");
var requestedBy = build.requestedFor.displayName;
var buildUrl = build._links.web.href;
sendSlackNotification({
text: "Build Failed: *" + definition + "* on `" + branch + "`",
blocks: [
{
type: "section",
text: {
type: "mrkdwn",
text: ":x: *Build Failed*\n" +
"*Pipeline:* " + definition + "\n" +
"*Branch:* `" + branch + "`\n" +
"*Triggered by:* " + requestedBy + "\n" +
"<" + buildUrl + "|View Build>"
}
}
]
}).catch(function(err) {
console.error("Slack notification failed:", err.message);
});
});
// Track work item state transitions for dashboard
bus.on("workitem.updated", function(payload) {
var resource = payload.resource;
var changedFields = resource.fields || {};
if (!changedFields["System.State"]) return;
var workItemId = resource.workItemId || resource.id;
var oldState = changedFields["System.State"].oldValue;
var newState = changedFields["System.State"].newValue;
var title = resource.revision.fields["System.Title"];
var itemType = resource.revision.fields["System.WorkItemType"];
console.log("[STATE CHANGE]", itemType, "#" + workItemId, ":", oldState, "->", newState);
// Update dashboard metrics
metrics.byType["state-change:" + oldState + "->" + newState] =
(metrics.byType["state-change:" + oldState + "->" + newState] || 0) + 1;
// Notify when bugs are resolved
if (itemType === "Bug" && newState === "Resolved") {
sendSlackNotification({
text: "Bug #" + workItemId + " resolved: " + title
});
}
});
// Log code pushes
bus.on("git.push", function(payload) {
var push = payload.resource;
var pushedBy = push.pushedBy.displayName;
var repo = push.repository.name;
var commits = push.commits || [];
console.log("[PUSH]", pushedBy, "pushed", commits.length, "commit(s) to", repo);
commits.forEach(function(commit) {
console.log(" -", commit.comment);
});
});
// ---- Express App ----
var app = express();
app.use(express.json({ limit: "1mb" }));
app.post("/webhooks/azure-devops", function(req, res) {
if (!verifySignature(req)) {
metrics.errors++;
return res.status(401).json({ error: "Unauthorized" });
}
var payload = req.body;
if (!payload.eventType) {
return res.status(400).json({ error: "Missing eventType" });
}
if (CONFIG.expectedProjectId && !validateProject(payload)) {
return res.status(403).json({ error: "Unknown project" });
}
if (isDuplicate(payload.notificationId)) {
return res.status(200).json({ received: true, duplicate: true });
}
metrics.received++;
metrics.lastEvent = new Date().toISOString();
metrics.byType[payload.eventType] = (metrics.byType[payload.eventType] || 0) + 1;
// Respond immediately
res.status(200).json({ received: true });
// Process asynchronously
try {
bus.emit(payload.eventType, payload);
metrics.processed++;
} catch (err) {
metrics.errors++;
console.error("Handler error:", payload.eventType, err.message);
}
});
function validateProject(payload) {
  if (!payload.resourceContainers || !payload.resourceContainers.project) {
    return false;
  }
  return payload.resourceContainers.project.id === CONFIG.expectedProjectId;
}
app.get("/webhooks/health", function(req, res) {
  var now = Date.now();
  var lastTime = metrics.lastEvent ? new Date(metrics.lastEvent).getTime() : 0;
  var minsSinceLast = lastTime ? Math.floor((now - lastTime) / 60000) : -1;

  var status = "healthy";
  if (minsSinceLast > 60) status = "degraded";
  if (minsSinceLast > 240 || minsSinceLast === -1) status = "unhealthy";

  res.json({
    status: status,
    uptime: Math.floor(process.uptime()),
    metrics: metrics,
    minutesSinceLastEvent: minsSinceLast
  });
});

app.get("/webhooks/metrics", function(req, res) {
  res.json(metrics);
});
app.listen(CONFIG.port, function() {
  console.log("Azure DevOps webhook server listening on port " + CONFIG.port);
  console.log("Webhook endpoint: POST /webhooks/azure-devops");
  console.log("Health check: GET /webhooks/health");
});
Save this as server.js, install express, and run it. For local development, use ngrok to expose it publicly:
npm init -y
npm install express
node server.js
In another terminal:
ngrok http 3000
Use the ngrok URL when creating your service hook subscription in Azure DevOps.
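The subscription itself can also be created programmatically instead of through the UI. Below is a minimal sketch against the service hooks REST API; it assumes Node 18+ for the global fetch, and the organization name, PAT, and api-version value are placeholders you should verify against your own organization:

```javascript
// Sketch: create a work-item-updated subscription via the REST API.
// ORG, PROJECT_ID, WEBHOOK_URL, and PAT are placeholders you must supply;
// the api-version value is an assumption, so check it for your org.
function buildSubscriptionBody(projectId, webhookUrl) {
  return {
    publisherId: "tfs",
    eventType: "workitem.updated",
    resourceVersion: "1.0",
    consumerId: "webHooks",
    consumerActionId: "httpRequest",
    publisherInputs: { projectId: projectId },
    consumerInputs: {
      url: webhookUrl,
      // The default is "minimal", which strips most resource fields.
      resourceDetailsToSend: "all"
    }
  };
}

async function createSubscription(org, pat, body) {
  var res = await fetch(
    "https://dev.azure.com/" + org + "/_apis/hooks/subscriptions?api-version=7.1",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": "Basic " + Buffer.from(":" + pat).toString("base64")
      },
      body: JSON.stringify(body)
    }
  );
  if (!res.ok) throw new Error("Subscription create failed: " + res.status);
  return res.json();
}
```

Note that consumerInputs carries both the webhook URL and the resourceDetailsToSend setting, so scripting creation this way makes the "all" setting repeatable instead of a checkbox someone forgets.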
Common Issues and Troubleshooting
Subscription enters probation immediately after creation
This usually means your endpoint is not reachable from Azure DevOps. Check that your URL is publicly accessible, your firewall allows inbound traffic on the correct port, and your server is actually running. Azure DevOps sends a test notification when you create a subscription — if that fails, it goes straight to probation. Use the "Test" button in the service hooks UI to diagnose.
Payloads arrive with missing resource details
When creating a subscription, you must set resourceDetailsToSend to "all" in the consumer inputs. The default is "minimal", which strips most fields from the resource object. This is the most common misconfiguration and results in handlers receiving nearly empty payloads. Through the UI, ensure the "Resource details to send" dropdown is set to "All."
Duplicate events causing duplicate actions
Azure DevOps may send the same event multiple times due to retries or internal reprocessing. Always implement deduplication based on notificationId. For critical operations like assigning reviewers or sending notifications, check whether the action has already been taken before executing it. An idempotency layer is not optional in production.
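One way to sketch that idempotency layer is a guard keyed on notificationId plus an action name. The Map here is an in-memory stand-in; in production, back it with Redis or a database so the guard survives restarts and works across multiple server instances:

```javascript
// Sketch: an idempotency guard keyed on notificationId + action name.
// In-memory only; swap the Map for Redis or a database in production.
var completedActions = new Map();

function runOnce(notificationId, actionName, fn) {
  var key = notificationId + ":" + actionName;
  if (completedActions.has(key)) {
    return { skipped: true }; // retry or duplicate delivery, already handled
  }
  completedActions.set(key, Date.now());
  fn();
  return { skipped: false };
}
```

A handler would call runOnce(payload.notificationId, "assign-reviewer", ...) so a redelivered event cannot assign reviewers twice. Note the guard marks the action before running it; move the set after fn() if you would rather re-run on failure.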
Webhook URL changes break existing subscriptions
If you move your server to a new domain or change the endpoint path, existing subscriptions keep pointing to the old URL. There is no bulk-update feature in the UI. Script the migration using the REST API — list all subscriptions, update each one's consumerInputs.url, and verify they are back in "enabled" status. Keep your webhook URL behind a stable domain or API gateway to avoid this entirely.
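A migration script along those lines might look like the sketch below. The organization name, PAT, and api-version value are assumptions to verify for your setup; the PUT call replaces each subscription in place, and you should re-check each subscription's status afterwards:

```javascript
// Sketch: bulk-migrate subscription URLs from an old base to a new one.
// ORG and PAT are placeholders; api-version is an assumption to verify.
function rewriteUrl(url, oldBase, newBase) {
  // Returns the migrated URL, or null when the subscription is not ours.
  return url && url.indexOf(oldBase) === 0
    ? newBase + url.slice(oldBase.length)
    : null;
}

async function migrateSubscriptions(org, pat, oldBase, newBase) {
  var auth = "Basic " + Buffer.from(":" + pat).toString("base64");
  var base = "https://dev.azure.com/" + org + "/_apis/hooks/subscriptions";
  var list = await (await fetch(base + "?api-version=7.1", {
    headers: { "Authorization": auth }
  })).json();

  for (var sub of list.value || []) {
    var next = rewriteUrl(sub.consumerInputs && sub.consumerInputs.url, oldBase, newBase);
    if (!next) continue; // points somewhere else, leave it alone
    sub.consumerInputs.url = next;
    await fetch(base + "/" + sub.id + "?api-version=7.1", {
      method: "PUT",
      headers: { "Authorization": auth, "Content-Type": "application/json" },
      body: JSON.stringify(sub)
    });
  }
}
```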
Timeouts on complex handler logic
If your handler makes multiple downstream API calls before responding, you will hit the 20-second timeout. Always respond with 200 first, then process. If you are seeing timeout errors in the subscription diagnostics but your server is running fine, this is almost certainly the cause. Move to the queue-based pattern for anything non-trivial.
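At its simplest, the queue-based pattern is an in-memory array drained off the event loop. This sketch is illustrative only; a production setup would use a durable queue (Azure Storage Queues, RabbitMQ, and so on) so events survive a crash:

```javascript
// Sketch: decouple reception from processing with an in-memory queue.
// Caveat: anything still in this array is lost if the process restarts.
var queue = [];
var draining = false;

function enqueue(payload, handler) {
  queue.push({ payload: payload, handler: handler });
  if (!draining) drain();
}

function drain() {
  draining = true;
  setImmediate(function step() {
    var job = queue.shift();
    if (!job) {
      draining = false;
      return;
    }
    try {
      job.handler(job.payload);
    } catch (err) {
      console.error("Handler failed:", err.message);
    }
    setImmediate(step); // yield between jobs so the HTTP server stays responsive
  });
}
```

The webhook route then calls enqueue(payload, processEvent) right after sending its 200, so processing can never hold up the response.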
HTTP vs HTTPS endpoint rejection
Azure DevOps requires HTTPS for webhook URLs in production. Self-signed certificates are not accepted. Use a valid TLS certificate from Let's Encrypt or your cloud provider. During local development, ngrok handles this for you automatically.
Best Practices
- Respond before processing. Return a 200 status immediately and handle the event asynchronously. Never let business logic block the HTTP response.
- Implement idempotency from day one. Deduplicate on notificationId and design handlers so that processing the same event twice produces the same result. This is not a nice-to-have; retries will happen.
- Use the REST API to manage subscriptions. The UI is fine for prototyping but does not scale. Script your subscription creation so it is repeatable and auditable. Store subscription IDs in your configuration.
- Set resourceDetailsToSend to "all". The minimal payload format is almost never useful. You need the full resource object to make decisions in your handlers.
- Monitor subscription health. Build an automated check that queries the subscriptions API and alerts when any subscription enters probation. A dead webhook is silent — you will not know it is broken until someone notices the automation has stopped.
- Separate reception from processing. For any non-trivial workload, put a queue between your HTTP receiver and your event handlers. This decouples availability from processing capacity and gives you natural backpressure.
- Secure your endpoint. Use a shared secret, validate the resourceContainers project ID, rate-limit inbound requests, and restrict by IP range where possible. Treat your webhook endpoint with the same security rigor as any other API.
- Log every received event. Even if you do not process an event type yet, log it. You will want this when debugging why a handler did not fire or when investigating what events are available for new automation.
- Keep handler logic focused. Each handler should do one thing. If a PR creation needs to assign reviewers, add labels, and post a comment, write three separate handlers registered on the same event. This makes testing and debugging dramatically easier.
- Test with real payloads. Azure DevOps includes sample payloads in the service hooks documentation. Capture actual payloads from your dev environment and use them as test fixtures. Synthetic payloads inevitably miss edge cases in the real data.
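Following on from the subscription-health bullet above, the automated check can be sketched like this. It assumes Node 18+ for global fetch, and the organization name, PAT, and api-version value are placeholders; known subscription status values include "onProbation" and "disabledBySystem":

```javascript
// Sketch: alert when any subscription has left "enabled" status.
function unhealthySubscriptions(subscriptions) {
  return (subscriptions || []).filter(function(sub) {
    return sub.status !== "enabled"; // e.g. "onProbation", "disabledBySystem"
  });
}

async function checkSubscriptionHealth(org, pat, alert) {
  // ORG, PAT, and the api-version value are placeholders to verify.
  var res = await fetch(
    "https://dev.azure.com/" + org + "/_apis/hooks/subscriptions?api-version=7.1",
    { headers: { "Authorization": "Basic " + Buffer.from(":" + pat).toString("base64") } }
  );
  var body = await res.json();
  unhealthySubscriptions(body.value).forEach(function(sub) {
    alert("Subscription " + sub.id + " (" + sub.eventType + ") is " + sub.status);
  });
}
```

Run checkSubscriptionHealth on a schedule (a cron job or timer) and wire the alert callback to Slack or your paging system, so a dead webhook stops being silent.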