Building Custom Service Hooks

Build custom Azure DevOps service hook consumers for event routing, analytics integration, and automated workflow triggers

Overview

Service hooks in Azure DevOps let you react to events like code pushes, build completions, and work item changes by sending notifications to external services. The built-in consumers cover common targets like Slack and Jenkins, but real-world integration needs almost always outgrow them. This article covers building custom service hook consumers from scratch, including extension-based consumers, webhook processing in Node.js, subscription management via REST API, and production-grade error handling patterns.

Prerequisites

  • An Azure DevOps organization with Project Collection Administrator access
  • Node.js v16 or later installed locally
  • Basic familiarity with the Azure DevOps REST API and personal access tokens (PATs)
  • A working understanding of Express.js for building webhook endpoints
  • The tfx-cli tool installed globally for extension packaging (npm install -g tfx-cli)

Service Hooks Architecture

Azure DevOps service hooks operate on a publisher-subscriber model. The platform itself is the publisher. It emits events when things happen inside your project: code gets pushed, a pull request opens, a build completes, a work item transitions. Consumers are the external targets that receive and act on those events.

The architecture has three layers:

  1. Publishers - Azure DevOps services that emit events (Git, Build, Release, Work Item Tracking, Pipelines)
  2. Subscriptions - Configuration objects that bind a specific event type to a specific consumer action, with optional filters
  3. Consumers - External services or endpoints that receive event payloads and act on them

When an event fires, the subscription engine evaluates all active subscriptions against the event payload. If a subscription matches, the engine formats the payload according to the consumer's expectations and delivers it. This evaluation happens synchronously within the Azure DevOps event pipeline, which is why poorly configured subscriptions can introduce latency into your workflows.
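
To make the payload concrete, here is a trimmed git.push event envelope. The values are illustrative placeholders and real payloads carry far more detail under resource, but these top-level fields are the ones the processing code later in this article reads:

{
  "id": "6872ee8c-b333-4eff-bfb9-0d5274943566",
  "eventType": "git.push",
  "publisherId": "tfs",
  "resourceVersion": "1.0",
  "createdDate": "2024-05-02T19:22:15.123Z",
  "resource": {
    "refUpdates": [{ "name": "refs/heads/main" }],
    "repository": { "id": "repo-guid", "name": "my-repo" },
    "pushedBy": { "displayName": "Jane Developer" },
    "commits": [{ "commitId": "commit-sha", "comment": "Fix build script" }]
  },
  "resourceContainers": {
    "project": { "id": "project-guid" },
    "account": { "id": "org-guid" }
  }
}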

Each subscription stores its own delivery state. Azure DevOps tracks whether the last delivery succeeded or failed, how many consecutive failures have occurred, and when the subscription was last triggered. This per-subscription state is critical for monitoring integration health at scale.
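
A quick way to inspect that state is to fetch a single subscription by ID. The sketch below reads the same fields (status, lastProbationRetryDate, modifiedDate) that the monitoring routine later in this article relies on; it assumes a PAT with permission to read service hook subscriptions.

// Sketch: inspect a subscription's delivery state via GET _apis/hooks/subscriptions/{id}
var https = require("https");

function getSubscription(orgUrl, pat, subscriptionId, callback) {
  var auth = Buffer.from(":" + pat).toString("base64");
  var url = new URL(
    orgUrl + "/_apis/hooks/subscriptions/" + subscriptionId + "?api-version=7.1"
  );

  https.get({
    hostname: url.hostname,
    path: url.pathname + url.search,
    headers: { "Authorization": "Basic " + auth }
  }, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      var sub = JSON.parse(data);
      console.log("status:", sub.status);
      console.log("last probation retry:", sub.lastProbationRetryDate);
      console.log("last modified:", sub.modifiedDate);
      callback(null, sub);
    });
  }).on("error", callback);
}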


Built-in vs Custom Consumers

Azure DevOps ships with consumers for common services: Slack, Microsoft Teams, Jenkins, Trello, Zendesk, Azure Service Bus, Azure Storage Queue, and generic webhooks. These cover maybe sixty percent of integration needs. The other forty percent is where custom consumers come in.

Built-in consumers have fixed input schemas. The Slack consumer expects a webhook URL and a channel name. The Jenkins consumer expects a server URL and a build token. You cannot add fields, change the payload format, or inject custom transformation logic. If your analytics platform expects events in a specific envelope format with custom headers and authentication, the generic webhook consumer gets you partway there but forces you to stand up a translation layer.

Custom consumers solve this by letting you define your own input schema, payload transformation, and delivery mechanism as an Azure DevOps extension. The consumer appears in the service hooks configuration UI alongside the built-in options, and project administrators can create subscriptions to it without writing code.

The decision matrix is straightforward:

  • Use built-in consumers when the target service has native support and you do not need custom payload shaping
  • Use the generic webhook consumer when you control the receiving endpoint and can accept raw Azure DevOps event payloads
  • Build a custom consumer extension when you need a polished UI experience, custom input fields, payload transformation at the subscription level, or distribution to other organizations via the marketplace

Event Types and Filters

Azure DevOps publishes events across several categories. Understanding the event taxonomy is essential before building consumers because your subscription filters determine which events reach your consumer.

Core Event Categories

Git Events:

  • git.push - Code pushed to a repository
  • git.pullrequest.created - Pull request created
  • git.pullrequest.updated - Pull request updated (reviewers, status, merge status)
  • git.pullrequest.merged - Pull request completed via merge

Build Events:

  • build.complete - Build finished (succeeded, failed, or partially succeeded)
  • ms.vss-pipelines.run-state-changed-event - Pipeline run state change

Work Item Events:

  • workitem.created - Work item created
  • workitem.updated - Work item field changed
  • workitem.deleted - Work item deleted
  • workitem.commented - Comment added to work item

Release Events:

  • ms.vss-release.release-created-event - Release created
  • ms.vss-release.deployment-completed-event - Deployment finished
  • ms.vss-release.deployment-approval-pending-event - Approval gate waiting

Subscription Filters

Every subscription can include filters that narrow which events trigger delivery. For a build.complete event, you can filter by build definition, build status, or branch. For git.push, you can filter by repository.

// Example: subscription payload filtered to failed builds of one definition
// (send it with the createSubscription helper shown in the next section)

var subscriptionPayload = {
  publisherId: "tfs",
  eventType: "build.complete",
  resourceVersion: "1.0",
  consumerId: "webHooks",
  consumerActionId: "httpRequest",
  publisherInputs: {
    buildStatus: "failed",
    definitionName: "production-api",
    projectId: "your-project-id"
  },
  consumerInputs: {
    url: "https://analytics.example.com/hooks/azure-devops",
    httpHeaders: "X-Hook-Secret:your-secret-here",
    resourceDetailsToSend: "all",
    messagesToSend: "all",
    detailedMessagesToSend: "all"
  }
};

Filters reduce noise dramatically. Without them, a busy repository with fifty builds per day floods your consumer with events you do not care about. Always filter to the narrowest scope that serves your use case.


Service Hook Subscription Management via REST API

The Azure DevOps REST API provides full CRUD operations for service hook subscriptions. This is how you automate subscription management across projects and organizations.

Creating Subscriptions Programmatically

var https = require("https");

function createSubscription(orgUrl, pat, subscription, callback) {
  var auth = Buffer.from(":" + pat).toString("base64");
  var body = JSON.stringify(subscription);
  var url = new URL(orgUrl + "/_apis/hooks/subscriptions?api-version=7.1");

  var options = {
    hostname: url.hostname,
    path: url.pathname + url.search,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Basic " + auth,
      "Content-Length": Buffer.byteLength(body)
    }
  };

  var req = https.request(options, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      if (res.statusCode >= 200 && res.statusCode < 300) {
        callback(null, JSON.parse(data));
      } else {
        callback(new Error("HTTP " + res.statusCode + ": " + data));
      }
    });
  });

  req.on("error", callback);
  req.write(body);
  req.end();
}

// Usage
var subscription = {
  publisherId: "tfs",
  eventType: "git.push",
  consumerId: "webHooks",
  consumerActionId: "httpRequest",
  publisherInputs: {
    projectId: "my-project-id",
    repository: "my-repo-id"
  },
  consumerInputs: {
    url: "https://my-service.example.com/hooks/push",
    httpHeaders: "Authorization:Bearer my-token"
  }
};

createSubscription(
  "https://dev.azure.com/my-org",
  "my-project-id",
  process.env.AZURE_DEVOPS_PAT,
  subscription,
  function(err, result) {
    if (err) {
      console.error("Failed to create subscription:", err.message);
      return;
    }
    console.log("Subscription created:", result.id);
  }
);

Listing and Querying Subscriptions

var https = require("https");

function listSubscriptions(orgUrl, pat, callback) {
  var auth = Buffer.from(":" + pat).toString("base64");
  var url = new URL(orgUrl + "/_apis/hooks/subscriptions?api-version=7.1");

  var options = {
    hostname: url.hostname,
    path: url.pathname + url.search,
    method: "GET",
    headers: {
      "Authorization": "Basic " + auth
    }
  };

  var req = https.request(options, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      var parsed = JSON.parse(data);
      callback(null, parsed.value || []);
    });
  });

  req.on("error", callback);
  req.end();
}

// List all and filter by event type
listSubscriptions(
  "https://dev.azure.com/my-org",
  process.env.AZURE_DEVOPS_PAT,
  function(err, subscriptions) {
    if (err) return console.error(err);

    var buildSubs = subscriptions.filter(function(s) {
      return s.eventType === "build.complete";
    });

    console.log("Build subscriptions:", buildSubs.length);
    buildSubs.forEach(function(s) {
      console.log(" -", s.id, s.status, s.consumerInputs.url);
    });
  }
);

Deleting Stale Subscriptions

Subscriptions accumulate over time, especially in organizations with frequent project turnover. Dead subscriptions waste evaluation cycles and generate failed delivery noise. Periodically audit and prune them.

var https = require("https");

function deleteSubscription(orgUrl, pat, subscriptionId, callback) {
  var auth = Buffer.from(":" + pat).toString("base64");
  var url = new URL(
    orgUrl + "/_apis/hooks/subscriptions/" + subscriptionId + "?api-version=7.1"
  );

  var options = {
    hostname: url.hostname,
    path: url.pathname + url.search,
    method: "DELETE",
    headers: {
      "Authorization": "Basic " + auth
    }
  };

  var req = https.request(options, function(res) {
    callback(res.statusCode === 204 ? null : new Error("HTTP " + res.statusCode));
  });

  req.on("error", callback);
  req.end();
}
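
A small audit script can combine the two helpers above. This is a sketch: it treats any status string beginning with "disabled" as prunable, which you may want to narrow for your organization, and it only deletes when PRUNE=1 is set in the environment.

// Sketch: list disabled subscriptions and optionally prune them.
// Assumes the listSubscriptions and deleteSubscription helpers defined above.
var ORG_URL = "https://dev.azure.com/my-org";
var PAT = process.env.AZURE_DEVOPS_PAT;

listSubscriptions(ORG_URL, PAT, function(err, subscriptions) {
  if (err) return console.error("List failed:", err.message);

  var disabled = subscriptions.filter(function(s) {
    return String(s.status).indexOf("disabled") === 0;
  });

  console.log("Found %d disabled subscriptions", disabled.length);

  disabled.forEach(function(s) {
    console.log(" -", s.id, s.eventType, s.status);
    if (process.env.PRUNE === "1") {
      deleteSubscription(ORG_URL, PAT, s.id, function(delErr) {
        console.log(delErr ? "   delete failed: " + delErr.message : "   deleted");
      });
    }
  });
});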

Webhook vs Service Bus vs Storage Queue Consumers

The three main delivery mechanisms each serve different scaling profiles.

Webhooks are the simplest option. Azure DevOps makes an HTTP POST to your endpoint with the event payload as JSON. Your endpoint must respond within 25 seconds or the delivery is marked as failed. This works well for low-to-medium volume scenarios where you control the receiving server and can guarantee availability.

Azure Service Bus decouples the event producer from the consumer. Azure DevOps drops messages onto a Service Bus topic or queue, and your consumer pulls them at its own pace. This is the right choice when your consumer needs to process events asynchronously, when you cannot guarantee endpoint availability, or when you need to fan out a single event to multiple downstream processors.

Azure Storage Queue is the cheapest option for high-volume, order-insensitive event processing. Storage queues handle millions of messages per day at minimal cost. The tradeoff is weaker delivery guarantees and no built-in dead letter queue. Use this when you are doing bulk analytics ingestion and can tolerate occasional duplicate processing.

My recommendation: start with webhooks for development and low-volume production. Move to Service Bus when you hit reliability or scaling constraints. Storage queues are for cost-optimized batch scenarios.
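
For the Service Bus path specifically, the pull side looks roughly like the sketch below. It assumes the @azure/service-bus v7 SDK, a queue named devops-events, and that the event arrives as JSON in the message body; verify those details against your own subscription configuration.

// Sketch: consume Azure DevOps events from a Service Bus queue (@azure/service-bus v7)
var ServiceBusClient = require("@azure/service-bus").ServiceBusClient;

var sbClient = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION_STRING);
var receiver = sbClient.createReceiver(process.env.SERVICE_BUS_QUEUE || "devops-events");

receiver.subscribe({
  processMessage: function(message) {
    // The service hook event rides in the message body; parse it if it arrives as a string
    var event = typeof message.body === "string" ? JSON.parse(message.body) : message.body;
    console.log("Received %s event %s", event.eventType, event.id);
    return Promise.resolve(); // resolving completes (removes) the message
  },
  processError: function(args) {
    console.error("Service Bus receive error:", args.error.message);
    return Promise.resolve();
  }
});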


Event Processing Patterns

Webhook Receiver with Express.js

var express = require("express");
var crypto = require("crypto");

var app = express();
app.use(express.json({ limit: "1mb" }));

// Signature verification middleware (constant-time comparison via crypto.timingSafeEqual)
function verifyHookSignature(secret) {
  var expected = Buffer.from(secret || "");
  return function(req, res, next) {
    var provided = Buffer.from(req.headers["x-hook-secret"] || "");
    if (!secret || provided.length !== expected.length ||
        !crypto.timingSafeEqual(provided, expected)) {
      console.warn("Invalid hook signature from", req.ip);
      return res.status(401).json({ error: "Invalid signature" });
    }
    next();
  };
}

// Event router
var eventHandlers = {};

function onEvent(eventType, handler) {
  if (!eventHandlers[eventType]) {
    eventHandlers[eventType] = [];
  }
  eventHandlers[eventType].push(handler);
}

function routeEvent(event) {
  var eventType = event.eventType;
  var handlers = eventHandlers[eventType] || [];

  if (handlers.length === 0) {
    console.log("No handlers for event type:", eventType);
    return Promise.resolve();
  }

  var promises = handlers.map(function(handler) {
    return new Promise(function(resolve) {
      try {
        var result = handler(event);
        if (result && typeof result.then === "function") {
          result.then(resolve).catch(function(err) {
            console.error("Handler error for", eventType, err.message);
            resolve();
          });
        } else {
          resolve();
        }
      } catch (err) {
        console.error("Handler threw for", eventType, err.message);
        resolve();
      }
    });
  });

  return Promise.all(promises);
}

// Register event handlers
onEvent("build.complete", function(event) {
  var resource = event.resource;
  console.log(
    "Build %s finished with status %s (duration: %dms)",
    resource.definition.name,
    resource.status,
    new Date(resource.finishTime) - new Date(resource.startTime)
  );
});

onEvent("git.push", function(event) {
  var resource = event.resource;
  var commits = resource.commits || [];
  console.log(
    "Push to %s/%s: %d commits by %s",
    resource.repository.name,
    resource.refUpdates[0].name,
    commits.length,
    resource.pushedBy.displayName
  );
});

onEvent("workitem.updated", function(event) {
  var resource = event.resource;
  var fields = resource.revision.fields;
  console.log(
    "Work item %d updated: %s",
    resource.workItemId,
    fields["System.Title"]
  );
});

// Webhook endpoint
app.post(
  "/hooks/azure-devops",
  verifyHookSignature(process.env.HOOK_SECRET),
  function(req, res) {
    var event = req.body;

    // Respond immediately - process asynchronously
    res.status(200).json({ accepted: true });

    routeEvent(event).catch(function(err) {
      console.error("Event routing failed:", err.message);
    });
  }
);

app.listen(process.env.PORT || 3000, function() {
  console.log("Service hook receiver listening on port", process.env.PORT || 3000);
});

Idempotent Event Processing

Azure DevOps may deliver the same event more than once, especially after transient failures. Your consumer must be idempotent. The simplest approach is to track processed event IDs.

var processedEvents = new Map();
var MAX_CACHE_SIZE = 10000;

function isProcessed(eventId) {
  return processedEvents.has(eventId);
}

function markProcessed(eventId) {
  if (processedEvents.size >= MAX_CACHE_SIZE) {
    // Evict oldest entries
    var keys = Array.from(processedEvents.keys());
    for (var i = 0; i < 1000; i++) {
      processedEvents.delete(keys[i]);
    }
  }
  processedEvents.set(eventId, Date.now());
}

function processEventIdempotently(event, handler) {
  var eventId = event.id || (event.resource && event.resource.url);

  if (isProcessed(eventId)) {
    console.log("Skipping duplicate event:", eventId);
    return Promise.resolve();
  }

  markProcessed(eventId);
  return handler(event);
}

For production systems, replace the in-memory Map with Redis or your database. The in-memory approach does not survive restarts and does not work across multiple consumer instances.


Error Handling and Retry Logic

Azure DevOps retries failed webhook deliveries automatically, but the retry policy is limited: it retries up to three times with exponential backoff over roughly ten minutes. After that, the subscription is marked as disabled if failures persist.

On your consumer side, you need your own retry logic for downstream operations that fail after accepting the webhook.

function withRetry(operation, options) {
  var maxRetries = (options && options.maxRetries) || 3;
  var baseDelay = (options && options.baseDelay) || 1000;
  var maxDelay = (options && options.maxDelay) || 30000;

  return function retryWrapper() {
    var args = Array.prototype.slice.call(arguments);
    var attempt = 0;

    function tryOnce() {
      attempt++;
      return new Promise(function(resolve, reject) {
        try {
          var result = operation.apply(null, args);
          if (result && typeof result.then === "function") {
            result.then(resolve).catch(reject);
          } else {
            resolve(result);
          }
        } catch (err) {
          reject(err);
        }
      }).catch(function(err) {
        if (attempt >= maxRetries) {
          console.error(
            "Operation failed after %d attempts: %s",
            attempt,
            err.message
          );
          throw err;
        }

        var delay = Math.min(baseDelay * Math.pow(2, attempt - 1), maxDelay);
        var jitter = Math.floor(Math.random() * delay * 0.2);
        var totalDelay = delay + jitter;

        console.warn(
          "Attempt %d failed, retrying in %dms: %s",
          attempt,
          totalDelay,
          err.message
        );

        return new Promise(function(resolve) {
          setTimeout(resolve, totalDelay);
        }).then(tryOnce);
      });
    }

    return tryOnce();
  };
}

// Usage
var sendToAnalytics = withRetry(
  function(event) {
    return postToAnalyticsPlatform(event);
  },
  { maxRetries: 5, baseDelay: 2000 }
);

Dead Letter Queue

When retries are exhausted, you need a dead letter mechanism. Do not silently drop events.

var fs = require("fs");
var path = require("path");

function deadLetter(event, error) {
  var dlqDir = process.env.DLQ_DIR || path.join(__dirname, "dead-letters");

  try {
    if (!fs.existsSync(dlqDir)) {
      fs.mkdirSync(dlqDir, { recursive: true });
    }
  } catch (mkdirErr) {
    console.error("Cannot create DLQ directory:", mkdirErr.message);
    return;
  }

  var filename = (event.id || Date.now()) + ".json";
  var envelope = {
    event: event,
    error: {
      message: error.message,
      stack: error.stack
    },
    timestamp: new Date().toISOString(),
    attempts: event._retryCount || 0
  };

  try {
    fs.writeFileSync(
      path.join(dlqDir, filename),
      JSON.stringify(envelope, null, 2)
    );
    console.error("Event dead-lettered:", filename);
  } catch (writeErr) {
    console.error("Failed to write dead letter:", writeErr.message);
    console.error("Lost event:", JSON.stringify(event));
  }
}
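
Pairing the dead letter writer with a replay script closes the loop. The sketch below assumes a hypothetical callback-style delivery function, sender(event, callback), and removes a file only after a successful re-delivery.

// Sketch: re-drive dead-lettered events through a delivery function.
// "sender(event, callback)" is a hypothetical callback-style delivery helper.
var fs = require("fs");
var path = require("path");

function replayDeadLetters(dlqDir, sender) {
  var files;
  try {
    files = fs.readdirSync(dlqDir);
  } catch (err) {
    return console.error("Cannot read DLQ directory:", err.message);
  }

  files.forEach(function(file) {
    var fullPath = path.join(dlqDir, file);
    var envelope = JSON.parse(fs.readFileSync(fullPath, "utf8"));

    sender(envelope.event, function(err) {
      if (err) {
        console.warn("Replay failed for %s: %s", file, err.message);
        return;
      }
      fs.unlinkSync(fullPath); // remove only after successful re-delivery
      console.log("Replayed and removed", file);
    });
  });
}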

Monitoring Service Hook Health

Azure DevOps exposes subscription health through the REST API. Build a monitoring routine that checks for disabled or failing subscriptions.

function checkSubscriptionHealth(orgUrl, pat, callback) {
  listSubscriptions(orgUrl, pat, function(err, subscriptions) {
    if (err) return callback(err);

    var report = {
      total: subscriptions.length,
      enabled: 0,
      disabled: 0,
      failing: [],
      stale: []
    };

    var now = new Date();
    var thirtyDaysAgo = new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000);

    subscriptions.forEach(function(sub) {
      // "onProbation" means the subscription is still enabled but has recent delivery failures
      if (sub.status === "enabled" || sub.status === "onProbation") {
        report.enabled++;
      } else {
        report.disabled++;
      }

      // Check for repeated failures
      if (sub.lastProbationRetryDate) {
        report.failing.push({
          id: sub.id,
          eventType: sub.eventType,
          consumer: sub.consumerId,
          lastFailure: sub.lastProbationRetryDate
        });
      }

      // Check for stale subscriptions (not modified in 30 days; modifiedDate is the
      // closest proxy the subscription object itself exposes for recent activity)
      var lastModified = sub.modifiedDate ? new Date(sub.modifiedDate) : null;
      if (lastModified && lastModified < thirtyDaysAgo) {
        report.stale.push({
          id: sub.id,
          eventType: sub.eventType,
          lastActive: sub.modifiedDate
        });
      }
    });

    callback(null, report);
  });
}

// Run health check
checkSubscriptionHealth(
  "https://dev.azure.com/my-org",
  process.env.AZURE_DEVOPS_PAT,
  function(err, report) {
    if (err) return console.error("Health check failed:", err.message);

    console.log("Subscription Health Report");
    console.log("  Total:", report.total);
    console.log("  Enabled:", report.enabled);
    console.log("  Disabled:", report.disabled);
    console.log("  Failing:", report.failing.length);
    console.log("  Stale (30d):", report.stale.length);

    if (report.failing.length > 0) {
      console.log("\nFailing subscriptions:");
      report.failing.forEach(function(f) {
        console.log("  [%s] %s -> %s (last failure: %s)",
          f.id, f.eventType, f.consumer, f.lastFailure);
      });
    }
  }
);

Scaling Service Hook Consumers

Horizontal Scaling Concerns

When you run multiple instances of your webhook receiver behind a load balancer, duplicate deliveries of the same event can land on different instances. This is fine if your processing is idempotent (it should be), but it wastes compute. Use a distributed lock or a shared deduplication store to ensure each event is processed once.

// Simple Redis-based deduplication for multi-instance deployments
var redis = require("redis");

function createDeduplicator(redisUrl) {
  var client = redis.createClient({ url: redisUrl });

  client.on("error", function(err) {
    console.error("Redis dedup error:", err.message);
  });

  client.connect();

  return {
    tryProcess: function(eventId, ttlSeconds, callback) {
      var key = "hook:processed:" + eventId;

      client.set(key, "1", { NX: true, EX: ttlSeconds || 3600 })
        .then(function(result) {
          // result is "OK" if the key was set (first processor), null if already exists
          callback(null, result === "OK");
        })
        .catch(function(err) {
          // On Redis failure, allow processing (fail open)
          console.warn("Redis dedup failed, allowing processing:", err.message);
          callback(null, true);
        });
    }
  };
}
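
Wiring the deduplicator into the webhook route might look like the sketch below; routeEvent is the handler router from the Express example earlier, and the one-hour TTL is an arbitrary choice.

// Sketch: only the instance that wins the Redis SET NX processes the event
var dedup = createDeduplicator(process.env.REDIS_URL);

app.post("/hooks/azure-devops", function(req, res) {
  var event = req.body;
  res.status(200).json({ accepted: true });

  dedup.tryProcess(event.id, 3600, function(err, firstProcessor) {
    if (!firstProcessor) return; // another instance already claimed this event
    routeEvent(event).catch(function(routeErr) {
      console.error("Event routing failed:", routeErr.message);
    });
  });
});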

Backpressure Management

If events arrive faster than you can process them, you need backpressure. A simple bounded queue prevents memory exhaustion.

function createBoundedQueue(maxSize, processor) {
  var queue = [];
  var processing = false;
  var dropped = 0;

  function processNext() {
    if (queue.length === 0) {
      processing = false;
      return;
    }

    processing = true;
    var event = queue.shift();

    processor(event)
      .catch(function(err) {
        console.error("Queue processor error:", err.message);
      })
      .then(function() {
        setImmediate(processNext);
      });
  }

  return {
    enqueue: function(event) {
      if (queue.length >= maxSize) {
        dropped++;
        if (dropped % 100 === 0) {
          console.warn("Event queue full, dropped %d events", dropped);
        }
        return false;
      }
      queue.push(event);
      if (!processing) {
        processNext();
      }
      return true;
    },
    stats: function() {
      return { queued: queue.length, dropped: dropped, processing: processing };
    }
  };
}
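
A usage sketch: cap the queue at 500 pending events, feed it from the webhook route, and acknowledge every delivery so Azure DevOps does not retry. processAnalyticsEvent is a hypothetical promise-returning processor.

// Sketch: put the bounded queue between the webhook route and a slow downstream processor
var eventQueue = createBoundedQueue(500, function(event) {
  return processAnalyticsEvent(event); // hypothetical promise-returning processor
});

app.post("/hooks/azure-devops", function(req, res) {
  var queued = eventQueue.enqueue(req.body);
  // Acknowledge either way; dropped events are visible via stats()
  res.status(200).json({ accepted: true, queued: queued });
});

app.get("/admin/queue-stats", function(req, res) {
  res.json(eventQueue.stats());
});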

Building a Custom Consumer Extension

Custom consumer extensions are Azure DevOps extensions that register new consumers in the service hooks framework. They appear in the service hooks UI alongside built-in consumers and let you define custom input fields, payload transformation, and consumer actions.

Extension Manifest

Create a vss-extension.json file:

{
  "manifestVersion": 1,
  "id": "custom-analytics-hook",
  "version": "1.0.0",
  "name": "Analytics Platform Service Hook",
  "publisher": "your-publisher-id",
  "description": "Send Azure DevOps events to your custom analytics platform",
  "categories": ["Integrate"],
  "targets": [
    {
      "id": "Microsoft.VisualStudio.Services"
    }
  ],
  "contributions": [
    {
      "id": "analytics-consumer",
      "type": "ms.vss-servicehooks.consumer",
      "targets": ["ms.vss-servicehooks.consumers"],
      "properties": {
        "id": "analyticsConsumer",
        "name": "Analytics Platform",
        "description": "Send events to your analytics platform",
        "informationUrl": "https://docs.example.com/azure-devops-integration",
        "inputDescriptors": [
          {
            "id": "apiEndpoint",
            "name": "API Endpoint",
            "description": "The URL of your analytics platform API",
            "inputMode": "textBox",
            "isConfidential": false,
            "validation": {
              "dataType": "uri",
              "isRequired": true
            }
          },
          {
            "id": "apiKey",
            "name": "API Key",
            "description": "Your analytics platform API key",
            "inputMode": "passwordBox",
            "isConfidential": true,
            "validation": {
              "dataType": "string",
              "isRequired": true
            }
          },
          {
            "id": "environment",
            "name": "Environment",
            "description": "Target environment label for event tagging",
            "inputMode": "combo",
            "isConfidential": false,
            "validation": {
              "dataType": "string",
              "isRequired": true
            },
            "values": {
              "possibleValues": [
                { "value": "production", "displayValue": "Production" },
                { "value": "staging", "displayValue": "Staging" },
                { "value": "development", "displayValue": "Development" }
              ],
              "defaultValue": "production"
            }
          }
        ],
        "actions": [
          {
            "id": "sendEvent",
            "name": "Send Event",
            "description": "Send formatted event data to the analytics platform",
            "supportedEventTypes": [
              "git.push",
              "git.pullrequest.created",
              "git.pullrequest.merged",
              "build.complete",
              "workitem.created",
              "workitem.updated"
            ],
            "publishEvent": {
              "url": "{{{apiEndpoint}}}",
              "httpHeaders": "X-API-Key:{{{apiKey}}}\nContent-Type:application/json",
              "resourceDetailsToSend": "all",
              "messagesToSend": "all",
              "detailedMessagesToSend": "all"
            }
          }
        ]
      }
    }
  ],
  "files": []
}

Packaging and Publishing

# Package the extension
tfx extension create --manifest-globs vss-extension.json

# Publish to marketplace (or share privately with your org)
tfx extension publish --manifest-globs vss-extension.json --share-with your-org-name

# Update an existing extension
tfx extension publish --manifest-globs vss-extension.json --rev-version

Once installed, the "Analytics Platform" consumer appears in your project's Service Hooks configuration page. Project admins can create subscriptions, select event types, and fill in the API endpoint, API key, and environment fields without touching code.


Testing Service Hooks Locally

Testing service hooks against a local development server requires tunneling. Azure DevOps cannot reach localhost directly.

Using ngrok for Local Testing

# Start your local webhook receiver
node server.js

# In another terminal, create a tunnel
ngrok http 3000

Use the ngrok HTTPS URL as your webhook endpoint when creating a test subscription. Remember to delete the subscription when you are done; orphaned subscriptions pointing at dead ngrok URLs generate noise in your subscription health reports.

Simulating Events Locally

For unit testing, capture a real event payload and replay it against your handler.

var http = require("http");
var fs = require("fs");

function replayEvent(payloadPath, port) {
  var payload = fs.readFileSync(payloadPath, "utf8");

  var options = {
    hostname: "localhost",
    port: port || 3000,
    path: "/hooks/azure-devops",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Hook-Secret": process.env.HOOK_SECRET || "test-secret",
      "Content-Length": Buffer.byteLength(payload)
    }
  };

  var req = http.request(options, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      console.log("Response:", res.statusCode, data);
    });
  });

  req.on("error", function(err) {
    console.error("Replay failed:", err.message);
  });

  req.write(payload);
  req.end();
}

// Capture a payload from a real webhook call, save it, then replay:
// node replay.js ./test-payloads/build-complete.json
if (require.main === module) {
  var payloadPath = process.argv[2];
  if (!payloadPath) {
    console.error("Usage: node replay.js <payload.json>");
    process.exit(1);
  }
  replayEvent(payloadPath);
}

Complete Working Example

This example ties everything together: a Node.js service that receives Azure DevOps service hook events, transforms them into a normalized analytics format, sends them to a custom analytics platform with retry logic, and includes health monitoring.

// analytics-hook-service.js
var express = require("express");
var https = require("https");
var fs = require("fs");
var path = require("path");

var app = express();
app.use(express.json({ limit: "2mb" }));

var PORT = process.env.PORT || 3000;
var HOOK_SECRET = process.env.HOOK_SECRET;
var ANALYTICS_URL = process.env.ANALYTICS_URL;
var ANALYTICS_KEY = process.env.ANALYTICS_KEY;

// ---- Metrics ----
var metrics = {
  received: 0,
  processed: 0,
  failed: 0,
  duplicates: 0,
  startTime: Date.now()
};

// ---- Deduplication ----
var processedIds = new Map();

function isDuplicate(eventId) {
  if (processedIds.has(eventId)) {
    return true;
  }
  processedIds.set(eventId, Date.now());

  // Evict entries older than one hour once the cache grows past 5000 ids
  if (processedIds.size > 5000) {
    var cutoff = Date.now() - 3600000; // 1 hour
    processedIds.forEach(function(timestamp, key) {
      if (timestamp < cutoff) {
        processedIds.delete(key);
      }
    });
  }
  return false;
}

// ---- Event Transformer ----
function transformEvent(rawEvent) {
  var resource = rawEvent.resource || {};
  var baseEvent = {
    source: "azure-devops",
    eventId: rawEvent.id,
    eventType: rawEvent.eventType,
    timestamp: rawEvent.createdDate || new Date().toISOString(),
    project: (rawEvent.resourceContainers && rawEvent.resourceContainers.project)
      ? rawEvent.resourceContainers.project.id
      : "unknown",
    organization: (rawEvent.resourceContainers && rawEvent.resourceContainers.account)
      ? rawEvent.resourceContainers.account.id
      : "unknown"
  };

  switch (rawEvent.eventType) {
    case "build.complete":
      baseEvent.category = "ci";
      baseEvent.details = {
        buildId: resource.id,
        definition: resource.definition ? resource.definition.name : "unknown",
        status: resource.status,
        result: resource.result,
        requestedBy: resource.requestedBy ? resource.requestedBy.displayName : "unknown",
        startTime: resource.startTime,
        finishTime: resource.finishTime,
        duration: resource.startTime && resource.finishTime
          ? new Date(resource.finishTime) - new Date(resource.startTime)
          : null,
        sourceBranch: resource.sourceBranch
      };
      break;

    case "git.push":
      baseEvent.category = "scm";
      baseEvent.details = {
        repository: resource.repository ? resource.repository.name : "unknown",
        pushedBy: resource.pushedBy ? resource.pushedBy.displayName : "unknown",
        commitCount: resource.commits ? resource.commits.length : 0,
        branch: (resource.refUpdates && resource.refUpdates[0])
          ? resource.refUpdates[0].name
          : "unknown"
      };
      break;

    case "git.pullrequest.created":
    case "git.pullrequest.merged":
      baseEvent.category = "scm";
      baseEvent.details = {
        pullRequestId: resource.pullRequestId,
        title: resource.title,
        createdBy: resource.createdBy ? resource.createdBy.displayName : "unknown",
        repository: resource.repository ? resource.repository.name : "unknown",
        sourceBranch: resource.sourceRefName,
        targetBranch: resource.targetRefName,
        status: resource.status,
        mergeStatus: resource.mergeStatus
      };
      break;

    case "workitem.created":
    case "workitem.updated":
      baseEvent.category = "tracking";
      var fields = (resource.revision && resource.revision.fields) || {};
      baseEvent.details = {
        workItemId: resource.workItemId || resource.id,
        type: fields["System.WorkItemType"],
        title: fields["System.Title"],
        state: fields["System.State"],
        assignedTo: fields["System.AssignedTo"]
          ? fields["System.AssignedTo"].displayName
          : "unassigned",
        priority: fields["Microsoft.VSTS.Common.Priority"]
      };
      break;

    default:
      baseEvent.category = "other";
      baseEvent.details = { raw: resource };
  }

  return baseEvent;
}

// ---- Analytics Sender with Retry ----
function sendToAnalytics(event, attempt, callback) {
  if (!attempt) attempt = 1;
  var maxAttempts = 4;

  var body = JSON.stringify(event);
  var url = new URL(ANALYTICS_URL);

  var options = {
    hostname: url.hostname,
    port: url.port || 443,
    path: url.pathname,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": ANALYTICS_KEY,
      "X-Event-Id": event.eventId,
      "Content-Length": Buffer.byteLength(body)
    }
  };

  var req = https.request(options, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      if (res.statusCode >= 200 && res.statusCode < 300) {
        callback(null);
      } else if (res.statusCode >= 500 && attempt < maxAttempts) {
        var delay = Math.pow(2, attempt) * 1000 + Math.floor(Math.random() * 500);
        console.warn(
          "Analytics returned %d, retry %d/%d in %dms",
          res.statusCode, attempt, maxAttempts, delay
        );
        setTimeout(function() {
          sendToAnalytics(event, attempt + 1, callback);
        }, delay);
      } else {
        callback(new Error("Analytics HTTP " + res.statusCode + ": " + data));
      }
    });
  });

  req.on("error", function(err) {
    if (attempt < maxAttempts) {
      var delay = Math.pow(2, attempt) * 1000;
      setTimeout(function() {
        sendToAnalytics(event, attempt + 1, callback);
      }, delay);
    } else {
      callback(err);
    }
  });

  req.write(body);
  req.end();
}

// ---- Dead Letter ----
function deadLetter(event, error) {
  var dlqDir = path.join(__dirname, "dead-letters");
  try {
    if (!fs.existsSync(dlqDir)) fs.mkdirSync(dlqDir, { recursive: true });
  } catch (e) { /* ignore */ }

  var filename = (event.eventId || Date.now()) + ".json";
  var envelope = {
    event: event,
    error: { message: error.message, stack: error.stack },
    deadLetteredAt: new Date().toISOString()
  };

  try {
    fs.writeFileSync(path.join(dlqDir, filename), JSON.stringify(envelope, null, 2));
  } catch (writeErr) {
    console.error("DLQ write failed:", writeErr.message);
    console.error("Lost event:", JSON.stringify(event).substring(0, 500));
  }
}

// ---- Routes ----

// Authentication middleware (if HOOK_SECRET is unset, the check is skipped;
// acceptable for local development only)
app.use("/hooks", function(req, res, next) {
  var secret = req.headers["x-hook-secret"];
  if (!HOOK_SECRET || secret === HOOK_SECRET) {
    return next();
  }
  res.status(401).json({ error: "Unauthorized" });
});

// Main webhook endpoint
app.post("/hooks/azure-devops", function(req, res) {
  metrics.received++;
  var rawEvent = req.body;

  // Respond immediately
  res.status(200).json({ accepted: true, eventId: rawEvent.id });

  // Deduplicate
  if (isDuplicate(rawEvent.id)) {
    metrics.duplicates++;
    return;
  }

  // Transform
  var analyticsEvent;
  try {
    analyticsEvent = transformEvent(rawEvent);
  } catch (err) {
    console.error("Transform error:", err.message);
    metrics.failed++;
    return;
  }

  // Send
  sendToAnalytics(analyticsEvent, 1, function(err) {
    if (err) {
      console.error("Analytics delivery failed:", err.message);
      metrics.failed++;
      deadLetter(analyticsEvent, err);
    } else {
      metrics.processed++;
    }
  });
});

// Health and metrics endpoints
app.get("/health", function(req, res) {
  res.json({
    status: "ok",
    uptime: Math.floor((Date.now() - metrics.startTime) / 1000),
    metrics: {
      received: metrics.received,
      processed: metrics.processed,
      failed: metrics.failed,
      duplicates: metrics.duplicates,
      successRate: metrics.received > 0
        ? ((metrics.processed / metrics.received) * 100).toFixed(1) + "%"
        : "N/A"
    }
  });
});

// Dead letter inspection
app.get("/admin/dead-letters", function(req, res) {
  var dlqDir = path.join(__dirname, "dead-letters");
  try {
    var files = fs.readdirSync(dlqDir);
    var letters = files.slice(-50).map(function(f) {
      var content = JSON.parse(fs.readFileSync(path.join(dlqDir, f), "utf8"));
      return { file: f, eventType: content.event.eventType, error: content.error.message };
    });
    res.json({ count: files.length, recent: letters });
  } catch (err) {
    res.json({ count: 0, recent: [] });
  }
});

app.listen(PORT, function() {
  console.log("Analytics hook service started on port", PORT);
  console.log("Webhook endpoint: POST /hooks/azure-devops");
  console.log("Health check: GET /health");
});

Setting Up the Subscription

After deploying this service, create the subscription:

// setup-subscription.js
var https = require("https");

var ORG_URL = "https://dev.azure.com/my-org";
var PAT = process.env.AZURE_DEVOPS_PAT;
var PROJECT_ID = process.env.AZURE_DEVOPS_PROJECT_ID;
var CONSUMER_URL = process.env.ANALYTICS_HOOK_URL;

var eventTypes = [
  "build.complete",
  "git.push",
  "git.pullrequest.created",
  "git.pullrequest.merged",
  "workitem.created",
  "workitem.updated"
];

function createSub(eventType, callback) {
  var body = JSON.stringify({
    publisherId: "tfs",
    eventType: eventType,
    consumerId: "webHooks",
    consumerActionId: "httpRequest",
    publisherInputs: { projectId: PROJECT_ID },
    consumerInputs: {
      url: CONSUMER_URL + "/hooks/azure-devops",
      httpHeaders: "X-Hook-Secret:" + process.env.HOOK_SECRET,
      resourceDetailsToSend: "all",
      messagesToSend: "all",
      detailedMessagesToSend: "all"
    }
  });

  var auth = Buffer.from(":" + PAT).toString("base64");
  var url = new URL(ORG_URL + "/_apis/hooks/subscriptions?api-version=7.1");

  var options = {
    hostname: url.hostname,
    path: url.pathname + url.search,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Basic " + auth,
      "Content-Length": Buffer.byteLength(body)
    }
  };

  var req = https.request(options, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      if (res.statusCode >= 200 && res.statusCode < 300) {
        var result = JSON.parse(data);
        console.log("[OK] %s -> %s", eventType, result.id);
        callback(null);
      } else {
        console.error("[FAIL] %s -> HTTP %d", eventType, res.statusCode);
        callback(new Error(data));
      }
    });
  });

  req.on("error", callback);
  req.write(body);
  req.end();
}

// Create subscriptions sequentially
var index = 0;
function next() {
  if (index >= eventTypes.length) {
    console.log("\nAll subscriptions created.");
    return;
  }
  createSub(eventTypes[index], function(err) {
    index++;
    if (err) console.warn("  Continuing despite error...");
    setTimeout(next, 500); // Rate limit courtesy
  });
}

next();

Common Issues and Troubleshooting

1. Subscriptions auto-disable after repeated failures. Azure DevOps disables subscriptions after consecutive delivery failures. Check your consumer's availability and response times. The endpoint must respond within 25 seconds. If your processing takes longer, accept the webhook immediately with a 200 response and process asynchronously. Re-enable disabled subscriptions via the REST API or the web UI after fixing the root cause.

2. Events arrive with stale or incomplete resource data. By default, service hooks send minimal resource details. Set resourceDetailsToSend, messagesToSend, and detailedMessagesToSend to "all" in your consumer inputs. Without these flags, you get resource URLs instead of full resource objects, forcing extra API calls to hydrate the data.

3. Duplicate events trigger duplicate processing. Azure DevOps does not guarantee exactly-once delivery. Network timeouts, retries, and subscription configuration changes can all cause duplicates. Always implement idempotent processing using event IDs as deduplication keys. The id field in the event payload is your primary deduplication key.

4. Custom consumer extension inputs are not validated at subscription creation time. If your extension defines input fields with validation rules, those rules are enforced by the UI but not by the REST API. Programmatically created subscriptions can contain invalid inputs that only fail at delivery time. Validate inputs in your consumer endpoint, not just in the extension manifest.

5. Service hooks do not fire for events in deleted or archived projects. If you reorganize projects and some become archived, subscriptions bound to those projects silently stop firing. They do not show up as failures because no events are generated. Audit your subscriptions periodically against active projects.

6. Webhook payloads exceed expected size for work item events. Work item events with rich text fields (HTML descriptions, acceptance criteria) can produce payloads exceeding 1MB. Configure your Express body parser with an appropriate limit and handle PayloadTooLargeError gracefully rather than crashing the process.
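
For the oversized-payload case in item 6, a minimal Express error handler (registered after your routes) can turn the body parser's entity.too.large error into a clean 413 response instead of an unhandled exception:

// Sketch: handle payloads that exceed the express.json({ limit: ... }) setting.
// body-parser tags the error with type "entity.too.large" when the limit is hit.
app.use(function(err, req, res, next) {
  if (err && err.type === "entity.too.large") {
    console.warn("Oversized payload rejected from", req.ip);
    return res.status(413).json({ error: "Payload too large" });
  }
  next(err);
});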


Best Practices

  • Respond to webhooks immediately. Accept the event with a 200 response before processing it. Azure DevOps has a 25-second timeout, and any processing that involves external calls risks exceeding it. Queue the event internally and process it asynchronously.

  • Implement idempotent processing from day one. Do not wait until you observe duplicates in production. Use the event id field as a deduplication key with a TTL-based cache (Redis for multi-instance, in-memory Map for single-instance).

  • Filter subscriptions aggressively. Every subscription that does not filter by project, repository, or build definition generates unnecessary evaluation overhead. A subscription that fires on every git.push across an organization with fifty repositories creates fifty times more load than one filtered to a single repository.

  • Monitor subscription health programmatically. Build a scheduled job that queries the subscriptions API and alerts on disabled or failing subscriptions. Stale subscriptions are invisible failures; you will not notice them until someone asks why events stopped arriving.

  • Use separate subscriptions for separate concerns. Do not create one catch-all subscription that routes every event type to the same consumer endpoint. Use one subscription per event type. This gives you independent failure isolation, cleaner filtering, and the ability to disable specific event types without affecting others.

  • Version your event processing logic. When the Azure DevOps event schema changes (and it does, especially for newer event types like pipeline run events), your transform functions must handle both old and new formats. Use defensive property access and test against captured payloads from production.

  • Secure your webhook endpoints. Use the X-Hook-Secret header or HMAC signature verification. Service hook URLs are stored in Azure DevOps and visible to project administrators. Without authentication, anyone who discovers the URL can send fabricated events.

  • Implement dead letter queues for failed events. When retries are exhausted, write the failed event to a dead letter store (file system, database, or queue). This preserves the event for manual inspection and reprocessing. Silently dropping events is a data loss bug.

  • Test with captured production payloads. Azure DevOps event payloads have subtle differences between event types and API versions. Synthetic test payloads miss edge cases. Capture real payloads early, sanitize sensitive data, and use them as your test fixtures.

