Datadog Integration with Azure DevOps

Integrate Datadog with Azure DevOps for CI Visibility, deployment tracking, APM correlation, and pipeline observability

Datadog and Azure DevOps together give you full observability from the moment a developer pushes a commit to the moment that code is serving traffic in production. This integration closes the gap between your CI/CD pipeline and your monitoring stack, letting you correlate deployments with performance regressions, track pipeline reliability over time, and trigger incident workflows when things go wrong. In this article, we will wire up Datadog CI Visibility, deployment event tracking, APM correlation, and custom metrics, then build a Node.js service that ties it all together.

Prerequisites

Before you start, make sure you have the following in place:

  • An Azure DevOps organization with at least one project and pipeline
  • A Datadog account (Team plan or higher for CI Visibility features)
  • A Datadog API key and Application key (each has its own page under Organization Settings)
  • Node.js v16 or later installed locally
  • The dd-trace npm package for APM instrumentation
  • Basic familiarity with YAML-based Azure Pipelines
  • The Datadog Azure DevOps extension installed from the Visual Studio Marketplace

Datadog CI Visibility for Azure Pipelines

Datadog CI Visibility gives you trace-level insight into your pipeline runs. Every stage, job, and step becomes a span in a trace, letting you identify bottlenecks the same way you would in a distributed application.

Installing the Extension

Install the Datadog CI Visibility extension from the Visual Studio Marketplace into your Azure DevOps organization. Once installed, you need to configure a service connection.

Navigate to Project Settings > Service connections > New service connection and select Datadog CI Visibility. Enter your Datadog API key and select your Datadog site (e.g., datadoghq.com or datadoghq.eu).

Enabling CI Visibility in Your Pipeline

Add the Datadog tasks to your pipeline YAML:

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: DatadogCIVisibilityStart@1
    displayName: 'Start Datadog CI Visibility'
    inputs:
      datadogServiceConnection: 'datadog-ci-visibility'
      enableCustomTags: true
      tags: |
        team:platform
        service:api-gateway

  - script: |
      npm ci
      npm test
    displayName: 'Install and Test'

  - script: |
      npm run build
    displayName: 'Build Application'

  - task: DatadogCIVisibilityEnd@1
    displayName: 'End Datadog CI Visibility'

The Start task instruments the pipeline run and the End task finalizes the trace and ships it to Datadog. Every step between them becomes a span in the pipeline trace.

Custom Tags and Metadata

Custom tags are how you slice and dice pipeline data in Datadog. Tag by team, service, environment, or anything else that matters to your organization:

- task: DatadogCIVisibilityStart@1
  inputs:
    datadogServiceConnection: 'datadog-ci-visibility'
    tags: |
      team:$(TEAM_NAME)
      service:$(Build.Repository.Name)
      branch:$(Build.SourceBranch)
      commit_sha:$(Build.SourceVersion)

These tags show up in the CI Visibility explorer and can be used in monitors, dashboards, and alerting rules.
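Tagged runs can also be pulled programmatically. The sketch below builds a request body for Datadog's CI Visibility pipeline-events search endpoint (POST /api/v2/ci/pipelines/events/search, which requires both an API and an Application key); the tag values are illustrative:

```javascript
// Build a search payload for the CI Visibility pipeline-events search
// endpoint. The query string uses the same tags set in the pipeline.
function buildPipelineSearch(query, limit) {
  return {
    filter: {
      query: query,        // e.g. 'team:platform ci_level:pipeline'
      from: 'now-1d',      // look back one day
      to: 'now'
    },
    page: { limit: limit || 25 },
    sort: 'timestamp'
  };
}

var payload = buildPipelineSearch('team:platform ci_level:pipeline', 10);
console.log(JSON.stringify(payload, null, 2));
```

POST this body with your DD-API-KEY and DD-APPLICATION-KEY headers to list matching pipeline runs.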

Sending Deployment Events to Datadog

Deployment events let you overlay release markers on your dashboards. When you see a latency spike at 2:14 PM and a deployment event at 2:12 PM, the root cause is obvious.

Using the Datadog API from a Pipeline Step

- script: |
    curl -X POST "https://api.datadoghq.com/api/v1/events" \
      -H "Content-Type: application/json" \
      -H "DD-API-KEY: ${DD_API_KEY}" \
      -d '{
        "title": "Deployment: $(Build.Repository.Name)",
        "text": "Build $(Build.BuildNumber) deployed to $(ENVIRONMENT) from branch $(Build.SourceBranch)",
        "tags": [
          "environment:$(ENVIRONMENT)",
          "service:$(Build.Repository.Name)",
          "version:$(Build.BuildNumber)",
          "commit:$(Build.SourceVersion)"
        ],
        "alert_type": "info",
        "source_type_name": "azure_devops"
      }'
  displayName: 'Send Deployment Event to Datadog'
  env:
    DD_API_KEY: $(DD_API_KEY)

Using the Deployment Tracking API

Datadog's deployment tracking endpoint (part of the DORA Metrics API) is more structured than raw events. It ties deployments to specific services, versions, and commits; note that timestamps are expected in nanoseconds since the epoch:

- script: |
    curl -X POST "https://api.datadoghq.com/api/v2/dora/deployment" \
      -H "Content-Type: application/json" \
      -H "DD-API-KEY: $(DD_API_KEY)" \
      -d '{
        "data": {
          "attributes": {
            "service": "$(Build.Repository.Name)",
            "version": "$(Build.BuildNumber)",
            "env": "$(ENVIRONMENT)",
            "started_at": '$(date +%s%N)',
            "finished_at": '$(date +%s%N)',
            "git": {
              "commit_sha": "$(Build.SourceVersion)",
              "repository_url": "$(Build.Repository.Uri)"
            }
          }
        }
      }'
  displayName: 'Track Deployment in Datadog'

Pipeline Trace Collection

Beyond CI Visibility, you can send pipeline traces as APM traces, which lets you use all of Datadog's trace analytics features against your CI data.

Configuring Trace Collection

Set the following environment variables in your pipeline to send traces via the Datadog Agent or directly to the intake:

variables:
  DD_CIVISIBILITY_AGENTLESS_ENABLED: 'true'
  DD_API_KEY: $(DD_API_KEY)
  DD_SITE: 'datadoghq.com'
  DD_ENV: 'ci'
  DD_SERVICE: '$(Build.Repository.Name)-pipeline'

steps:
  - script: |
      npm install -g @datadog/datadog-ci
      datadog-ci tag --level pipeline \
        --tags "git.branch:$(Build.SourceBranch)" \
        --tags "git.commit.sha:$(Build.SourceVersion)" \
        --tags "ci.pipeline.name:$(Build.DefinitionName)"
    displayName: 'Tag Pipeline Trace'

The datadog-ci CLI tool gives you fine-grained control over what metadata is attached to each pipeline trace.

Linking Commits to Deployments in Datadog

Connecting git commits to deployment events lets you click from a Datadog alert straight to the commit that caused the issue. This is one of the most underrated features in the integration.

Source Code Integration Setup

First, connect your Azure DevOps repository to Datadog under Integrations > Azure DevOps. Then configure your pipeline to send commit metadata:

- script: |
    COMMIT_MESSAGE=$(git log -1 --pretty=format:'%s')
    AUTHOR_NAME=$(git log -1 --pretty=format:'%an')
    AUTHOR_EMAIL=$(git log -1 --pretty=format:'%ae')

    curl -X POST "https://api.datadoghq.com/api/v1/events" \
      -H "Content-Type: application/json" \
      -H "DD-API-KEY: $(DD_API_KEY)" \
      -d "{
        \"title\": \"Commit deployed: ${COMMIT_MESSAGE}\",
        \"text\": \"Author: ${AUTHOR_NAME} (${AUTHOR_EMAIL})\nSHA: $(Build.SourceVersion)\nBranch: $(Build.SourceBranch)\",
        \"tags\": [
          \"git.commit.sha:$(Build.SourceVersion)\",
          \"git.repository_url:$(Build.Repository.Uri)\",
          \"service:$(Build.Repository.Name)\"
        ],
        \"source_type_name\": \"azure_devops\"
      }"
  displayName: 'Link Commit to Deployment'

Monitoring Node.js Applications with Datadog APM

Once your pipeline is instrumented, the next step is connecting your application's APM data so you can correlate deployments with runtime performance.

Instrumenting a Node.js Service

Install the tracer and initialize it before anything else in your application:

npm install dd-trace --save

Create a tracer initialization file:

// tracer.js
var tracer = require('dd-trace');

tracer.init({
  service: 'api-gateway',
  env: process.env.NODE_ENV || 'development',
  version: process.env.APP_VERSION || '1.0.0',
  logInjection: true,
  runtimeMetrics: true,
  profiling: true,
  tags: {
    'team': 'platform',
    'pipeline.build_number': process.env.BUILD_NUMBER || 'local'
  }
});

module.exports = tracer;

Require the tracer at the very top of your entry point:

// app.js
require('./tracer');
var express = require('express');
var app = express();

app.get('/health', function(req, res) {
  res.json({ status: 'ok', version: process.env.APP_VERSION });
});

app.get('/api/users', function(req, res) {
  var span = require('dd-trace').scope().active();
  if (span) {
    span.setTag('user.query_type', 'list');
  }
  // ... handler logic
});

var port = process.env.PORT || 3000;
app.listen(port, function() {
  console.log('Server running on port ' + port);
});

Connecting APM to Deployments

The key is the version tag. When your pipeline sets APP_VERSION to the build number, Datadog automatically correlates APM traces with deployment events:

# Docker@2 ignores the "arguments" input with buildAndPush,
# so run build and push as separate commands
- task: Docker@2
  inputs:
    command: 'build'
    arguments: '--build-arg APP_VERSION=$(Build.BuildNumber)'

- task: Docker@2
  inputs:
    command: 'push'

In your Dockerfile:

ARG APP_VERSION=unknown
ENV APP_VERSION=$APP_VERSION
ENV DD_VERSION=$APP_VERSION

Creating Datadog Monitors from Pipeline Data

Monitors turn your CI data into actionable alerts. Here are two monitors every team should have.

Pipeline Failure Rate Monitor

// create-monitors.js
var https = require('https');

var monitors = [
  {
    name: 'Pipeline Failure Rate > 20%',
    type: 'ci-pipelines alert',
    query: 'ci_pipeline_failure_rate("pipeline_name:api-gateway-build").rollup("avg", 3600) > 0.2',
    message: 'Pipeline failure rate for api-gateway is above 20%. Check recent commits. @slack-platform-alerts',
    tags: ['team:platform', 'service:api-gateway'],
    options: {
      thresholds: { critical: 0.2, warning: 0.1 },
      notify_no_data: false,
      renotify_interval: 60
    }
  },
  {
    name: 'Pipeline Duration Regression',
    type: 'metric alert',
    query: 'avg(last_1h):avg:ci.pipeline.duration{pipeline_name:api-gateway-build} > 600',
    message: 'Pipeline duration for api-gateway exceeded 10 minutes. Investigate slow steps. @pagerduty-platform',
    tags: ['team:platform', 'service:api-gateway'],
    options: {
      thresholds: { critical: 600, warning: 480 },
      notify_no_data: false
    }
  }
];

function createMonitor(monitor, callback) {
  var data = JSON.stringify(monitor);
  var options = {
    hostname: 'api.datadoghq.com',
    path: '/api/v1/monitor',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DD_API_KEY,
      'DD-APPLICATION-KEY': process.env.DD_APP_KEY,
      'Content-Length': Buffer.byteLength(data)
    }
  };

  var req = https.request(options, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() {
      callback(null, JSON.parse(body));
    });
  });

  req.on('error', function(err) { callback(err); });
  req.write(data);
  req.end();
}

monitors.forEach(function(monitor) {
  createMonitor(monitor, function(err, result) {
    if (err) {
      console.error('Failed to create monitor:', err.message);
      return;
    }
    console.log('Created monitor:', result.name, '(ID:', result.id + ')');
  });
});

Incident Management Integration

When a monitor fires, you want it to create an incident automatically. Datadog's incident management can be triggered from pipeline failures.

Auto-Creating Incidents from Pipeline Failures

// incident-handler.js
var https = require('https');

function createIncident(pipelineData, callback) {
  var incident = {
    data: {
      type: 'incidents',
      attributes: {
        title: 'Pipeline Failure: ' + pipelineData.pipelineName + ' #' + pipelineData.buildNumber,
        customer_impacted: true,
        customer_impact_scope: 'Deployment blocked for ' + pipelineData.service,
        customer_impact_start: new Date().toISOString(),
        fields: {
          severity: { type: 'dropdown', value: 'SEV-3' },
          detection_method: { type: 'dropdown', value: 'monitor' },
          root_cause: { type: 'textbox', value: 'Pipeline failure on branch ' + pipelineData.branch },
          services: { type: 'autocomplete', value: pipelineData.service }
        }
      },
      relationships: {
        commander_user: {
          data: {
            type: 'users',
            id: pipelineData.oncallUserId
          }
        }
      }
    }
  };

  var data = JSON.stringify(incident);
  var options = {
    hostname: 'api.datadoghq.com',
    path: '/api/v2/incidents',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DD_API_KEY,
      'DD-APPLICATION-KEY': process.env.DD_APP_KEY,
      'Content-Length': Buffer.byteLength(data)
    }
  };

  var req = https.request(options, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() {
      var result = JSON.parse(body);
      console.log('Incident created:', result.data.id);
      callback(null, result);
    });
  });

  req.on('error', function(err) { callback(err); });
  req.write(data);
  req.end();
}

module.exports = { createIncident: createIncident };

Custom Metrics from Pipelines

Custom metrics let you track anything that matters to your team: test counts, build artifact sizes, dependency vulnerability counts, coverage percentages.

Sending Custom Metrics from Pipeline Steps

- script: |
    TEST_COUNT=$(cat test-results.json | jq '.numTotalTests')
    TEST_PASS=$(cat test-results.json | jq '.numPassedTests')
    TEST_FAIL=$(cat test-results.json | jq '.numFailedTests')
    COVERAGE=$(cat coverage/coverage-summary.json | jq '.total.lines.pct')
    BUNDLE_SIZE=$(stat -c%s dist/bundle.js)

    # Send metrics via the Datadog metrics API (v2 series intake)
    curl -X POST "https://api.datadoghq.com/api/v2/series" \
      -H "Content-Type: application/json" \
      -H "DD-API-KEY: $(DD_API_KEY)" \
      -d "{
        \"series\": [
          {
            \"metric\": \"ci.tests.total\",
            \"type\": 0,
            \"points\": [{\"timestamp\": $(date +%s), \"value\": ${TEST_COUNT}}],
            \"tags\": [\"service:$(Build.Repository.Name)\", \"branch:$(Build.SourceBranch)\"]
          },
          {
            \"metric\": \"ci.tests.failed\",
            \"type\": 0,
            \"points\": [{\"timestamp\": $(date +%s), \"value\": ${TEST_FAIL}}],
            \"tags\": [\"service:$(Build.Repository.Name)\", \"branch:$(Build.SourceBranch)\"]
          },
          {
            \"metric\": \"ci.coverage.percentage\",
            \"type\": 0,
            \"points\": [{\"timestamp\": $(date +%s), \"value\": ${COVERAGE}}],
            \"tags\": [\"service:$(Build.Repository.Name)\", \"branch:$(Build.SourceBranch)\"]
          },
          {
            \"metric\": \"ci.bundle.size_bytes\",
            \"type\": 0,
            \"points\": [{\"timestamp\": $(date +%s), \"value\": ${BUNDLE_SIZE}}],
            \"tags\": [\"service:$(Build.Repository.Name)\", \"branch:$(Build.SourceBranch)\"]
          }
        ]
      }"
  displayName: 'Send Custom Metrics to Datadog'
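If you prefer assembling the same v2 series payload in Node (for example, in a small publishing script) rather than inline shell, a minimal helper might look like this; the metric name and tags are the illustrative ones used above:

```javascript
// Build a Datadog v2 series payload for one metric point.
// type 0 = unspecified (1 = count, 2 = rate, 3 = gauge).
function buildSeries(metric, value, tags) {
  return {
    series: [{
      metric: metric,
      type: 0,
      points: [{ timestamp: Math.floor(Date.now() / 1000), value: value }],
      tags: tags || []
    }]
  };
}

var payload = buildSeries('ci.coverage.percentage', 87.5, ['service:api-gateway']);
console.log(JSON.stringify(payload));
```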

Datadog Dashboards for CI/CD Metrics

A good CI/CD dashboard answers three questions: Are builds passing? Are they fast? Is deployment frequency healthy?

Creating a Dashboard via the API

// create-dashboard.js
var https = require('https');

var dashboard = {
  title: 'Azure DevOps CI/CD Overview',
  description: 'Pipeline health, deployment frequency, and test metrics',
  layout_type: 'ordered',
  widgets: [
    {
      definition: {
        title: 'Pipeline Success Rate (24h)',
        type: 'query_value',
        requests: [{
          queries: [{
            data_source: 'ci_pipelines',
            name: 'success_rate',
            query: 'ci_pipeline_success_rate("*").rollup("avg", 86400)'
          }],
          formulas: [{ formula: 'success_rate * 100' }]
        }],
        precision: 1,
        custom_unit: '%'
      }
    },
    {
      definition: {
        title: 'Pipeline Duration Over Time',
        type: 'timeseries',
        requests: [{
          queries: [{
            data_source: 'ci_pipelines',
            name: 'duration',
            query: 'avg:ci.pipeline.duration{*} by {pipeline_name}'
          }],
          display_type: 'line'
        }]
      }
    },
    {
      definition: {
        title: 'Deployments Per Day',
        type: 'timeseries',
        requests: [{
          queries: [{
            data_source: 'events',
            name: 'deployments',
            query: 'events("source:azure_devops deployment").rollup("count").by("environment")'
          }],
          display_type: 'bars'
        }]
      }
    },
    {
      definition: {
        title: 'Test Failure Trend',
        type: 'timeseries',
        requests: [{
          queries: [{
            data_source: 'metrics',
            name: 'test_failures',
            query: 'sum:ci.tests.failed{*} by {service}.rollup(sum, 3600)'
          }],
          display_type: 'line'
        }]
      }
    },
    {
      definition: {
        title: 'Pipeline Runs by Status',
        type: 'toplist',
        requests: [{
          queries: [{
            data_source: 'ci_pipelines',
            name: 'runs',
            query: 'ci_pipeline_runs("*").by("status").rollup("count", 86400)'
          }]
        }]
      }
    },
    {
      definition: {
        title: 'Code Coverage Trend',
        type: 'timeseries',
        requests: [{
          queries: [{
            data_source: 'metrics',
            name: 'coverage',
            query: 'avg:ci.coverage.percentage{*} by {service}'
          }],
          display_type: 'line',
          style: { line_type: 'solid', line_width: 'normal' }
        }]
      }
    }
  ]
};

function createDashboard(callback) {
  var data = JSON.stringify(dashboard);
  var options = {
    hostname: 'api.datadoghq.com',
    path: '/api/v1/dashboard',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DD_API_KEY,
      'DD-APPLICATION-KEY': process.env.DD_APP_KEY,
      'Content-Length': Buffer.byteLength(data)
    }
  };

  var req = https.request(options, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() {
      var result = JSON.parse(body);
      console.log('Dashboard created:', result.url);
      callback(null, result);
    });
  });

  req.on('error', function(err) { callback(err); });
  req.write(data);
  req.end();
}

createDashboard(function(err, result) {
  if (err) {
    console.error('Failed to create dashboard:', err.message);
    process.exit(1);
  }
  console.log('Dashboard ID:', result.id);
});

Synthetic Test Integration

Synthetic tests verify your endpoints are healthy after every deployment. Trigger them from your pipeline to catch regressions before users do.

Triggering Synthetic Tests Post-Deployment

- script: |
    npm install -g @datadog/datadog-ci

    datadog-ci synthetics run-tests \
      --apiKey $(DD_API_KEY) \
      --appKey $(DD_APP_KEY) \
      --public-id abc-123-xyz \
      --public-id def-456-uvw \
      --tunnel \
      --failOnCriticalErrors \
      --variables "BASE_URL=https://staging.example.com,API_VERSION=$(Build.BuildNumber)"
  displayName: 'Run Datadog Synthetic Tests'
  condition: succeeded()

Configuring Synthetic Tests via Code

// synthetic-config.js
var https = require('https');

var syntheticTest = {
  name: 'API Health Check - Post Deployment',
  type: 'api',
  subtype: 'http',
  config: {
    request: {
      method: 'GET',
      url: 'https://api.example.com/health',
      headers: { 'Accept': 'application/json' },
      timeout: 10
    },
    assertions: [
      { type: 'statusCode', operator: 'is', target: 200 },
      { type: 'responseTime', operator: 'lessThan', target: 2000 },
      { type: 'body', operator: 'contains', target: '"status":"ok"' }
    ]
  },
  locations: ['aws:us-east-1', 'aws:eu-west-1'],
  options: {
    tick_every: 300,
    min_failure_duration: 0,
    min_location_failed: 1,
    retry: { count: 1, interval: 300 }
  },
  tags: ['service:api-gateway', 'team:platform', 'env:production'],
  message: 'API health check failed post-deployment. @slack-platform-alerts'
};

function createSyntheticTest(test, callback) {
  var data = JSON.stringify(test);
  var options = {
    hostname: 'api.datadoghq.com',
    path: '/api/v1/synthetics/tests/api',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': process.env.DD_API_KEY,
      'DD-APPLICATION-KEY': process.env.DD_APP_KEY,
      'Content-Length': Buffer.byteLength(data)
    }
  };

  var req = https.request(options, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() {
      callback(null, JSON.parse(body));
    });
  });

  req.on('error', function(err) { callback(err); });
  req.write(data);
  req.end();
}

createSyntheticTest(syntheticTest, function(err, result) {
  if (err) {
    console.error('Failed to create synthetic test:', err.message);
    return;
  }
  console.log('Synthetic test created:', result.public_id);
});

Error Tracking and Source Mapping

Source maps let Datadog show you the original source code in error stack traces instead of minified garbage.

Uploading Source Maps from Your Pipeline

- script: |
    npm install -g @datadog/datadog-ci

    datadog-ci sourcemaps upload ./dist \
      --service=$(Build.Repository.Name) \
      --release-version=$(Build.BuildNumber) \
      --minified-path-prefix=https://cdn.example.com/assets/ \
      --repository-url=$(Build.Repository.Uri)
  displayName: 'Upload Source Maps to Datadog'
  env:
    DATADOG_API_KEY: $(DD_API_KEY)

Configuring Error Tracking in Node.js

// error-tracking.js
var tracer = require('dd-trace');

tracer.init({
  service: 'api-gateway',
  version: process.env.APP_VERSION,
  env: process.env.NODE_ENV
});

// Custom error handler middleware
function datadogErrorHandler(err, req, res, next) {
  var span = tracer.scope().active();
  if (span) {
    span.setTag('error', true);
    span.setTag('error.message', err.message);
    span.setTag('error.stack', err.stack);
    span.setTag('error.type', err.constructor.name);
    span.setTag('http.route', req.route ? req.route.path : 'unknown');
    span.setTag('usr.id', req.user ? req.user.id : 'anonymous');
  }

  console.error('Unhandled error:', err.message);
  res.status(err.statusCode || 500).json({
    error: err.message,
    requestId: span ? span.context().toTraceId() : 'unknown'
  });
}

module.exports = { datadogErrorHandler: datadogErrorHandler };
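Register the middleware after all your routes with app.use(datadogErrorHandler). The pattern is also easy to unit test if you inject the span lookup instead of requiring a live tracer; here is a dependency-free sketch with stub objects standing in for Express and dd-trace:

```javascript
// Same middleware pattern with the active-span lookup injected, so this
// example runs without dd-trace. In the real app you would pass
// function() { return tracer.scope().active(); } as getActiveSpan.
function makeErrorHandler(getActiveSpan) {
  return function(err, req, res, next) {
    var span = getActiveSpan();
    if (span) {
      span.setTag('error', true);
      span.setTag('error.message', err.message);
    }
    res.status(err.statusCode || 500).json({ error: err.message });
  };
}

// Exercise the error path with stub req/res objects
var tags = {};
var stubSpan = { setTag: function(key, value) { tags[key] = value; } };
var handler = makeErrorHandler(function() { return stubSpan; });

var sentStatus, sentBody;
var res = {
  status: function(code) { sentStatus = code; return this; },
  json: function(body) { sentBody = body; }
};

handler(new Error('boom'), {}, res, function() {});
console.log(sentStatus, sentBody.error, tags['error.message']);
// → 500 boom boom
```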

Building a Datadog Integration Service with Node.js

Now let us bring everything together into a standalone Node.js service that acts as a webhook receiver for Azure DevOps and forwards events to Datadog. This service handles pipeline completion events, formats them into Datadog-compatible payloads, and manages the lifecycle of deployment tracking.

// datadog-sync-service.js
var express = require('express');
var https = require('https');
var crypto = require('crypto');

var app = express();

// Capture the raw request body so webhook signatures can be verified
// against the exact bytes Azure DevOps sent, not a re-serialized copy
app.use(express.json({
  verify: function(req, res, buf) {
    req.rawBody = buf;
  }
}));

var DD_API_KEY = process.env.DD_API_KEY;
var DD_APP_KEY = process.env.DD_APP_KEY;
var DD_SITE = process.env.DD_SITE || 'datadoghq.com';
var WEBHOOK_SECRET = process.env.AZURE_DEVOPS_WEBHOOK_SECRET;

// Verify Azure DevOps webhook signature
function verifyWebhookSignature(req, res, next) {
  if (!WEBHOOK_SECRET) {
    return next();
  }

  var signature = req.headers['x-azure-devops-signature'];

  if (!signature) {
    return res.status(401).json({ error: 'Missing signature' });
  }

  var hmac = crypto.createHmac('sha1', WEBHOOK_SECRET);
  hmac.update(req.rawBody);
  var expected = 'sha1=' + hmac.digest('hex');

  // timingSafeEqual throws on length mismatch, so compare lengths first
  var sigBuf = Buffer.from(signature);
  var expBuf = Buffer.from(expected);
  if (sigBuf.length !== expBuf.length || !crypto.timingSafeEqual(sigBuf, expBuf)) {
    return res.status(401).json({ error: 'Invalid signature' });
  }

  next();
}

// Send data to Datadog API
function sendToDatadog(path, payload, callback) {
  var data = JSON.stringify(payload);
  var options = {
    hostname: 'api.' + DD_SITE,
    path: path,
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'DD-API-KEY': DD_API_KEY,
      'DD-APPLICATION-KEY': DD_APP_KEY,
      'Content-Length': Buffer.byteLength(data)
    }
  };

  var req = https.request(options, function(res) {
    var body = '';
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() {
      var statusCode = res.statusCode;
      if (statusCode >= 200 && statusCode < 300) {
        callback(null, body ? JSON.parse(body) : {});
      } else {
        callback(new Error('Datadog API returned ' + statusCode + ': ' + body));
      }
    });
  });

  req.on('error', function(err) { callback(err); });
  req.write(data);
  req.end();
}

// Send deployment event
function sendDeploymentEvent(buildData, callback) {
  var event = {
    title: 'Deployment: ' + buildData.project + '/' + buildData.definition,
    text: 'Build #' + buildData.buildNumber + ' completed with status: ' + buildData.status +
          '\nBranch: ' + buildData.branch +
          '\nCommit: ' + buildData.commitId +
          '\nRequested by: ' + buildData.requestedBy,
    tags: [
      'service:' + buildData.definition.toLowerCase().replace(/\s+/g, '-'),
      'project:' + buildData.project.toLowerCase().replace(/\s+/g, '-'),
      'environment:' + (buildData.environment || 'unknown'),
      'status:' + buildData.status,
      'branch:' + buildData.branch,
      'commit:' + buildData.commitId
    ],
    alert_type: buildData.status === 'succeeded' ? 'info' : 'error',
    source_type_name: 'azure_devops'
  };

  sendToDatadog('/api/v1/events', event, callback);
}

// Send custom metrics
function sendPipelineMetrics(buildData, callback) {
  var timestamp = Math.floor(Date.now() / 1000);
  var baseTags = [
    'service:' + buildData.definition.toLowerCase().replace(/\s+/g, '-'),
    'project:' + buildData.project,
    'branch:' + buildData.branch
  ];

  var series = {
    series: [
      {
        metric: 'ci.pipeline.duration',
        type: 0,
        points: [{ timestamp: timestamp, value: buildData.duration || 0 }],
        tags: baseTags
      },
      {
        metric: 'ci.pipeline.completed',
        type: 0,
        points: [{ timestamp: timestamp, value: 1 }],
        tags: baseTags.concat(['status:' + buildData.status])
      }
    ]
  };

  if (buildData.status === 'failed') {
    series.series.push({
      metric: 'ci.pipeline.failures',
      type: 0,
      points: [{ timestamp: timestamp, value: 1 }],
      tags: baseTags
    });
  }

  sendToDatadog('/api/v2/series', series, callback);
}

// Handle Azure DevOps build completed webhook
app.post('/webhooks/build-completed', verifyWebhookSignature, function(req, res) {
  var resource = req.body.resource || {};
  var buildData = {
    buildNumber: resource.buildNumber || 'unknown',
    status: (resource.result || 'unknown').toLowerCase(),
    definition: resource.definition ? resource.definition.name : 'unknown',
    project: resource.project ? resource.project.name : 'unknown',
    branch: resource.sourceBranch || 'unknown',
    commitId: resource.sourceVersion || 'unknown',
    requestedBy: resource.requestedFor ? resource.requestedFor.displayName : 'unknown',
    startTime: resource.startTime,
    finishTime: resource.finishTime,
    duration: 0,
    environment: req.query.environment || 'production'
  };

  // Calculate duration in seconds
  if (buildData.startTime && buildData.finishTime) {
    var start = new Date(buildData.startTime).getTime();
    var finish = new Date(buildData.finishTime).getTime();
    buildData.duration = Math.round((finish - start) / 1000);
  }

  console.log('Build completed:', buildData.definition, '#' + buildData.buildNumber,
    '-', buildData.status, '(' + buildData.duration + 's)');

  var completed = 0;
  var errors = [];

  function checkDone() {
    completed++;
    if (completed === 2) {
      if (errors.length > 0) {
        console.error('Errors sending to Datadog:', errors);
        res.status(207).json({ status: 'partial', errors: errors });
      } else {
        res.json({ status: 'ok' });
      }
    }
  }

  sendDeploymentEvent(buildData, function(err) {
    if (err) errors.push('event: ' + err.message);
    checkDone();
  });

  sendPipelineMetrics(buildData, function(err) {
    if (err) errors.push('metrics: ' + err.message);
    checkDone();
  });
});

// Handle release deployment completed webhook
app.post('/webhooks/release-completed', verifyWebhookSignature, function(req, res) {
  var resource = req.body.resource || {};
  var environment = resource.environment || {};

  var releaseData = {
    releaseName: resource.release ? resource.release.name : 'unknown',
    environmentName: environment.name || 'unknown',
    status: (environment.status || 'unknown').toLowerCase(),
    project: resource.project ? resource.project.name : 'unknown',
    definition: resource.release && resource.release.releaseDefinition ? resource.release.releaseDefinition.name : 'unknown'
  };

  console.log('Release deployed:', releaseData.releaseName, 'to', releaseData.environmentName);

  var event = {
    title: 'Release Deployed: ' + releaseData.releaseName,
    text: 'Environment: ' + releaseData.environmentName + '\nStatus: ' + releaseData.status,
    tags: [
      'service:' + releaseData.definition.toLowerCase().replace(/\s+/g, '-'),
      'environment:' + releaseData.environmentName.toLowerCase(),
      'release:' + releaseData.releaseName,
      'status:' + releaseData.status
    ],
    alert_type: releaseData.status === 'succeeded' ? 'success' : 'error',
    source_type_name: 'azure_devops'
  };

  sendToDatadog('/api/v1/events', event, function(err) {
    if (err) {
      console.error('Failed to send release event:', err.message);
      return res.status(500).json({ error: err.message });
    }
    res.json({ status: 'ok' });
  });
});

// Health check
app.get('/health', function(req, res) {
  res.json({
    status: 'ok',
    service: 'datadog-sync-service',
    uptime: process.uptime()
  });
});

var port = process.env.PORT || 3001;
app.listen(port, function() {
  console.log('Datadog sync service running on port ' + port);
});

Complete Working Example

Here is the full Azure Pipeline YAML that ties together CI Visibility, deployment tracking, source map uploads, synthetic tests, and the webhook integration:

trigger:
  - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  DD_SITE: 'datadoghq.com'
  SERVICE_NAME: 'api-gateway'

stages:
  - stage: Build
    displayName: 'Build & Test'
    jobs:
      - job: BuildAndTest
        steps:
          - task: DatadogCIVisibilityStart@1
            displayName: 'Start Datadog CI Visibility'
            inputs:
              datadogServiceConnection: 'datadog-ci-visibility'
              tags: |
                team:platform
                service:$(SERVICE_NAME)

          - task: NodeTool@0
            inputs:
              versionSpec: '18.x'

          - script: npm ci
            displayName: 'Install Dependencies'

          - script: npm test -- --reporter json --outputFile test-results.json
            displayName: 'Run Tests'

          - script: npm run build
            displayName: 'Build Application'

          - script: |
              npm install -g @datadog/datadog-ci
              datadog-ci sourcemaps upload ./dist \
                --service=$(SERVICE_NAME) \
                --release-version=$(Build.BuildNumber) \
                --minified-path-prefix=https://cdn.example.com/assets/
            displayName: 'Upload Source Maps'
            env:
              DATADOG_API_KEY: $(DD_API_KEY)

          - task: DatadogCIVisibilityEnd@1
            displayName: 'End Datadog CI Visibility'

  - stage: Deploy
    displayName: 'Deploy to Production'
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - deployment: DeployProduction
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: |
                    echo "Deploying version $(Build.BuildNumber)..."
                    # Your deployment commands here
                  displayName: 'Deploy Application'

                - script: |
                    curl -s -X POST "https://api.datadoghq.com/api/v1/events" \
                      -H "Content-Type: application/json" \
                      -H "DD-API-KEY: $(DD_API_KEY)" \
                      -d '{
                        "title": "Deployment: $(SERVICE_NAME) v$(Build.BuildNumber)",
                        "text": "Deployed to production from $(Build.SourceBranch)\nCommit: $(Build.SourceVersion)\nBuild: $(Build.BuildNumber)",
                        "tags": [
                          "service:$(SERVICE_NAME)",
                          "environment:production",
                          "version:$(Build.BuildNumber)",
                          "commit:$(Build.SourceVersion)"
                        ],
                        "alert_type": "info",
                        "source_type_name": "azure_devops"
                      }'
                  displayName: 'Send Deployment Event'

  - stage: Verify
    displayName: 'Post-Deploy Verification'
    dependsOn: Deploy
    jobs:
      - job: SyntheticTests
        steps:
          - script: |
              npm install -g @datadog/datadog-ci
              datadog-ci synthetics run-tests \
                --apiKey $(DD_API_KEY) \
                --appKey $(DD_APP_KEY) \
                --search "tag:service:$(SERVICE_NAME) tag:env:production" \
                --failOnCriticalErrors
            displayName: 'Run Synthetic Tests'

          - script: |
              echo "Deployment verified successfully"
            displayName: 'Verification Complete'
            condition: succeeded()

Set up the webhook in Azure DevOps under Project Settings > Service hooks > Web Hooks. Point build completion events at your sync service's /webhooks/build-completed endpoint and release deployment events at /webhooks/release-completed.

Common Issues and Troubleshooting

CI Visibility Traces Not Appearing

This is almost always an API key issue. Verify your service connection is using a valid Datadog API key (not an Application key) — API keys are 32-character hex strings, while application keys are 40 characters. Also confirm the Datadog site is set correctly: datadoghq.com for US1, datadoghq.eu for EU, us5.datadoghq.com for US5. If you are behind a corporate proxy, you may also need to set HTTPS_PROXY in your pipeline variables.
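To rule out a bad key before digging deeper, you can hit Datadog's key-validation endpoint (GET /api/v1/validate) from a short Node script. This is a sketch assuming your key and site live in the DD_API_KEY and DD_SITE environment variables, and it requires Node 18+ for the built-in fetch:

```javascript
// Build the validate-endpoint URL for a given Datadog site.
// The endpoint lives on the api.<site> host for every region.
function validateUrl(site) {
  return 'https://api.' + site + '/api/v1/validate';
}

// Call the validate endpoint and report whether the key is accepted.
async function checkApiKey() {
  var res = await fetch(validateUrl(process.env.DD_SITE || 'datadoghq.com'), {
    headers: { 'DD-API-KEY': process.env.DD_API_KEY }
  });
  var body = await res.json();
  console.log('API key valid:', body.valid === true);
}
```

Run this once with each candidate site value; if the key validates against datadoghq.eu but not datadoghq.com, your service connection is pointed at the wrong site.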

Deployment Events Missing Tags

Tags must follow the key:value format. Datadog lowercases tag text and replaces unsupported characters, including spaces, with underscores, which can silently change values that come from pipeline variables. Validate your tags by checking the event in Datadog's Event Explorer. A common mistake is using $(Build.SourceBranch), which includes the refs/heads/ prefix — strip it with:

- script: |
    CLEAN_BRANCH=$(echo "$(Build.SourceBranch)" | sed 's|refs/heads/||')
    echo "##vso[task.setvariable variable=CLEAN_BRANCH]$CLEAN_BRANCH"
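If you send events from the Node sync service rather than the pipeline, the same cleanup can live in a small helper. This is a sketch that mirrors Datadog's tag normalization rules; the sanitizeTag name is ours, not a Datadog API:

```javascript
// Normalize a tag value the way Datadog will: strip the Azure DevOps
// refs/heads/ prefix, lowercase, and replace unsupported characters
// (anything outside letters, digits, _-:./) with underscores.
function sanitizeTag(value) {
  return String(value)
    .replace(/^refs\/heads\//, '')
    .toLowerCase()
    .replace(/[^a-z0-9_\-:.\/]/g, '_');
}
```

Normalizing on your side means the tag you search for in Event Explorer matches the tag Datadog actually stored.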

Source Map Upload Failures

Source map uploads fail silently if the --minified-path-prefix does not match the URLs your application serves assets from. If your CDN serves from https://cdn.example.com/v2/assets/ but you set the prefix to https://cdn.example.com/assets/, nothing will match. Always check with datadog-ci sourcemaps upload --dry-run first. Also ensure your build produces .map files alongside the minified bundles — some Webpack configs strip them in production by default.
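A quick pre-upload sanity check is to confirm that every minified bundle has a sibling .map file before calling datadog-ci. This is a sketch; missingMaps is a hypothetical helper, not part of datadog-ci:

```javascript
// Given a directory listing, return the .js bundles that lack a
// matching .js.map file alongside them.
function missingMaps(files) {
  return files
    .filter(function(f) { return f.endsWith('.js'); })
    .filter(function(f) { return files.indexOf(f + '.map') === -1; });
}

// Usage before `datadog-ci sourcemaps upload ./dist`:
// var files = require('fs').readdirSync('./dist');
// var missing = missingMaps(files);
// if (missing.length) console.warn('Bundles without source maps:', missing);
```

Failing the build here is cheaper than discovering unreadable stack traces in production a week later.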

Webhook Signature Verification Failing

Azure DevOps webhook signatures use HMAC-SHA1. If your verification is failing on every request, check that you are hashing the raw request body, not a parsed and re-serialized version. Express's express.json() middleware parses the body before your handler sees it, so JSON.stringify(req.body) may produce a different byte sequence than the original payload. To fix this, capture the raw body:

app.use(express.json({
  verify: function(req, res, buf) {
    req.rawBody = buf;
  }
}));

Then hash req.rawBody instead of JSON.stringify(req.body).

APM Version Mismatch

If deployment events do not correlate with APM traces, the version tag must be identical in both places. The pipeline sends version:1.2.3 but your dd-trace init sets version: 'v1.2.3' — the v prefix breaks the correlation. Pick one format and enforce it everywhere.
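One way to enforce a single format is to normalize in code on both sides. A tiny sketch — normalizeVersion is our name, and the tracer.init usage is illustrative:

```javascript
// Strip an optional leading "v" so the pipeline's version tag and
// dd-trace's version setting always agree (v1.2.3 -> 1.2.3).
function normalizeVersion(version) {
  return String(version).replace(/^v/, '');
}

// e.g. in the service's tracer setup:
// tracer.init({ version: normalizeVersion(process.env.APP_VERSION) });
```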

Custom Metrics Not Graphable

If your custom metrics show up in Datadog but cannot be graphed as expected, check the metric type. In the v2 series API, type 1 is count, type 2 is rate, and type 3 is gauge (0 is unspecified). Pipeline duration should be a gauge (type 3), not a count. Using the wrong type causes aggregation issues where values are summed instead of averaged.
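As a concrete check, here is a sketch of a v2 series payload for a pipeline-duration gauge. The field names follow Datadog's v2 metrics API; the metric name ci.pipeline.duration is our example, not a standard:

```javascript
// Build a Datadog v2 series payload for a single gauge point.
function gaugePayload(metric, value, tags) {
  return {
    series: [{
      metric: metric,
      type: 3, // 3 = gauge in the v2 MetricIntakeType enum
      points: [{ timestamp: Math.floor(Date.now() / 1000), value: value }],
      tags: tags
    }]
  };
}

// POST the payload to https://api.datadoghq.com/api/v2/series
// with Content-Type: application/json and a DD-API-KEY header.
```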

Best Practices

  • Use a dedicated Datadog API key per pipeline or team. This lets you rotate keys without breaking every pipeline in your organization and makes it easy to audit which pipelines are sending what.

  • Tag everything with service, team, and environment. These three tags are the foundation of all useful queries. Without them, your CI data becomes an unsearchable mess.

  • Send deployment events before running post-deploy verification, not after. You want the deployment marker on your dashboards to appear at the actual deployment time, not 5 minutes later after synthetic tests finish.

  • Store Datadog API keys in Azure DevOps variable groups, not inline. Mark them as secret variables. Never log them or include them in pipeline output. Use variable groups shared across pipelines to avoid key sprawl.

  • Set up a pipeline failure monitor on day one. The first monitor you create should alert on any pipeline failure rate above your threshold. Everything else is optimization — this one is survival.

  • Use datadog-ci CLI instead of raw curl calls when possible. The CLI handles retries, validation, and error reporting. Raw curl gives you flexibility but hides failures behind 2>&1 > /dev/null patterns that people inevitably add to "fix" noisy pipelines.

  • Separate CI metrics from application metrics using distinct service names. Name pipeline services with a -pipeline suffix (e.g., api-gateway-pipeline) to avoid polluting your APM service catalog with CI data.

  • Run synthetic tests in a separate stage with failure handling. If synthetic tests fail, you want the pipeline to report the failure but not necessarily roll back automatically. Use conditions and gates to control the blast radius.

  • Upload source maps in the build stage, not the deploy stage. Source maps are tied to a build artifact version. Upload them once during build and they will be available for any environment that version is deployed to.

  • Review your CI Visibility data weekly. Pipeline performance degrades slowly — a test that takes 2 seconds longer each week is invisible day-to-day but adds 2 minutes over a quarter. Weekly reviews catch these trends.
