Load Testing with Azure Load Testing Service

Performance test Node.js APIs with Azure Load Testing using JMeter scripts, pipeline integration, and server-side monitoring

Overview

Azure Load Testing is a fully managed service that generates high-scale load against your applications without the overhead of provisioning and maintaining your own load generation infrastructure. It uses Apache JMeter under the hood, which means you get the full power of JMeter scripting while Azure handles the distributed execution, metric collection, and result aggregation. For Node.js teams shipping APIs on Azure, this is the most practical way to catch performance regressions before they hit production.

Prerequisites

Before you start, make sure you have the following in place:

  • An Azure subscription with at least Contributor access to a resource group
  • Azure CLI installed and authenticated (az login)
  • A Node.js Express API deployed to Azure App Service, AKS, or any publicly reachable endpoint
  • Apache JMeter 5.5+ installed locally for script authoring (optional but recommended)
  • An Azure DevOps project with a pipeline configured for your repository
  • Basic familiarity with JMeter test plans (thread groups, samplers, listeners)

Azure Load Testing Service Overview

Azure Load Testing sits in the Azure portal under the "Load Testing" resource type. When you create a load testing resource, you get a workspace where you can define tests, upload JMeter scripts, configure engine instances, and review results. The service spins up distributed JMeter engines across Azure regions, runs your test plan, and collects both client-side metrics (response time, throughput, error rate) and server-side metrics (CPU, memory, connections) from your Azure-hosted application.

The architecture is straightforward. You define a test configuration that references a JMeter .jmx file. Azure provisions one or more test engine instances, each running a JMeter worker. The engines execute your test plan in parallel, and the results are aggregated into a single dashboard. If your application runs on Azure App Service, AKS, or Azure Functions, you can link the app component to get server-side metrics correlated with the load test timeline.

There are two ways to create tests: upload a custom JMeter script (full control) or use URL-based quick tests (zero scripting). For serious Node.js API testing, you will almost always want custom JMeter scripts, but the URL-based approach is useful for quick smoke tests.

Creating Load Tests from JMeter Scripts

The foundation of Azure Load Testing is the JMeter test plan. Here is a basic JMeter script that tests a Node.js API with multiple endpoints:

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="5.0" jmeter="5.5">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Node API Load Test">
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments">
        <collectionProp name="Arguments.arguments">
          <elementProp name="BASE_URL" elementType="Argument">
            <stringProp name="Argument.name">BASE_URL</stringProp>
            <stringProp name="Argument.value">${__ENV(BASE_URL)}</stringProp>
          </elementProp>
        </collectionProp>
      </elementProp>
    </TestPlan>
    <hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="API Users">
        <intProp name="ThreadGroup.num_threads">50</intProp>
        <intProp name="ThreadGroup.ramp_time">60</intProp>
        <longProp name="ThreadGroup.duration">300</longProp>
        <boolProp name="ThreadGroup.scheduler">true</boolProp>
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
      </ThreadGroup>
      <hashTree>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="GET /api/articles">
          <stringProp name="HTTPSampler.domain">${BASE_URL}</stringProp>
          <stringProp name="HTTPSampler.port">443</stringProp>
          <stringProp name="HTTPSampler.protocol">https</stringProp>
          <stringProp name="HTTPSampler.path">/api/articles</stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="GET /api/articles/{id}">
          <stringProp name="HTTPSampler.domain">${BASE_URL}</stringProp>
          <stringProp name="HTTPSampler.port">443</stringProp>
          <stringProp name="HTTPSampler.protocol">https</stringProp>
          <stringProp name="HTTPSampler.path">/api/articles/${articleId}</stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
        </HTTPSamplerProxy>
        <hashTree/>
        <ConstantTimer guiclass="ConstantTimerGui" testclass="ConstantTimer" testname="Think Time">
          <stringProp name="ConstantTimer.delay">1000</stringProp>
        </ConstantTimer>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>

Save this as load-test.jmx. To upload it to Azure Load Testing, you can use the Azure CLI:

az load test create \
  --load-test-resource my-load-test-resource \
  --test-id node-api-test \
  --test-plan load-test.jmx \
  --resource-group my-rg \
  --engine-instances 2 \
  --env BASE_URL=myapp.azurewebsites.net

The --engine-instances flag controls how many parallel JMeter workers run your test. Each engine instance executes the full thread group, so 2 engines with 50 threads each means 100 concurrent virtual users.

URL-Based Quick Tests

For quick validation, Azure Load Testing offers URL-based tests that require no JMeter scripting at all. You specify one or more URLs, the number of virtual users, test duration, and ramp-up period. The service generates a JMeter script behind the scenes.

This is useful when you want to quickly verify that a deployment did not introduce a performance regression. You can create a URL-based test from the portal or the CLI:

az load test create \
  --load-test-resource my-load-test-resource \
  --test-id quick-smoke-test \
  --resource-group my-rg \
  --engine-instances 1 \
  --test-type URL \
  --url "https://myapp.azurewebsites.net/api/health" \
  --virtual-users 20 \
  --ramp-up-time 10 \
  --test-duration 60

URL-based tests are limited compared to custom JMeter scripts. You cannot add request bodies, custom headers, data-driven parameters, or complex user flows. Think of them as a sanity check, not a replacement for proper load test scripts.

Configuring Virtual Users and Ramp-Up

Getting the virtual user count and ramp-up period right is critical. A common mistake is slamming an API with full load immediately, which does not reflect real-world traffic patterns and can mask issues that only appear under gradual load increase.

The formula is straightforward:

Total virtual users = threads per engine × number of engine instances

If your JMeter script defines 100 threads and you provision 4 engine instances, you get 400 concurrent virtual users. The ramp-up period defines how long it takes to reach full load. A 120-second ramp-up with 100 threads means JMeter starts roughly one new virtual user every 1.2 seconds.
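The arithmetic is easy to get wrong once engines, threads, and ramp-up interact, so it can help to print the effective load shape before committing to a run. A minimal sketch in plain Node.js (no Azure dependencies; the function and file names are illustrative):

// load-shape.js - sanity-check total users and ramp rate before a test run
function loadShape(threadsPerEngine, engineInstances, rampUpSeconds) {
  var totalUsers = threadsPerEngine * engineInstances;
  return {
    totalUsers: totalUsers,
    // each engine ramps its own threads, so the spawn interval is per engine
    secondsPerNewUserPerEngine: rampUpSeconds / threadsPerEngine,
    usersPerSecondOverall: totalUsers / rampUpSeconds
  };
}

console.log(loadShape(100, 4, 120));
// { totalUsers: 400, secondsPerNewUserPerEngine: 1.2, usersPerSecondOverall: 3.33... }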

For Node.js APIs, I recommend starting with these baselines:

  • Smoke test: 10 virtual users, 60 seconds, 1 engine
  • Load test: 100-500 virtual users, 300 seconds, 2-5 engines
  • Stress test: 1000+ virtual users, 600 seconds, 10+ engines
  • Soak test: 200 virtual users, 3600+ seconds, 2 engines

Node.js is single-threaded by default. If your application runs on a single App Service instance without clustering, you will hit the event loop ceiling much earlier than a multi-threaded runtime. Start conservative and scale up.
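If you manage the process model yourself, Node's built-in cluster module is one way to use every core on an instance before scaling out. A minimal sketch (the ./app module path and its listen call are assumptions about your project layout):

// server.js - fork one worker per CPU core so a single instance is not event-loop bound
var cluster = require("cluster");
var os = require("os");

if (cluster.isPrimary || cluster.isMaster) {
  var cpuCount = os.cpus().length;
  for (var i = 0; i < cpuCount; i++) {
    cluster.fork();
  }
  cluster.on("exit", function(worker) {
    console.log("Worker " + worker.process.pid + " exited, restarting");
    cluster.fork();
  });
} else {
  // each worker runs its own copy of the Express app on the shared port
  require("./app"); // assumed to call app.listen(process.env.PORT)
}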

Test Criteria and Pass/Fail Rules

Azure Load Testing lets you define pass/fail criteria that automatically determine whether a test run succeeds or fails. This is essential for CI/CD integration because you want your pipeline to break when performance degrades.

You configure criteria in the test definition:

testId: node-api-test
testPlan: load-test.jmx
engineInstances: 2
failureCriteria:
  - avg(response_time_ms) > 500
  - percentage(error) > 5
  - p99(response_time_ms) > 2000
  - avg(requests_per_sec) < 100

The available metrics include:

  • response_time_ms: Response time in milliseconds
  • latency: Time to first byte
  • error: Error percentage
  • requests_per_sec: Throughput in requests per second

You can apply aggregate functions: avg(), p50(), p90(), p95(), p99(), min(), max(), and percentage(). These criteria are evaluated against the entire test run, and if any criterion is violated, the test is marked as failed.

For a typical Node.js API, I use these thresholds as a starting point:

  • Average response time under 500ms
  • P99 response time under 2 seconds
  • Error rate under 1%
  • Throughput above the expected baseline (varies by application)
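The same starting thresholds can be checked locally against a staging run before you commit them as failureCriteria. A sketch, where the stats object shape and field names are assumptions for illustration:

// check-thresholds.js - mirror the failureCriteria locally for quick pre-validation
var thresholds = {
  avgResponseTimeMs: 500,
  p99ResponseTimeMs: 2000,
  errorRatePercent: 1
};

function checkThresholds(stats) {
  // stats is an assumed shape: { avgResponseTimeMs, p99ResponseTimeMs, errorRatePercent }
  var violations = [];
  if (stats.avgResponseTimeMs > thresholds.avgResponseTimeMs) {
    violations.push("avg response time " + stats.avgResponseTimeMs + "ms exceeds " + thresholds.avgResponseTimeMs + "ms");
  }
  if (stats.p99ResponseTimeMs > thresholds.p99ResponseTimeMs) {
    violations.push("p99 response time " + stats.p99ResponseTimeMs + "ms exceeds " + thresholds.p99ResponseTimeMs + "ms");
  }
  if (stats.errorRatePercent > thresholds.errorRatePercent) {
    violations.push("error rate " + stats.errorRatePercent + "% exceeds " + thresholds.errorRatePercent + "%");
  }
  return violations; // empty array means the run would pass
}

module.exports = checkThresholds;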

Integrating with Azure Pipelines

This is where Azure Load Testing becomes genuinely powerful. You can run load tests as part of your CI/CD pipeline and gate deployments on performance criteria.

The AzureLoadTest@1 task is available in Azure DevOps. Here is a pipeline definition that runs a load test after deployment:

trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Deploy
    jobs:
      - job: DeployApp
        steps:
          - task: AzureWebApp@1
            inputs:
              appType: 'webAppLinux'
              appName: 'my-node-api'
              package: '$(System.DefaultWorkingDirectory)/dist'

  - stage: LoadTest
    dependsOn: Deploy
    jobs:
      - job: RunLoadTest
        steps:
          - task: AzureLoadTest@1
            inputs:
              azureSubscription: 'my-azure-connection'
              loadTestConfigFile: 'tests/load/config.yaml'
              loadTestResource: 'my-load-test-resource'
              resourceGroup: 'my-rg'
              env: |
                [
                  {
                    "name": "BASE_URL",
                    "value": "my-node-api.azurewebsites.net"
                  },
                  {
                    "name": "API_KEY",
                    "value": "$(API_KEY)"
                  }
                ]
          - publish: $(System.DefaultWorkingDirectory)/loadTest
            artifact: loadTestResults

The config.yaml file referenced above defines the test configuration:

version: v0.1
testId: node-api-regression
testPlan: load-test.jmx
engineInstances: 2
configurationFiles:
  - users.csv
failureCriteria:
  - avg(response_time_ms) > 500
  - percentage(error) > 5
  - p99(response_time_ms) > 2000
autoStop:
  errorPercentage: 80
  timeWindow: 60

The autoStop configuration is important. It kills the test early if the error rate exceeds 80% over a 60-second window, which saves you from burning engine minutes on a clearly broken deployment.

Monitoring Server-Side Metrics

When your application runs on Azure infrastructure, you can link it as an app component in your load test. This gives you server-side metrics correlated with the load test timeline, including CPU percentage, memory working set, HTTP queue length, and active connections.

To add server-side monitoring via the CLI:

az load test server-metric add \
  --load-test-resource my-load-test-resource \
  --test-id node-api-test \
  --metric-id cpu-metric \
  --metric-name "CpuPercentage" \
  --metric-namespace "Microsoft.Web/sites" \
  --aggregation Average \
  --app-component-type "Microsoft.Web/sites" \
  --app-component-id "/subscriptions/{sub}/resourceGroups/{rg}/providers/Microsoft.Web/sites/my-node-api"

For Node.js applications on App Service, the most telling metrics are:

  • CPU Percentage: Node.js is single-threaded, so high CPU on a single instance means you are hitting the event loop ceiling. If CPU exceeds 70% during load, consider scaling out or optimizing hot code paths.
  • Memory Working Set: Watch for memory growth over the test duration. A steady climb often indicates a memory leak in request handlers or middleware.
  • Http Queue Length: Non-zero queue length means requests are waiting for available connections. This is a strong signal that you need more instances.
  • Active Connections: Compare against your connection pool limits (database, Redis, external services).
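On the application side, the connection ceiling is usually whatever your own pools allow. A sketch of making those limits explicit in Node, assuming node-postgres (pg) for the database and a shared keep-alive agent for outbound HTTPS calls:

// connections.js - explicit limits make the Active Connections metric easier to interpret
var Pool = require("pg").Pool;
var https = require("https");

// database pool: maximum connections held by this Node process
var pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 10,                  // tune against your database tier's connection limit
  idleTimeoutMillis: 30000
});

// reuse sockets for outbound HTTPS calls instead of opening one per request
var keepAliveAgent = new https.Agent({
  keepAlive: true,
  maxSockets: 50            // upper bound on concurrent outbound connections
});

module.exports = { pool: pool, keepAliveAgent: keepAliveAgent };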

Custom JMeter Scripts for Node.js APIs

Real-world Node.js APIs require more than simple GET requests. Here is a JMeter script pattern for testing an authentication flow followed by authenticated API calls:

<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Authenticated Users">
  <intProp name="ThreadGroup.num_threads">100</intProp>
  <intProp name="ThreadGroup.ramp_time">120</intProp>
  <longProp name="ThreadGroup.duration">300</longProp>
  <boolProp name="ThreadGroup.scheduler">true</boolProp>
</ThreadGroup>
<hashTree>
  <!-- Login and extract token -->
  <HTTPSamplerProxy testname="POST /api/auth/login">
    <stringProp name="HTTPSampler.domain">${BASE_URL}</stringProp>
    <stringProp name="HTTPSampler.protocol">https</stringProp>
    <stringProp name="HTTPSampler.path">/api/auth/login</stringProp>
    <stringProp name="HTTPSampler.method">POST</stringProp>
    <boolProp name="HTTPSampler.postBodyRaw">true</boolProp>
    <elementProp name="HTTPsampler.Arguments" elementType="Arguments">
      <collectionProp name="Arguments.arguments">
        <elementProp elementType="HTTPArgument">
          <stringProp name="Argument.value">{"email":"${email}","password":"${password}"}</stringProp>
        </elementProp>
      </collectionProp>
    </elementProp>
  </HTTPSamplerProxy>
  <hashTree>
    <HeaderManager testname="JSON Headers">
      <collectionProp name="HeaderManager.headers">
        <elementProp elementType="Header">
          <stringProp name="Header.name">Content-Type</stringProp>
          <stringProp name="Header.value">application/json</stringProp>
        </elementProp>
      </collectionProp>
    </HeaderManager>
    <hashTree/>
    <JSONPostProcessor testname="Extract Token">
      <stringProp name="JSONPostProcessor.referenceNames">authToken</stringProp>
      <stringProp name="JSONPostProcessor.jsonPathExprs">$.token</stringProp>
    </JSONPostProcessor>
    <hashTree/>
  </hashTree>

  <!-- Authenticated API calls -->
  <HTTPSamplerProxy testname="GET /api/profile">
    <stringProp name="HTTPSampler.domain">${BASE_URL}</stringProp>
    <stringProp name="HTTPSampler.protocol">https</stringProp>
    <stringProp name="HTTPSampler.path">/api/profile</stringProp>
    <stringProp name="HTTPSampler.method">GET</stringProp>
  </HTTPSamplerProxy>
  <hashTree>
    <HeaderManager testname="Auth Headers">
      <collectionProp name="HeaderManager.headers">
        <elementProp elementType="Header">
          <stringProp name="Header.name">Authorization</stringProp>
          <stringProp name="Header.value">Bearer ${authToken}</stringProp>
        </elementProp>
        <elementProp elementType="Header">
          <stringProp name="Header.name">Content-Type</stringProp>
          <stringProp name="Header.value">application/json</stringProp>
        </elementProp>
      </collectionProp>
    </HeaderManager>
    <hashTree/>
  </hashTree>
</hashTree>

This pattern extracts a JWT from the login response and uses it in subsequent requests. For Node.js APIs using Express and JWT authentication, this simulates realistic user behavior.
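For context, the server-side counterpart is a login route that issues the token the JSONPostProcessor extracts. A minimal sketch assuming the jsonwebtoken package; validateUser is a hypothetical stand-in for your real credential check:

// auth.js - issues the JWT that the test plan extracts as $.token
var express = require("express");
var jwt = require("jsonwebtoken");
var router = express.Router();

// hypothetical credential check; replace with your real user store lookup
function validateUser(email, password) {
  if (email && password === process.env.TEST_PASSWORD) {
    return { id: "usr_001", email: email };
  }
  return null;
}

router.post("/api/auth/login", express.json(), function(req, res) {
  var user = validateUser(req.body.email, req.body.password);
  if (!user) {
    return res.status(401).json({ error: "Invalid credentials" });
  }
  var token = jwt.sign(
    { sub: user.id, email: user.email },
    process.env.JWT_SECRET,          // assumed to be set in app settings
    { expiresIn: "1h" }
  );
  res.json({ token: token });        // matches the $.token JSONPath in the test plan
});

module.exports = router;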

Parameterization and CSV Data Feeds

Hardcoding a single user credential in your load test is not realistic. You need multiple test users, different request payloads, and varied query parameters. JMeter's CSV Data Set Config handles this.

Create a users.csv file:

email,password,userId
[email protected],TestPass123,usr_001
[email protected],TestPass123,usr_002
[email protected],TestPass123,usr_003
[email protected],TestPass123,usr_004
[email protected],TestPass123,usr_005

Reference it in your JMeter script:

<CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet" testname="User Data">
  <stringProp name="filename">users.csv</stringProp>
  <stringProp name="variableNames">email,password,userId</stringProp>
  <stringProp name="delimiter">,</stringProp>
  <boolProp name="recycle">true</boolProp>
  <boolProp name="stopThread">false</boolProp>
  <stringProp name="shareMode">shareMode.all</stringProp>
</CSVDataSet>

When uploading to Azure Load Testing, include the CSV as a configuration file:

az load test file upload \
  --load-test-resource my-load-test-resource \
  --test-id node-api-test \
  --path users.csv \
  --file-type ADDITIONAL_ARTIFACTS

One thing to watch out for: when running with multiple engine instances, each engine gets a copy of the CSV file. If shareMode is set to shareMode.all, each engine iterates through the entire file independently. This means you might have multiple engines using the same user credentials simultaneously, which can cause issues if your API enforces single-session constraints.

For Node.js APIs that use session-based authentication or have per-user rate limits, split your CSV files so each engine gets a unique subset of users. You can do this by naming files users_1.csv, users_2.csv, etc., and using the ${__threadNum} function to pick the right file.
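A small Node.js helper can produce those per-engine subsets before you upload them; a sketch, assuming users.csv has a header row:

// split-csv.js - divide users.csv into one file per engine so credentials do not overlap
// Usage: node split-csv.js users.csv 4
var fs = require("fs");

var inputPath = process.argv[2];
var engineCount = parseInt(process.argv[3], 10) || 2;

var lines = fs.readFileSync(inputPath, "utf8").trim().split("\n");
var header = lines[0];
var rows = lines.slice(1);

for (var engine = 1; engine <= engineCount; engine++) {
  // round-robin assignment keeps each subset roughly the same size
  var subset = rows.filter(function(row, index) {
    return index % engineCount === engine - 1;
  });
  fs.writeFileSync("users_" + engine + ".csv", [header].concat(subset).join("\n") + "\n");
}

console.log("Wrote " + engineCount + " files from " + rows.length + " rows");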

Distributed Load Generation

A single Azure Load Testing engine can typically generate around 250 concurrent virtual users (depending on the complexity of your JMeter script). For higher loads, you scale out by increasing the engine instance count.

The engines are distributed across the Azure region where your load testing resource is located. If you need to test from multiple geographic regions to simulate global traffic, you will need to create separate load testing resources in different regions and run them concurrently.

Key considerations for distributed load:

  • Correlation: Each engine runs independently. Avoid JMeter elements that require cross-engine coordination, like the "Critical Section Controller."
  • Shared state: There is no shared state between engines. If your test logic depends on unique data per virtual user, use the CSV splitting approach mentioned above.
  • Aggregation: Results are automatically aggregated across all engines. The dashboard shows combined throughput, response times, and error rates.

For a Node.js API running on Azure App Service with autoscaling, I recommend this approach:

  1. Start with 2 engine instances (100-200 virtual users)
  2. Run for 5 minutes
  3. Check if App Service scaled out
  4. Increase to 4 engines (200-400 virtual users)
  5. Run for 10 minutes
  6. Continue scaling engines until you find the breaking point

This iterative approach helps you understand your application's scaling behavior and identify the point where autoscaling cannot keep up with incoming load.

Analyzing Test Results

After a test run completes, Azure Load Testing provides a dashboard with client-side and server-side metrics. The key metrics to examine:

Client-Side Metrics:

  • Requests/sec (throughput): Is it stable or declining over time? A declining throughput curve under constant load means your application is degrading.
  • Response time (p50, p90, p95, p99): The p99 is where problems hide. An acceptable p50 of 100ms can mask a p99 of 5 seconds.
  • Error percentage: Any errors above 0% need investigation. Common Node.js errors under load include ECONNRESET (connection dropped), ETIMEDOUT (upstream timeout), and 503 (server overloaded).

Server-Side Metrics (if linked):

  • CPU vs. throughput: Plot these together. If CPU plateaus while throughput drops, you likely have an I/O bottleneck (database, external API, file system).
  • Memory trend: A sawtooth pattern is normal (garbage collection). A steady upward trend is a leak; see the heap-logging sketch after this list.
  • HTTP queue length: Should stay at zero or near-zero. Sustained non-zero values mean you need more instances.
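To confirm a suspected leak from inside the application, log heap usage on an interval during a soak test and line it up with the platform metric. A minimal sketch using Node's built-in process.memoryUsage:

// memory-monitor.js - periodic heap logging to correlate with the Memory Working Set metric
function startMemoryMonitor(intervalMs) {
  intervalMs = intervalMs || 30000;
  var timer = setInterval(function() {
    var usage = process.memoryUsage();
    console.log(JSON.stringify({
      timestamp: new Date().toISOString(),
      rssMb: Math.round(usage.rss / 1048576),
      heapUsedMb: Math.round(usage.heapUsed / 1048576),
      heapTotalMb: Math.round(usage.heapTotal / 1048576)
    }));
  }, intervalMs);
  timer.unref(); // do not keep the process alive just for monitoring
  return timer;
}

module.exports = startMemoryMonitor;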

You can download raw results as a CSV for deeper analysis. Here is a quick Node.js script to parse the results:

var fs = require("fs");
var readline = require("readline");

function analyzeResults(csvPath) {
  var results = [];
  var rl = readline.createInterface({
    input: fs.createReadStream(csvPath),
    crlfDelay: Infinity
  });

  rl.on("line", function(line) {
    var parts = line.split(",");
    if (parts[0] === "timeStamp") return; // skip header

    results.push({
      timestamp: parseInt(parts[0]),
      elapsed: parseInt(parts[1]),
      label: parts[2],
      responseCode: parts[3],
      success: parts[7] === "true",
      bytes: parseInt(parts[9]),
      latency: parseInt(parts[14])
    });
  });

  rl.on("close", function() {
    var totalRequests = results.length;
    var errors = results.filter(function(r) { return !r.success; }).length;
    var responseTimes = results.map(function(r) { return r.elapsed; }).sort(function(a, b) { return a - b; });

    var p50 = responseTimes[Math.floor(totalRequests * 0.50)];
    var p90 = responseTimes[Math.floor(totalRequests * 0.90)];
    var p95 = responseTimes[Math.floor(totalRequests * 0.95)];
    var p99 = responseTimes[Math.floor(totalRequests * 0.99)];

    console.log("Total requests:", totalRequests);
    console.log("Error rate:", ((errors / totalRequests) * 100).toFixed(2) + "%");
    console.log("P50:", p50 + "ms");
    console.log("P90:", p90 + "ms");
    console.log("P95:", p95 + "ms");
    console.log("P99:", p99 + "ms");

    // Group by endpoint
    var byLabel = {};
    results.forEach(function(r) {
      if (!byLabel[r.label]) {
        byLabel[r.label] = { count: 0, errors: 0, totalTime: 0 };
      }
      byLabel[r.label].count++;
      byLabel[r.label].totalTime += r.elapsed;
      if (!r.success) byLabel[r.label].errors++;
    });

    console.log("\nPer-endpoint breakdown:");
    Object.keys(byLabel).forEach(function(label) {
      var stats = byLabel[label];
      var avgTime = (stats.totalTime / stats.count).toFixed(0);
      var errRate = ((stats.errors / stats.count) * 100).toFixed(2);
      console.log("  " + label + ": " + stats.count + " requests, avg " + avgTime + "ms, " + errRate + "% errors");
    });
  });
}

analyzeResults(process.argv[2]);

Run it against your downloaded results:

node analyze-results.js test-results.csv

Comparing Test Runs

One of the most valuable features in Azure Load Testing is the ability to compare test runs side by side. After you have a baseline run, every subsequent run can be compared to identify regressions or improvements.

In the Azure portal, select two or more test runs and click "Compare." The comparison view shows:

  • Response time trends overlaid on the same chart
  • Throughput deltas
  • Error rate changes
  • Server-side metric comparisons

For pipeline integration, you can programmatically compare results by downloading the summary statistics from each run:

az load test-run metrics list \
  --load-test-resource my-load-test-resource \
  --test-run-id run-20260213-001 \
  --metric-name "VirtualUsers,ResponseTime,ErrorPercentage,RequestsPerSecond" \
  --resource-group my-rg

Store baseline metrics in your repository and compare them in your pipeline. If the average response time increases by more than 20% compared to the baseline, fail the pipeline.
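A sketch of that gate as a Node.js script; the baseline.json and current.json shapes and field names are assumptions, populated from the summary metrics of each run:

// compare-baseline.js - fail the pipeline when key metrics regress past a tolerance
// Usage: node compare-baseline.js baseline.json current.json
var fs = require("fs");

var baseline = JSON.parse(fs.readFileSync(process.argv[2], "utf8"));
var current = JSON.parse(fs.readFileSync(process.argv[3], "utf8"));

// metrics where an increase beyond the tolerance counts as a regression
var checks = [
  { name: "avgResponseTimeMs", tolerance: 0.20 },
  { name: "p99ResponseTimeMs", tolerance: 0.20 },
  { name: "errorPercentage", tolerance: 0.0 }
];

var failed = false;
checks.forEach(function(check) {
  var before = baseline[check.name];
  var after = current[check.name];
  var limit = before * (1 + check.tolerance);
  if (after > limit) {
    console.error(check.name + " regressed: " + before + " -> " + after + " (limit " + limit.toFixed(1) + ")");
    failed = true;
  }
});

process.exit(failed ? 1 : 0);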

Cost Management for Load Testing

Azure Load Testing charges per virtual user hour (VUH). One VUH equals one virtual user running for one hour. The pricing is approximately $10 per 50 VUH at the time of writing, but check the current Azure pricing page for exact figures.

Here is how to estimate costs:

VUH = (virtual_users × test_duration_minutes) / 60

Example: 200 virtual users × 10 minutes = 2000 VU-minutes = 33.3 VUH
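The same arithmetic as a throwaway helper, useful when sizing a test matrix:

// vuh.js - estimate virtual user hours for a planned test run
function estimateVuh(virtualUsers, durationMinutes) {
  return (virtualUsers * durationMinutes) / 60;
}

console.log(estimateVuh(200, 10).toFixed(1) + " VUH"); // 33.3 VUH, matching the example above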

Cost management strategies:

  1. Use the autoStop feature: Kill tests early when error rates spike. No point burning VUH on a broken deployment.
  2. Run short tests in CI/CD: 5-minute tests with moderate load are sufficient for regression detection. Save long soak tests for scheduled nightly runs.
  3. Right-size engine instances: Each engine has capacity limits. Do not provision 10 engines if 2 will generate your target load.
  4. Use free tier first: Azure Load Testing includes a free monthly allowance. Use it for development and save paid VUH for production validation.
  5. Schedule expensive tests: Run full-scale load tests weekly or before major releases, not on every commit.

Complete Working Example

Here is a complete load testing setup for a Node.js Express API. This includes the application code, JMeter test plan, Azure Pipeline integration, and monitoring configuration.

The Node.js API

var express = require("express");
var app = express();
var port = process.env.PORT || 3000;

app.use(express.json());

// Simulated data store
var articles = [];
for (var i = 0; i < 100; i++) {
  articles.push({
    id: "art_" + String(i).padStart(3, "0"),
    title: "Article " + (i + 1),
    content: "Content for article " + (i + 1),
    author: "Author " + (i % 5),
    publishedAt: new Date(2026, 0, i + 1).toISOString()
  });
}

// Health check
app.get("/api/health", function(req, res) {
  res.json({ status: "healthy", uptime: process.uptime() });
});

// List articles with pagination
app.get("/api/articles", function(req, res) {
  var page = parseInt(req.query.page) || 1;
  var limit = parseInt(req.query.limit) || 10;
  var start = (page - 1) * limit;
  var pageArticles = articles.slice(start, start + limit);

  res.json({
    data: pageArticles,
    total: articles.length,
    page: page,
    totalPages: Math.ceil(articles.length / limit)
  });
});

// Get single article
app.get("/api/articles/:id", function(req, res) {
  var article = articles.find(function(a) { return a.id === req.params.id; });
  if (!article) {
    return res.status(404).json({ error: "Article not found" });
  }
  res.json(article);
});

// Search articles
app.get("/api/search", function(req, res) {
  var query = (req.query.q || "").toLowerCase();
  var results = articles.filter(function(a) {
    return a.title.toLowerCase().indexOf(query) !== -1;
  });
  res.json({ data: results, total: results.length });
});

// Create article
app.post("/api/articles", function(req, res) {
  var article = {
    id: "art_" + String(articles.length).padStart(3, "0"),
    title: req.body.title,
    content: req.body.content,
    author: req.body.author,
    publishedAt: new Date().toISOString()
  };
  articles.push(article);
  res.status(201).json(article);
});

app.listen(port, function() {
  console.log("API running on port " + port);
});

JMeter Test Plan (load-test.jmx)

Save the complete JMeter script shown in the "Creating Load Tests from JMeter Scripts" section above, with the addition of POST and search samplers. For brevity, the full XML is not repeated, but your thread group should include:

  1. GET /api/health (weight: 5%)
  2. GET /api/articles (weight: 40%)
  3. GET /api/articles/:id with parameterized IDs (weight: 35%)
  4. GET /api/search?q={term} with parameterized search terms (weight: 15%)
  5. POST /api/articles with JSON body (weight: 5%)

Use a Throughput Controller or Random Controller to distribute requests according to these weights.

Load Test Configuration (config.yaml)

version: v0.1
testId: node-api-load-test
testPlan: load-test.jmx
description: "Load test for Node.js Express API"
engineInstances: 3
configurationFiles:
  - users.csv
  - search-terms.csv
failureCriteria:
  - avg(response_time_ms) > 500
  - percentage(error) > 2
  - p99(response_time_ms) > 3000
  - p95(response_time_ms) > 1500
autoStop:
  errorPercentage: 50
  timeWindow: 30

Azure Pipeline (azure-pipelines.yml)

trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  azureSubscription: 'my-azure-service-connection'
  appName: 'my-node-api'
  resourceGroup: 'my-rg'
  loadTestResource: 'my-load-test-resource'

stages:
  - stage: Build
    jobs:
      - job: BuildApp
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '20.x'
          - script: npm ci
            displayName: 'Install dependencies'
          - script: npm test
            displayName: 'Run unit tests'
          - task: ArchiveFiles@2
            inputs:
              rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
              includeRootFolder: false
              archiveFile: '$(Build.ArtifactStagingDirectory)/app.zip'
          - publish: $(Build.ArtifactStagingDirectory)/app.zip
            artifact: app

  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployApp
        environment: 'staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureWebApp@1
                  inputs:
                    azureSubscription: $(azureSubscription)
                    appType: 'webAppLinux'
                    appName: $(appName)
                    package: '$(Pipeline.Workspace)/app/app.zip'

  - stage: LoadTest
    dependsOn: Deploy
    jobs:
      - job: RunLoadTest
        displayName: 'Run Load Test'
        steps:
          - task: AzureLoadTest@1
            displayName: 'Execute Load Test'
            inputs:
              azureSubscription: $(azureSubscription)
              loadTestConfigFile: 'tests/load/config.yaml'
              loadTestResource: $(loadTestResource)
              resourceGroup: $(resourceGroup)
              env: |
                [
                  {
                    "name": "BASE_URL",
                    "value": "$(appName).azurewebsites.net"
                  }
                ]
          - task: PublishPipelineArtifact@1
            condition: always()
            inputs:
              targetPath: '$(System.DefaultWorkingDirectory)/loadTest'
              artifactName: 'loadTestResults'
              publishLocation: 'pipeline'

Server-Side Monitoring Script

Use this script to set up server-side monitoring for your load test:

#!/bin/bash
RESOURCE_GROUP="my-rg"
LOAD_TEST_RESOURCE="my-load-test-resource"
TEST_ID="node-api-load-test"
APP_NAME="my-node-api"
SUBSCRIPTION_ID=$(az account show --query id -o tsv)

APP_RESOURCE_ID="/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Web/sites/$APP_NAME"

# Add CPU metric
az load test server-metric add \
  --load-test-resource $LOAD_TEST_RESOURCE \
  --test-id $TEST_ID \
  --metric-id "cpu" \
  --metric-name "CpuPercentage" \
  --metric-namespace "Microsoft.Web/sites" \
  --aggregation "Average" \
  --app-component-type "Microsoft.Web/sites" \
  --app-component-id $APP_RESOURCE_ID \
  --resource-group $RESOURCE_GROUP

# Add memory metric
az load test server-metric add \
  --load-test-resource $LOAD_TEST_RESOURCE \
  --test-id $TEST_ID \
  --metric-id "memory" \
  --metric-name "MemoryWorkingSet" \
  --metric-namespace "Microsoft.Web/sites" \
  --aggregation "Average" \
  --app-component-type "Microsoft.Web/sites" \
  --app-component-id $APP_RESOURCE_ID \
  --resource-group $RESOURCE_GROUP

# Add HTTP queue length
az load test server-metric add \
  --load-test-resource $LOAD_TEST_RESOURCE \
  --test-id $TEST_ID \
  --metric-id "http-queue" \
  --metric-name "HttpQueueLength" \
  --metric-namespace "Microsoft.Web/sites" \
  --aggregation "Average" \
  --app-component-type "Microsoft.Web/sites" \
  --app-component-id $APP_RESOURCE_ID \
  --resource-group $RESOURCE_GROUP

echo "Server-side metrics configured for test: $TEST_ID"

Common Issues and Troubleshooting

1. JMeter Script Works Locally but Fails in Azure

The most common cause is file path references. When JMeter runs locally, it resolves relative paths from your working directory. In Azure Load Testing, all files are uploaded to a flat directory. Change any file references in your JMX to use just the filename without path separators. Also check that your CSV files are uploaded as configuration files, not test plan files.

2. Connection Refused or Timeout Errors at High Load

If your Node.js API is behind an Azure Application Gateway or App Service, connection limits may become the bottleneck before your application code does. App Service has a default limit of 7,500 concurrent connections per instance. Check your App Service plan's connection limits and scale out before assuming the problem is in your Node.js code. Also verify that your JMeter script includes reasonable think time between requests. Without think time, each virtual user fires requests in a tight loop, generating unrealistic load.

3. Inconsistent Results Between Runs

This usually comes down to cold starts or external dependencies. If your Node.js application uses a database connection pool, the first run after a deployment will include pool initialization time. Run a brief warm-up test before your actual load test. Also check if your application depends on external APIs with rate limits or variable latency — these introduce noise into your results.
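A warm-up does not need its own JMeter plan; a short Node.js loop against the health and hottest read endpoints is enough. A sketch using the fetch API built into Node 18+ (BASE_URL is the same environment variable the test plan uses; the paths are illustrative):

// warmup.js - prime connection pools and caches before the real load test stage
// Usage: BASE_URL=myapp.azurewebsites.net node warmup.js
var baseUrl = "https://" + process.env.BASE_URL;
var paths = ["/api/health", "/api/articles"];

async function warmUp(iterations) {
  for (var i = 0; i < iterations; i++) {
    for (var p = 0; p < paths.length; p++) {
      var res = await fetch(baseUrl + paths[p]);
      console.log(paths[p] + " -> " + res.status);
    }
  }
}

warmUp(20).catch(function(err) {
  console.error("Warm-up failed:", err.message);
  process.exit(1);
});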

4. Engine Instance Errors or Provisioning Failures

Azure Load Testing provisions engine instances in the same region as your load testing resource. If the region is under capacity pressure, provisioning can fail. Use the az load test-run show command to check engine status. If engines fail to provision, try reducing the engine count or switching to a different Azure region. Also ensure your load testing resource's managed identity has the correct RBAC permissions.

5. CSV Data Not Being Read by Virtual Users

When running with multiple engines, verify that your CSV files are included in the configurationFiles list in your YAML config, not just uploaded separately. Each engine needs its own copy. Also check the shareMode setting — if it is shareMode.thread, each thread gets a unique row, but with multiple engines, different engines may read the same rows. For unique-per-user data, use shareMode.all and ensure your CSV has enough rows for all virtual users across all engines.

Best Practices

  • Establish a baseline before optimizing. Run your load test against the current production version and record the results. Without a baseline, you cannot objectively measure whether changes improve or degrade performance.

  • Test in an environment that mirrors production. Do not load test against a development App Service plan with a single B1 instance and expect the results to predict production behavior. Match the SKU, instance count, database tier, and network configuration.

  • Include think time in your scripts. Real users pause between actions. A 1-3 second constant timer or gaussian random timer between requests produces more realistic load patterns than a tight loop. Without think time, your test measures maximum throughput rather than realistic concurrent user behavior.

  • Version control your JMeter scripts and test configurations. Treat load test artifacts the same as application code. Store .jmx files, CSV data, and YAML configs in your repository so changes are tracked and reviewable.

  • Use environment variables for endpoint URLs and secrets. Never hardcode hostnames or API keys in your JMeter scripts. Use JMeter's ${__ENV(VAR_NAME)} function and pass values through the Azure Load Testing configuration. This makes the same script reusable across environments.

  • Monitor your application's dependencies, not just the application. A slow database query or an overwhelmed Redis cache will show up as high API response times. Link your database, cache, and messaging services as monitored components alongside your application.

  • Run load tests on a schedule, not just in CI/CD. A 5-minute test in your pipeline catches regressions, but a 30-minute soak test running nightly catches memory leaks and connection pool exhaustion that short tests miss.

  • Set conservative pass/fail criteria initially and tighten over time. Start with generous thresholds (e.g., p99 under 5 seconds) to avoid false positives, then ratchet them down as you optimize. Overly strict criteria from day one will lead to your team ignoring failures.

  • Clean up test data after load tests. If your load test creates records in a database, have a cleanup step that removes test data. Accumulated test data skews future results and can affect production if you are testing against a shared environment.
