Test Plan Templates and Configurations

A practical guide to creating reusable test plan templates and managing test configurations in Azure DevOps, covering plan structure, configuration variables, configuration matrices, clone operations, cross-browser testing setups, and REST API automation for template management.

Overview

Every sprint, someone creates a new test plan, builds out the same suite hierarchy, assigns the same configurations, and hopes they remembered every regression suite from last sprint. This manual process wastes time and introduces inconsistencies -- one sprint's test plan has a smoke test suite, the next one does not, and nobody notices until release day. Azure DevOps Test Plans supports cloning entire test plans including suites, test cases, and configurations, and with some REST API automation you can build a reusable template system that generates consistent test plans for every sprint.

Configurations are the other half of the equation. When you need to run the same test cases across Chrome, Firefox, Edge, and Safari on both Windows and macOS, configurations create that matrix without duplicating a single test case. Each combination becomes a separate test point with its own pass/fail tracking. I have seen teams go from ad-hoc browser testing ("can someone check if this works in Firefox?") to systematic cross-browser validation with per-configuration trend data -- all through proper configuration setup.

Prerequisites

  • An Azure DevOps organization with Azure Test Plans enabled
  • Basic + Test Plans access level for creating and managing test plans
  • At least one completed test plan to use as a template base
  • Node.js 18+ for the automation scripts
  • A Personal Access Token with Test Management scope
  • Familiarity with test suites (static, requirement-based, query-based)

Understanding Test Plan Structure

A test plan in Azure DevOps is a container with a specific structure:

Test Plan
├── Properties (name, area path, iteration, start/end dates)
├── Root Suite (automatically created)
│   ├── Child Suites (static, requirement-based, or query-based)
│   │   ├── Test Cases (work items)
│   │   └── Test Points (test case + configuration combinations)
│   └── More Child Suites...
└── Configurations (assigned at plan or suite level)

Key concepts:

  • Test Plan: The top-level container tied to an iteration path. One plan per sprint or release.
  • Test Suite: An organizational folder within the plan. Contains test cases.
  • Test Case: A work item defining steps to execute. Shared across plans.
  • Test Point: The intersection of a test case and a configuration. This is what a tester actually executes.
  • Configuration: A named combination of variables (browser, OS, environment) that defines the test context.

When you clone a test plan, you can clone the suites and optionally clone the test cases (creating new work items) or reference the same test cases (shared). For templates, referencing existing test cases is usually the right choice -- you want the same test cases across sprints, not duplicates that drift apart.

Configuration Variables and Values

Configuration variables define the dimensions of your test matrix. Set these up at the project level so they are available across all test plans.

Defining Configuration Variables

Navigate to Test Plans > Configurations in the left navigation. Create variables for each dimension you test:

Variable: Browser
Values: Chrome, Firefox, Edge, Safari

Variable: Operating System
Values: Windows 11, Windows 10, macOS Sonoma, macOS Ventura, iOS 17, Android 14, Ubuntu 22.04

Variable: Environment
Values: Development, Staging, Production

Variable: Screen Size
Values: Desktop (1920x1080), Tablet (1024x768), Mobile (375x667)

Variable: Locale
Values: en-US, de-DE, ja-JP, es-MX
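
If you would rather script this setup than click through the UI, the test plan variables REST endpoint can seed each variable. A minimal standalone sketch, assuming Node.js 18+ (for the global fetch) and the same PAT environment variable the automation script later in this article uses:

// create-variables.js -- seed project-level configuration variables.
// Standalone sketch: requires Node 18+ (global fetch) and a PAT in
// the AZURE_DEVOPS_PAT environment variable.
var ORG = "my-organization";
var PROJECT = "my-project";
var AUTH = "Basic " + Buffer.from(":" + process.env.AZURE_DEVOPS_PAT).toString("base64");

function createVariable(name, values) {
  return fetch(
    "https://dev.azure.com/" + ORG + "/" + PROJECT +
      "/_apis/testplan/variables?api-version=7.1",
    {
      method: "POST",
      headers: { "Content-Type": "application/json", Authorization: AUTH },
      body: JSON.stringify({
        name: name,
        description: "Created by setup script",
        values: values.map(function (v) { return { value: v }; }),
      }),
    }
  ).then(function (res) {
    if (!res.ok) throw new Error("Create '" + name + "' failed: " + res.status);
    return res.json();
  });
}

createVariable("Browser", ["Chrome", "Firefox", "Edge", "Safari"])
  .then(function (v) { console.log("Created variable " + v.id + ": " + v.name); })
  .catch(console.error);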

Creating Named Configurations

Named configurations combine specific variable values into a meaningful testing context. Create these on the same Test Plans > Configurations page:

Configuration: Desktop Chrome (Windows)
  Browser = Chrome
  Operating System = Windows 11
  Screen Size = Desktop (1920x1080)

Configuration: Desktop Firefox (Windows)
  Browser = Firefox
  Operating System = Windows 11
  Screen Size = Desktop (1920x1080)

Configuration: Mobile Safari (iOS)
  Browser = Safari
  Operating System = iOS 17
  Screen Size = Mobile (375x667)

Configuration: Desktop Edge (Windows)
  Browser = Edge
  Operating System = Windows 11
  Screen Size = Desktop (1920x1080)

Configuration: Tablet Chrome (Android)
  Browser = Chrome
  Operating System = Android 14
  Screen Size = Tablet (1024x768)

Configuration: Staging Environment
  Environment = Staging

Configuration: Production Smoke
  Environment = Production
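
As the matrix grows, creating each named configuration by hand gets tedious. A small sketch that computes the browser-by-OS cross product; each payload's name and values can be passed straight to the createConfiguration helper in the automation script later in this article, and the pairings here are illustrative:

// Build named-configuration payloads from a matrix definition.
var browsers = ["Chrome", "Firefox", "Edge"];
var systems = ["Windows 11", "macOS Sonoma"];

var payloads = [];
browsers.forEach(function (browser) {
  systems.forEach(function (os) {
    payloads.push({
      name: "Desktop " + browser + " (" + os + ")",
      values: [
        { name: "Browser", value: browser },
        { name: "Operating System", value: os },
        { name: "Screen Size", value: "Desktop (1920x1080)" },
      ],
    });
  });
});

payloads.forEach(function (p) {
  console.log(p.name); // e.g. "Desktop Chrome (Windows 11)"
});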

Assigning Configurations to Suites

Each test suite can have one or more default configurations. When a test case is added to a suite, test points are automatically created for each assigned configuration.

A suite with 10 test cases and 3 configurations produces 30 test points. The test runner tracks results per configuration, so you see exactly which browser-OS combination fails.

Not every suite needs the full configuration matrix. Apply configurations strategically (a scripted version of this mapping follows the list):

  • Smoke tests: All browser configurations (full cross-browser validation)
  • API tests: Single configuration (API behavior does not vary by browser)
  • Visual regression: All screen size configurations
  • Localization tests: All locale configurations
  • Integration tests: All environment configurations
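
A sketch of that mapping in code, building on the listSuites and assignConfigsToSuite helpers defined in the automation script later in this article. The configuration IDs are placeholders; look yours up with the script's "configs" command:

// Apply a suite-name-to-configuration map to a plan's suites.
var suiteConfigs = {
  "Smoke Tests": [11, 12, 13, 14],   // full browser coverage
  "API Tests": [15],                 // single configuration
  "Visual Regression": [21, 22, 23], // all screen sizes
};

function applySuiteConfigs(planId) {
  return listSuites(planId).then(function (suites) {
    var updates = suites
      .filter(function (s) { return suiteConfigs[s.name]; })
      .map(function (s) {
        return assignConfigsToSuite(planId, s.id, suiteConfigs[s.name]);
      });
    return Promise.all(updates);
  });
}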

Building a Test Plan Template System

Azure DevOps does not have a native "template" feature for test plans. The workaround is to maintain a "template" test plan that you clone each sprint. Here is how to build this system.

The Template Test Plan

Create a test plan named "TEMPLATE - Sprint Testing" with the suite hierarchy you want in every sprint:

TEMPLATE - Sprint Testing
├── Smoke Tests (Static Suite)
│   ├── [Contains high-priority smoke test cases]
│   └── Configurations: Desktop Chrome, Desktop Firefox, Mobile Safari
├── New Feature Tests (Static Suite)
│   └── [Empty - populated per sprint]
├── Bug Verification (Query-Based Suite)
│   └── Query: "WorkItemType = 'Test Case' AND Tags CONTAINS 'bug-verification' AND Iteration = @CurrentIteration"
├── Regression - Critical Path (Query-Based Suite)
│   └── Query: "WorkItemType = 'Test Case' AND Priority <= 2 AND Tags CONTAINS 'regression'"
├── Regression - Full (Query-Based Suite)
│   └── Query: "WorkItemType = 'Test Case' AND Tags CONTAINS 'regression'"
├── Accessibility (Static Suite)
│   └── [Contains accessibility test cases]
├── Exploratory Testing (Static Suite)
│   └── [Empty - charters added per sprint]
└── Performance (Static Suite)
    └── [Contains performance benchmarks]
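
The hierarchy above can also be scaffolded through the REST API instead of the UI. A sketch built on the makeRequest helper from the automation script below; it assumes the suites create endpoint takes the parent suite ID in the path (pass the plan's root suite ID, available on the plan's rootSuite property), so verify the route against your api-version before relying on it:

// Scaffold the template's static and query-based suites under the root.
function createSuite(planId, parentSuiteId, params) {
  return makeRequest(
    "POST",
    "/_apis/testplan/Plans/" + planId + "/suites/" + parentSuiteId +
      "?api-version=" + API_VERSION,
    params
  );
}

function scaffoldTemplate(planId, rootSuiteId) {
  var statics = ["Smoke Tests", "New Feature Tests", "Accessibility",
                 "Exploratory Testing", "Performance"];
  var work = statics.map(function (name) {
    return createSuite(planId, rootSuiteId, {
      suiteType: "staticTestSuite",
      name: name,
    });
  });
  work.push(createSuite(planId, rootSuiteId, {
    suiteType: "dynamicTestSuite", // query-based
    name: "Regression - Full",
    queryString:
      "SELECT [System.Id] FROM WorkItems WHERE [System.WorkItemType] = 'Test Case' " +
      "AND [System.Tags] CONTAINS 'regression'",
  }));
  return Promise.all(work);
}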

Manual Cloning

To clone a test plan manually:

  1. Open the Test Plans hub
  2. Click the ellipsis menu on the template plan
  3. Select "Copy test plan"
  4. Set the new plan name (e.g., "Sprint 48 Testing")
  5. Update the iteration path to the target sprint
  6. Choose "Reference existing test cases" (not clone)
  7. Click "Create"

The new plan gets all suites, configurations, and test case references from the template, and query-based suites re-run their queries in the new plan's context.

Automated Cloning with REST API

For consistent results, automate the cloning process:

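// test-plan-template.js
// Clones a template test plan, lists configurations, and inspects suite
// hierarchies via the Azure DevOps REST API. See Usage at the bottom.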
var https = require("https");
var url = require("url");

var ORG = "my-organization";
var PROJECT = "my-project";
var PAT = process.env.AZURE_DEVOPS_PAT;
var API_VERSION = "7.1";

var BASE_URL = "https://dev.azure.com/" + ORG + "/" + PROJECT;
var AUTH = "Basic " + Buffer.from(":" + PAT).toString("base64");

function makeRequest(method, path, body) {
  return new Promise(function (resolve, reject) {
    var fullUrl = path.indexOf("https://") === 0 ? path : BASE_URL + path;
    var parsed = url.parse(fullUrl);
    var options = {
      hostname: parsed.hostname,
      path: parsed.path,
      method: method,
      headers: {
        "Content-Type": "application/json",
        Authorization: AUTH,
      },
    };

    var req = https.request(options, function (res) {
      var data = "";
      res.on("data", function (chunk) {
        data += chunk;
      });
      res.on("end", function () {
        if (res.statusCode >= 200 && res.statusCode < 300) {
          resolve(data ? JSON.parse(data) : null);
        } else {
          reject(new Error(method + " " + path + ": " + res.statusCode + " " + data));
        }
      });
    });

    req.on("error", reject);
    if (body) {
      req.write(JSON.stringify(body));
    }
    req.end();
  });
}

function cloneTestPlan(templatePlanId, newPlanName, iterationPath) {
  console.log("Cloning test plan " + templatePlanId + " as '" + newPlanName + "'...");

  // Step 1: Get the template plan details
  return makeRequest(
    "GET",
    "/_apis/testplan/plans/" + templatePlanId + "?api-version=" + API_VERSION
  ).then(function (templatePlan) {
    console.log("Template plan: " + templatePlan.name);

    // Step 2: Create the clone operation
    var cloneParams = {
      cloneOptions: {
        relatedLinkComment: "Cloned from template: " + templatePlan.name,
        copyAllSuites: true,
        copyAncestorHierarchy: true,
        cloneRequirements: false,
      },
      destinationTestPlan: {
        name: newPlanName,
        areaPath: templatePlan.areaPath || PROJECT,
        iteration: iterationPath,
        project: { name: PROJECT },
      },
    };

    return makeRequest(
      "POST",
      "/_apis/testplan/Plans/" + templatePlanId + "/clone?api-version=" + API_VERSION,
      cloneParams
    );
  }).then(function (cloneResult) {
    // The clone response shape varies across api-versions; the operation
    // id may be top-level or nested, so check the likely locations.
    var opId = cloneResult.opId ||
      (cloneResult.cloneOperationResponse && cloneResult.cloneOperationResponse.opId) ||
      cloneResult.id;
    console.log("Clone operation started: " + opId);

    // Step 3: Poll for completion
    return pollCloneStatus(opId);
  });
}

function pollCloneStatus(operationId) {
  return new Promise(function (resolve, reject) {
    var attempts = 0;
    var maxAttempts = 30;

    function check() {
      attempts++;
      makeRequest(
        "GET",
        "/_apis/testplan/CloneOperation/" + operationId + "?api-version=" + API_VERSION
      ).then(function (status) {
        console.log("  Status: " + status.state + " (" + attempts + "/" + maxAttempts + ")");

        if (status.state === "Succeeded") {
          console.log("Clone completed. New plan ID: " + status.destinationTestPlan.id);
          resolve(status);
        } else if (status.state === "Failed") {
          reject(new Error("Clone failed: " + JSON.stringify(status)));
        } else if (attempts >= maxAttempts) {
          reject(new Error("Clone timed out after " + maxAttempts + " checks"));
        } else {
          setTimeout(check, 2000);
        }
      }).catch(reject);
    }

    check();
  });
}

function listConfigurations() {
  return makeRequest(
    "GET",
    "/_apis/testplan/configurations?api-version=" + API_VERSION
  ).then(function (response) {
    var configs = response.value || [];
    console.log("Available configurations:");
    configs.forEach(function (config) {
      var values = config.values.map(function (v) {
        return v.name + "=" + v.value;
      }).join(", ");
      console.log("  " + config.id + ": " + config.name + " (" + values + ")");
    });
    return configs;
  });
}

function createConfiguration(name, variables) {
  return makeRequest(
    "POST",
    "/_apis/testplan/configurations?api-version=" + API_VERSION,
    {
      name: name,
      description: "Created by template automation",
      values: variables,
      isDefault: false,
    }
  );
}

function assignConfigsToSuite(planId, suiteId, configIds) {
  return makeRequest(
    "PATCH",
    "/_apis/testplan/Plans/" + planId + "/Suites/" + suiteId + "?api-version=" + API_VERSION,
    {
      // Stop inheriting plan-level defaults so the explicit list applies.
      inheritDefaultConfigurations: false,
      defaultConfigurations: configIds.map(function (id) {
        return { id: id };
      }),
    }
  ).then(function () {
    console.log("Assigned " + configIds.length + " configuration(s) to suite " + suiteId);
  });
}

function listSuites(planId) {
  return makeRequest(
    "GET",
    "/_apis/testplan/Plans/" + planId + "/Suites?api-version=" + API_VERSION
  ).then(function (response) {
    var suites = response.value || [];
    console.log("Suites in plan " + planId + ":");
    suites.forEach(function (suite) {
      var indent = "  ".repeat(suite.level || 0);
      console.log(indent + suite.id + ": " + suite.name + " (" + suite.suiteType + ")");
    });
    return suites;
  });
}

// Main execution
var action = process.argv[2] || "help";
var templateId = parseInt(process.argv[3], 10) || 0;
var sprintName = process.argv[4] || "";

if (action === "clone" && templateId && sprintName) {
  var iterationPath = PROJECT + "\\" + sprintName;
  cloneTestPlan(templateId, sprintName + " Testing", iterationPath)
    .then(function (result) {
      console.log("\n=== Clone Complete ===");
      console.log("New plan ID: " + result.destinationTestPlan.id);
      console.log("Name: " + result.destinationTestPlan.name);
    })
    .catch(function (err) {
      console.error("Error: " + err.message);
      process.exit(1);
    });
} else if (action === "configs") {
  listConfigurations().catch(function (err) {
    console.error("Error: " + err.message);
    process.exit(1);
  });
} else if (action === "suites" && templateId) {
  listSuites(templateId).catch(function (err) {
    console.error("Error: " + err.message);
    process.exit(1);
  });
} else {
  console.log("Usage:");
  console.log("  node test-plan-template.js clone <templatePlanId> <sprintName>");
  console.log("  node test-plan-template.js configs");
  console.log("  node test-plan-template.js suites <planId>");
  console.log("");
  console.log("Examples:");
  console.log('  node test-plan-template.js clone 100 "Sprint 48"');
  console.log("  node test-plan-template.js configs");
  console.log("  node test-plan-template.js suites 847");
}

Running the clone:

$ node test-plan-template.js clone 100 "Sprint 48"
Cloning test plan 100 as 'Sprint 48 Testing'...
Template plan: TEMPLATE - Sprint Testing
Clone operation started: a3f2c9b1-4e7d-4a8f-b2c1-d5e6f7a8b9c0
  Status: Queued (1/30)
  Status: InProgress (2/30)
  Status: InProgress (3/30)
  Status: Succeeded (4/30)
Clone completed. New plan ID: 848

=== Clone Complete ===
New plan ID: 848
Name: Sprint 48 Testing

Configuration Matrix Design Patterns

Pattern 1: Tiered Browser Testing

Not every test case needs to run on every browser. Create configuration tiers:

Tier 1 (Smoke Tests): Chrome, Firefox, Edge, Safari
  → 4 configurations, every smoke test

Tier 2 (Feature Tests): Chrome, Firefox
  → 2 configurations, primary browser coverage

Tier 3 (API/Backend Tests): Chrome only
  → 1 configuration, browser-independent tests

This reduces total test points from N*4 to a manageable number while maintaining risk coverage. In my experience, Chrome alone surfaces the large majority of browser-specific issues, and Firefox catches most of what remains.
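
A quick calculation makes the savings concrete (the test case counts here are illustrative):

// Test points = test cases x configurations, summed per tier.
var tiers = [
  { name: "Smoke (Tier 1)",   cases: 40,  configs: 4 },
  { name: "Feature (Tier 2)", cases: 120, configs: 2 },
  { name: "API (Tier 3)",     cases: 200, configs: 1 },
];

var tiered = tiers.reduce(function (sum, t) { return sum + t.cases * t.configs; }, 0);
var flat = tiers.reduce(function (sum, t) { return sum + t.cases * 4; }, 0);

console.log("Tiered matrix: " + tiered + " test points"); // 600
console.log("Flat matrix:   " + flat + " test points");   // 1440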

Pattern 2: Environment Progression

Test against different environments at different stages:

Sprint Testing:     Environment = Development
Integration Test:   Environment = Staging
Release Validation: Environment = Production

Create separate test plans for each stage, each with its own environment configuration. The same test cases appear in all three plans but are executed against different environments.
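
If you automate plan creation, one template can seed every stage. A sketch reusing the cloneTestPlan function from the script above; the template ID and plan names are placeholders, and the clones run sequentially because concurrent clones of the same plan are risky (see the troubleshooting section below):

// Clone the template once per stage, one at a time. After each clone,
// assign that stage's environment configuration to its suites.
var TEMPLATE_ID = 100; // placeholder
var stages = [
  { name: "Sprint 48 Testing", iteration: PROJECT + "\\Sprint 48" },
  { name: "Sprint 48 Integration Testing", iteration: PROJECT + "\\Sprint 48" },
  { name: "Sprint 48 Release Validation", iteration: PROJECT + "\\Sprint 48" },
];

stages.reduce(function (chain, stage) {
  return chain.then(function () {
    return cloneTestPlan(TEMPLATE_ID, stage.name, stage.iteration);
  });
}, Promise.resolve()).catch(function (err) {
  console.error("Stage cloning failed: " + err.message);
});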

Pattern 3: Localization Matrix

For internationalized applications, create locale configurations:

Locale Configurations:
  English (US):  Locale = en-US
  German:        Locale = de-DE
  Japanese:      Locale = ja-JP
  Arabic (RTL):  Locale = ar-SA

Assign locale configurations to UI test suites. The tester switches the application locale before running each configuration's test points. Arabic testing is especially valuable because right-to-left layouts expose issues that never appear when every tested locale reads left-to-right.

Pattern 4: Device Matrix

For responsive applications:

Device Configurations:
  Desktop:   Screen Size = 1920x1080, Input = Mouse/Keyboard
  Laptop:    Screen Size = 1366x768, Input = Mouse/Keyboard/Touchpad
  Tablet:    Screen Size = 1024x768, Input = Touch
  Phone:     Screen Size = 375x667, Input = Touch

Managing Test Plan Lifecycle

Sprint Cadence

Establish a repeatable test plan lifecycle:

  1. Sprint Planning (Day 1): Clone template plan for the new sprint. Add requirement-based suites for sprint user stories.
  2. Development Phase (Days 2-8): Testers write test cases for new features and add them to the feature suites. Update regression suites with new test cases from previous sprint bugs.
  3. Testing Phase (Days 9-12): Execute test cases. Track progress through the Test Plans dashboard.
  4. Sprint Close (Day 13): Complete all test runs. Generate sprint test report. Archive the test plan.

Archiving Old Test Plans

Test plans accumulate over sprints. Old plans clutter the Test Plans hub but contain valuable historical data. Archive by:

  1. Setting the plan's end date to the sprint end date
  2. Changing the plan state to "Inactive"
  3. Moving the plan to an "Archive" iteration path

Do not delete old test plans -- their test results feed into Analytics trend data and test case history.
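
The state change can be scripted with the makeRequest helper from the automation script above. A hedged sketch -- it assumes the plan update endpoint accepts a partial body with state and endDate fields, so verify against your organization's api-version before relying on it:

// Mark a plan inactive and stamp its end date. Field names are assumed;
// confirm against the Test Plans update API for your api-version.
function archiveTestPlan(planId, endDateIso) {
  return makeRequest(
    "PATCH",
    "/_apis/testplan/plans/" + planId + "?api-version=" + API_VERSION,
    {
      state: "Inactive",
      endDate: endDateIso, // e.g. "2024-06-28T00:00:00Z"
    }
  ).then(function (plan) {
    console.log("Archived plan " + plan.id + ": " + plan.name);
  });
}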

Evolving the Template

The template plan should evolve as your testing practice matures:

  • Add new suites when you adopt new testing types (accessibility, security, performance)
  • Remove suites that are consistently empty or unused
  • Update query-based suite queries as your tagging and priority conventions change
  • Review configurations quarterly to add new browsers, retire old OS versions, or adjust screen sizes

Make template changes deliberately. A change to the template affects every future sprint's test plan.

Common Issues and Troubleshooting

Clone Operation Gets Stuck in "InProgress"

Large test plans with hundreds of test cases can take several minutes to clone. The clone operation runs asynchronously and can sometimes stall. If the status has not changed after 10 minutes, check the clone operation status via the REST API. If it has truly stalled, you may need to create a new clone operation. Do not attempt to clone the same plan multiple times concurrently -- this can create duplicate plans with partial data.

Configurations Not Applying to Existing Test Cases

When you add a new configuration to a suite that already contains test cases, existing test cases do not automatically get new test points for the new configuration. You need to explicitly assign the configuration to existing test points. Use the Test Plans UI: select the test cases in the suite, right-click, and select "Assign configurations." This regenerates test points for the selected configurations.

Query-Based Suites Return Wrong Results After Clone

Query-based suites that use @CurrentIteration or @Today macros will automatically adjust to the new plan's context, which is correct. However, suites using hard-coded iteration paths (e.g., Iteration Path = 'Project\Sprint 47') will still reference the old sprint. Always use @CurrentIteration in query-based suite definitions to make them portable across cloned plans.

Test Case References vs Clones

When cloning a test plan, the "Clone test cases" option creates new test case work items -- copies that are independent of the originals. This is almost never what you want. Changes to the original test case will not propagate to the clone. Always use "Reference existing test cases" unless you specifically need independent copies (e.g., for a fork in the test strategy).

Configuration Variables Not Showing for New Team Members

Configuration variables are project-scoped but require Test Plans access to view and manage. If a team member cannot see configurations when assigning them to suites, verify their access level includes Basic + Test Plans. Project Administrators can manage configuration variables regardless of their test plans access level.

Best Practices

  • Maintain a single template plan. Do not create multiple templates unless you have fundamentally different test plan structures for different teams or products. Multiple templates mean multiple places to update when your testing process changes.

  • Use query-based suites with @CurrentIteration for regression. This ensures regression suites automatically pick up test cases tagged for the current sprint without manual updates. Tag test cases with "regression" when they should be included in every sprint's regression suite.

  • Keep configurations orthogonal. Each configuration should represent one meaningful testing dimension. Do not create a configuration "Chrome on Windows Desktop" that combines browser, OS, and screen size when those dimensions could vary independently. Use separate variables and let the matrix combinations handle it.

  • Review configuration coverage quarterly. Browser market share changes, new OS versions ship, and screen resolutions evolve. Update your configurations to match your actual user base. Check analytics data to see which browsers and screen sizes your users actually use, not which ones you assume they use.

  • Automate test plan creation in your sprint workflow. Use the REST API script to clone the template plan as part of your sprint kickoff automation. This ensures consistent test plans and saves 15-20 minutes of manual setup per sprint.

  • Document your template plan's structure. Write a wiki page explaining each suite in the template, what configurations it uses, and when test cases should be added to it. New team members should understand the test plan structure without having to reverse-engineer it from the Azure DevOps UI.

  • Version control your automation scripts. The test plan template clone script, configuration setup script, and any other test management automation should live in your repository alongside the application code. This ensures the automation evolves with the project and survives team member turnover.

  • Set default configurations at the suite level, not the plan level. Different suites need different configurations. Smoke tests need full browser coverage; API tests need none. Setting defaults at the plan level forces all suites to use the same configurations, which creates unnecessary test points.
