Test Plan Templates and Configurations

Test plans without configurations are just wishlists. Configurations turn a vague "test the login page" into a concrete matrix of browsers, operating systems, and environments that your team actually executes against. In Azure DevOps, test plan templates and configurations give you a repeatable, automatable system for spinning up structured test cycles every sprint. This article covers how to build that system from the ground up, including a Node.js automation script that does the heavy lifting for you.

Prerequisites

Before diving in, make sure you have the following in place:

  • An Azure DevOps organization and project with Test Plans enabled (requires Basic + Test Plans license or equivalent)
  • Node.js v14 or later installed locally
  • A Personal Access Token (PAT) with Test Management (Read & Write) and Work Items (Read & Write) scopes
  • Familiarity with Azure DevOps REST APIs (we will use the azure-devops-node-api package)
  • At least one test suite with a handful of test cases to experiment with

Understanding Test Configurations in Azure DevOps

A test configuration in Azure DevOps is a named combination of configuration variables. Think of it as an answer to the question: "Under what conditions should this test case be executed?"

Every test configuration consists of one or more configuration variables, each with an assigned value. Azure DevOps ships with two default variables:

Variable            Default Values
------------------  ----------------------------------------------
Operating System    Windows 10, Windows 11, macOS, Linux
Browser             Microsoft Edge, Google Chrome, Firefox, Safari

A configuration might look like this:

  • Name: Windows 11 - Chrome
  • Variables: Operating System = Windows 11, Browser = Google Chrome

When you assign that configuration to a test case, Azure DevOps creates a test point — a unique combination of test case + configuration that a tester executes and marks as passed or failed. One test case with three configurations produces three test points. This is the foundation of matrix testing in Azure DevOps.
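
You can inspect the generated points directly over REST. Below is a minimal sketch, assuming the testplan Test Points endpoint (GET .../testplan/Plans/{planId}/Suites/{suiteId}/TestPoint) and a plan and suite that already exist; listTestPoints and the IDs are illustrative:

var fetch = require("node-fetch");

var orgUrl = "https://dev.azure.com/your-org";
var token = process.env.AZURE_DEVOPS_PAT;

// List every test point (test case × configuration) in a suite
function listTestPoints(projectName, planId, suiteId) {
    var url = orgUrl + "/" + projectName +
        "/_apis/testplan/Plans/" + planId +
        "/Suites/" + suiteId + "/TestPoint?api-version=7.1";

    return fetch(url, {
        headers: {
            "Authorization": "Basic " + Buffer.from(":" + token).toString("base64")
        }
    }).then(function(res) { return res.json(); })
      .then(function(data) {
          data.value.forEach(function(point) {
              // Each point pairs one test case with one configuration
              console.log(point.testCaseReference.name, "→", point.configuration.name);
          });
          return data.value;
      });
}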

Creating and Managing Configurations

Through the UI

Navigate to Test Plans > Configurations in your project. From here you can create new configurations, edit existing ones, and manage the variable catalog.

Through the REST API

The UI works fine for small teams, but it does not scale. Here is how to create a configuration programmatically:

var azdev = require("azure-devops-node-api");

var orgUrl = "https://dev.azure.com/your-org";
var token = process.env.AZURE_DEVOPS_PAT;

var authHandler = azdev.getPersonalAccessTokenHandler(token);
var connection = new azdev.WebApi(orgUrl, authHandler);

function createConfiguration(projectName, configName, variables) {
    return connection.getTestApi().then(function(testApi) {
        var config = {
            name: configName,
            description: "Auto-generated configuration",
            isDefault: false,
            values: variables.map(function(v) {
                return { name: v.name, value: v.value };
            })
        };
        return testApi.createTestConfiguration(config, projectName);
    });
}

// Usage
createConfiguration("MyProject", "Windows 11 - Chrome", [
    { name: "Operating System", value: "Windows 11" },
    { name: "Browser", value: "Google Chrome" }
]).then(function(result) {
    console.log("Created configuration:", result.id, result.name);
}).catch(function(err) {
    console.error("Failed:", err.message);
});

Configuration Variables

Configuration variables are the building blocks. Before you can create configurations, you need the variables defined at the project level. Azure DevOps provides the defaults, but real projects need more:

Variable            Example Values
------------------  ------------------------------------------
Operating System    Windows 11, macOS Sonoma, Ubuntu 22.04
Browser             Chrome, Firefox, Edge, Safari
Environment         Dev, Staging, Production
Device Type         Desktop, Tablet, Mobile
Database            PostgreSQL 15, SQL Server 2022, MySQL 8
API Version         v1, v2, v3

You create custom variables through the REST API by posting to the test plan variables endpoint:

function createConfigurationVariable(projectName, variableName, values) {
    var url = orgUrl + "/" + projectName + "/_apis/testplan/variables?api-version=7.1";

    var requestBody = {
        name: variableName,
        description: "Custom variable for " + variableName,
        values: values // plain array of allowed value strings
    };

    // Using node-fetch to call the REST endpoint directly
    var fetch = require("node-fetch");
    return fetch(url, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Basic " + Buffer.from(":" + token).toString("base64")
        },
        body: JSON.stringify(requestBody)
    }).then(function(res) {
        return res.json();
    });
}

createConfigurationVariable("MyProject", "Environment", [
    "Development", "Staging", "Production"
]).then(function(result) {
    console.log("Variable created:", result.name);
});

Assigning Configurations to Test Cases

Configurations are assigned at the test suite level, not the individual test case level. When you assign configurations to a suite, every test case in that suite gets test points for each configuration.

function assignConfigurations(projectName, planId, suiteId, configIds) {
    // Setting a suite's default configurations generates test points for
    // every test case already in the suite
    var url = orgUrl + "/" + projectName +
        "/_apis/testplan/Plans/" + planId +
        "/suites/" + suiteId + "?api-version=7.1";

    var fetch = require("node-fetch");
    return fetch(url, {
        method: "PATCH",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Basic " + Buffer.from(":" + token).toString("base64")
        },
        body: JSON.stringify({
            inheritDefaultConfigurations: false,
            defaultConfigurations: configIds.map(function(id) {
                return { id: id };
            })
        })
    }).then(function(res) { return res.json(); });
}

There is a subtlety here that trips people up. If you have a suite with 10 test cases and you assign 4 configurations, you now have 40 test points. If a tester marks "Windows 11 - Chrome" as passed for test case #3, the other three configurations for that same test case remain in their current state. This granularity is the whole point of configurations.

Bulk Configuration Assignment

For large test plans with dozens of suites, assigning configurations one at a time is painful. The REST API supports bulk operations:

function bulkAssignConfigurations(projectName, planId, suiteIds, configIds) {
    var fetch = require("node-fetch");
    var promises = suiteIds.map(function(suiteId) {
        var url = orgUrl + "/" + projectName +
            "/_apis/testplan/Plans/" + planId +
            "/suites/" + suiteId + "?api-version=7.1";

        return fetch(url, {
            method: "PATCH",
            headers: {
                "Content-Type": "application/json",
                "Authorization": "Basic " + Buffer.from(":" + token).toString("base64")
            },
            body: JSON.stringify({
                inheritDefaultConfigurations: false,
                defaultConfigurations: configIds.map(function(id) {
                    return { id: id };
                })
            })
        }).then(function(res) { return res.json(); });
    });

    return Promise.all(promises);
}

Test Plan Settings and Hierarchies

A test plan in Azure DevOps has a strict hierarchy:

Test Plan
├── Static Suite (manual grouping)
│   ├── Test Case 1 (with configurations → test points)
│   └── Test Case 2
├── Requirement-based Suite (linked to user stories)
│   └── Test cases auto-populated from linked work items
└── Query-based Suite (dynamic, based on work item query)
    └── Test cases auto-populated from query results

Each test plan has settings that control its behavior:

  • Iteration path — ties the plan to a specific sprint
  • Area path — scopes the plan to a team or feature area
  • Build/release pipeline — associates test results with specific builds
  • Start and end dates — defines the testing window
  • State — Active or Inactive

These settings matter because they determine how test results roll up in reporting. A test plan tied to Sprint 45 with a specific build pipeline gives you traceability from requirement to test to build. That traceability is what auditors and release managers care about.
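
These settings can also be updated over REST after the plan exists. Here is a hedged sketch, assuming the testplan Plans update endpoint accepts a body with a name and a buildDefinition reference; linkPlanToBuild and the definition ID are illustrative:

// Associate an existing plan with a build pipeline for traceability
function linkPlanToBuild(projectName, planId, planName, buildDefinitionId) {
    var fetch = require("node-fetch");
    var url = orgUrl + "/" + projectName +
        "/_apis/testplan/plans/" + planId + "?api-version=7.1";

    return fetch(url, {
        method: "PATCH",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Basic " + Buffer.from(":" + token).toString("base64")
        },
        body: JSON.stringify({
            name: planName,                            // update calls typically require the name
            buildDefinition: { id: buildDefinitionId } // pipeline definition ID (illustrative)
        })
    }).then(function(res) { return res.json(); });
}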

Cloning Test Plans for New Releases

The most common workflow I see in mature teams is cloning. You build a comprehensive test plan once, then clone it for each sprint or release. Cloning preserves:

  • The suite hierarchy
  • Test case references (not copies — they point to the same work items)
  • Configuration assignments
What cloning does not preserve is outcome history: all test points in the clone start as "Not Run".

function cloneTestPlan(projectName, sourcePlanId, newPlanName, iterationPath) {
    var url = orgUrl + "/" + projectName +
        "/_apis/testplan/Plans/" + sourcePlanId +
        "/cloneoperation?api-version=7.1";

    var body = {
        destinationTestPlan: {
            name: newPlanName,
            areaPath: projectName,
            iteration: iterationPath
        },
        options: {
            copyAncestorHierarchy: true,
            copyAllSuites: true,
            cloneRequirements: false,
            relatedLinkComment: "Cloned from plan " + sourcePlanId
        },
        suiteIds: [] // copyAllSuites above handles cloning every suite
    };

    var fetch = require("node-fetch");
    return fetch(url, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Basic " + Buffer.from(":" + token).toString("base64")
        },
        body: JSON.stringify(body)
    }).then(function(res) {
        return res.json();
    });
}

cloneTestPlan("MyProject", 42, "Sprint 46 - Regression", "MyProject\\Sprint 46")
    .then(function(result) {
        console.log("Clone operation started:", result.opId);
        console.log("Status:", result.state);
    });

Cloning is an asynchronous operation. The API returns an operation ID that you poll until the clone completes (the pollCloneStatus function in the troubleshooting section below shows how). For large plans with hundreds of test cases, this can take a few minutes.

Managing Test Environments

Test environments in Azure DevOps are separate from configurations, but they work together. An environment represents a physical or virtual deployment target — a staging server, a device farm, a browser stack instance.

You link environments to test runs (not test plans) when you execute tests. The combination of configuration + environment gives you full context: "This test case was run on Windows 11 - Chrome against the Staging environment."

function createTestEnvironment(projectName, environmentName) {
    return connection.getTestApi().then(function(testApi) {
        var environment = {
            name: environmentName,
            description: "Test environment: " + environmentName
        };
        return testApi.createTestEnvironment(environment, projectName);
    });
}
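
At execution time the pieces come together on the test run. The sketch below, assuming the classic Runs - Create endpoint (_apis/test/runs), starts a manual run against specific test points; since each point already carries its configuration, the run report shows the full context:

// Create a manual test run against specific test points
function createTestRun(projectName, planId, runName, pointIds) {
    var fetch = require("node-fetch");
    var url = orgUrl + "/" + projectName + "/_apis/test/runs?api-version=7.1";

    return fetch(url, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Basic " + Buffer.from(":" + token).toString("base64")
        },
        body: JSON.stringify({
            name: runName,                 // e.g. "Sprint 46 - Staging run"
            plan: { id: String(planId) },
            pointIds: pointIds,            // each point carries its configuration
            automated: false
        })
    }).then(function(res) { return res.json(); });
}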

Configuration Matrices

When you have multiple configuration variables, the full matrix can explode quickly. Three operating systems times four browsers times three environments gives you 36 configurations. Nobody wants to test 36 variations of every test case.

The practical approach is to use a risk-based subset:

  1. Full matrix for critical paths (login, checkout, payment)
  2. Reduced matrix for standard features (pairwise testing)
  3. Single configuration for low-risk areas

You can model this by creating different suites within your test plan and assigning different configuration sets to each suite:

var criticalConfigs = [1, 2, 3, 4, 5, 6]; // Full browser × OS matrix
var standardConfigs = [1, 3, 5];          // Chrome-Win, Firefox-Mac, Edge-Linux
var lowRiskConfigs  = [1];                // Chrome-Win only

function assignConfigsBySuiteRisk(projectName, planId, suiteRiskMap) {
    var promises = Object.keys(suiteRiskMap).map(function(suiteId) {
        var risk = suiteRiskMap[suiteId];
        var configs;

        if (risk === "critical") {
            configs = criticalConfigs;
        } else if (risk === "standard") {
            configs = standardConfigs;
        } else {
            configs = lowRiskConfigs;
        }

        return bulkAssignConfigurations(projectName, planId, [suiteId], configs);
    });

    return Promise.all(promises);
}

Test Plan Templates via REST API

Azure DevOps does not have a first-class "test plan template" feature. There is no template object in the data model. But you can build your own template system using a JSON definition and the REST API.

The pattern is straightforward: define your template as a JSON structure, then write code that reads the template and creates the plan, suites, and configuration assignments.

{
    "name": "Sprint Regression - {sprintName}",
    "iteration": "{iterationPath}",
    "suites": [
        {
            "name": "Smoke Tests",
            "type": "static",
            "risk": "critical",
            "testCaseIds": [1001, 1002, 1003, 1004, 1005]
        },
        {
            "name": "Authentication",
            "type": "static",
            "risk": "critical",
            "testCaseIds": [2001, 2002, 2003, 2004]
        },
        {
            "name": "User Management",
            "type": "requirement",
            "risk": "standard",
            "requirementIds": [3001, 3002]
        },
        {
            "name": "Reporting",
            "type": "static",
            "risk": "low",
            "testCaseIds": [4001, 4002, 4003]
        }
    ],
    "configurations": {
        "critical": [
            { "name": "Win11-Chrome", "vars": { "OS": "Windows 11", "Browser": "Chrome" } },
            { "name": "Win11-Firefox", "vars": { "OS": "Windows 11", "Browser": "Firefox" } },
            { "name": "macOS-Safari", "vars": { "OS": "macOS Sonoma", "Browser": "Safari" } },
            { "name": "macOS-Chrome", "vars": { "OS": "macOS Sonoma", "Browser": "Chrome" } }
        ],
        "standard": [
            { "name": "Win11-Chrome", "vars": { "OS": "Windows 11", "Browser": "Chrome" } },
            { "name": "macOS-Safari", "vars": { "OS": "macOS Sonoma", "Browser": "Safari" } }
        ],
        "low": [
            { "name": "Win11-Chrome", "vars": { "OS": "Windows 11", "Browser": "Chrome" } }
        ]
    }
}

Automating Test Plan Creation for Sprints

The real payoff comes when you wire this into your sprint cadence. Instead of manually creating test plans at the start of each sprint, you run a script (or trigger it from a pipeline) that reads the template and builds everything.

Here is a Node.js script that handles the sprint iteration lookup automatically:

var azdev = require("azure-devops-node-api");
var fetch = require("node-fetch");

var orgUrl = "https://dev.azure.com/your-org";
var token = process.env.AZURE_DEVOPS_PAT;
var project = process.env.AZURE_DEVOPS_PROJECT || "MyProject";

var authHandler = azdev.getPersonalAccessTokenHandler(token);
var connection = new azdev.WebApi(orgUrl, authHandler);

function getHeaders() {
    return {
        "Content-Type": "application/json",
        "Authorization": "Basic " + Buffer.from(":" + token).toString("base64")
    };
}

function getCurrentIteration() {
    var url = orgUrl + "/" + project +
        "/_apis/work/teamsettings/iterations?$timeframe=current&api-version=7.1";

    return fetch(url, { headers: getHeaders() })
        .then(function(res) { return res.json(); })
        .then(function(data) {
            if (data.value && data.value.length > 0) {
                return data.value[0];
            }
            throw new Error("No current iteration found");
        });
}

function createTestPlan(planName, iterationPath, startDate, endDate) {
    var url = orgUrl + "/" + project + "/_apis/testplan/plans?api-version=7.1";

    var body = {
        name: planName,
        areaPath: project,
        iteration: iterationPath,
        startDate: startDate,
        endDate: endDate
    };

    return fetch(url, {
        method: "POST",
        headers: getHeaders(),
        body: JSON.stringify(body)
    }).then(function(res) { return res.json(); });
}

function createTestSuite(planId, parentSuiteId, suiteName, suiteType) {
    var url = orgUrl + "/" + project +
        "/_apis/testplan/Plans/" + planId +
        "/suites?api-version=7.1";

    var body = {
        name: suiteName,
        suiteType: suiteType || "staticTestSuite",
        parentSuite: { id: parentSuiteId }
    };

    return fetch(url, {
        method: "POST",
        headers: getHeaders(),
        body: JSON.stringify(body)
    }).then(function(res) { return res.json(); });
}

function addTestCasesToSuite(planId, suiteId, testCaseIds) {
    var url = orgUrl + "/" + project +
        "/_apis/testplan/Plans/" + planId +
        "/Suites/" + suiteId + "/TestCase?api-version=7.1";

    // Body is an array of entries keyed by test case work item ID
    var body = testCaseIds.map(function(id) {
        return { workItem: { id: id } };
    });

    return fetch(url, {
        method: "POST",
        headers: getHeaders(),
        body: JSON.stringify(body)
    }).then(function(res) { return res.json(); });
}

function ensureConfiguration(configName, variables) {
    var url = orgUrl + "/" + project +
        "/_apis/testplan/configurations?api-version=7.1";

    return fetch(url, { headers: getHeaders() })
        .then(function(res) { return res.json(); })
        .then(function(data) {
            var existing = data.value.find(function(c) {
                return c.name === configName;
            });
            if (existing) {
                return existing;
            }

            var body = {
                name: configName,
                values: Object.keys(variables).map(function(key) {
                    return { name: key, value: variables[key] };
                })
            };

            // A POST to the same endpoint creates the configuration
            return fetch(url, {
                method: "POST",
                headers: getHeaders(),
                body: JSON.stringify(body)
            }).then(function(res) { return res.json(); });
        });
}

function assignConfigsToSuite(planId, suiteId, configIds) {
    var url = orgUrl + "/" + project +
        "/_apis/testplan/Plans/" + planId +
        "/suites/" + suiteId + "?api-version=7.1";

    // Setting defaultConfigurations on the suite generates test points
    // for every test case in it
    var body = {
        inheritDefaultConfigurations: false,
        defaultConfigurations: configIds.map(function(id) {
            return { id: id };
        })
    };

    return fetch(url, {
        method: "PATCH",
        headers: getHeaders(),
        body: JSON.stringify(body)
    }).then(function(res) { return res.json(); });
}

// --- Main orchestration ---

function createSprintTestPlan(template) {
    var plan;
    var configMap = {};

    console.log("Fetching current iteration...");

    return getCurrentIteration()
        .then(function(iteration) {
            var planName = template.name.replace("{sprintName}", iteration.name);
            var iterPath = iteration.path;
            var startDate = iteration.attributes.startDate;
            var endDate = iteration.attributes.finishDate;

            console.log("Creating test plan:", planName);
            return createTestPlan(planName, iterPath, startDate, endDate);
        })
        .then(function(createdPlan) {
            plan = createdPlan;
            console.log("Test plan created with ID:", plan.id);

            // Ensure all configurations exist
            var allConfigs = [];
            ["critical", "standard", "low"].forEach(function(risk) {
                if (template.configurations[risk]) {
                    template.configurations[risk].forEach(function(cfg) {
                        allConfigs.push(cfg);
                    });
                }
            });

            // Deduplicate by name
            var seen = {};
            var unique = allConfigs.filter(function(cfg) {
                if (seen[cfg.name]) return false;
                seen[cfg.name] = true;
                return true;
            });

            return unique.reduce(function(chain, cfg) {
                return chain.then(function() {
                    return ensureConfiguration(cfg.name, cfg.vars)
                        .then(function(result) {
                            configMap[cfg.name] = result.id;
                            console.log("Configuration ready:", cfg.name, "→", result.id);
                        });
                });
            }, Promise.resolve());
        })
        .then(function() {
            // Create suites sequentially
            var rootSuiteId = plan.rootSuite.id;

            return template.suites.reduce(function(chain, suiteDef) {
                return chain.then(function() {
                    console.log("Creating suite:", suiteDef.name);
                    return createTestSuite(plan.id, rootSuiteId, suiteDef.name, suiteDef.type)
                        .then(function(suite) {
                            var suiteId = suite.id;

                            // Add test cases if static suite
                            var addCases = Promise.resolve();
                            if (suiteDef.testCaseIds && suiteDef.testCaseIds.length > 0) {
                                addCases = addTestCasesToSuite(plan.id, suiteId, suiteDef.testCaseIds);
                            }

                            return addCases.then(function() {
                                // Assign configurations based on risk level
                                var risk = suiteDef.risk || "low";
                                var configDefs = template.configurations[risk] || [];
                                var configIds = configDefs.map(function(cfg) {
                                    return configMap[cfg.name];
                                }).filter(Boolean);

                                if (configIds.length > 0) {
                                    console.log("Assigning", configIds.length, "configs to", suiteDef.name);
                                    return assignConfigsToSuite(plan.id, suiteId, configIds);
                                }
                            });
                        });
                });
            }, Promise.resolve());
        })
        .then(function() {
            console.log("\nTest plan created successfully!");
            console.log("Plan ID:", plan.id);
            console.log("URL:", orgUrl + "/" + project + "/_testPlans/define?planId=" + plan.id);
            return plan;
        });
}

// --- Template definition ---

var sprintTemplate = {
    name: "Sprint Regression - {sprintName}",
    suites: [
        {
            name: "Smoke Tests",
            type: "staticTestSuite",
            risk: "critical",
            testCaseIds: [1001, 1002, 1003, 1004, 1005]
        },
        {
            name: "Authentication & Authorization",
            type: "staticTestSuite",
            risk: "critical",
            testCaseIds: [2001, 2002, 2003, 2004]
        },
        {
            name: "Core Workflows",
            type: "staticTestSuite",
            risk: "standard",
            testCaseIds: [3001, 3002, 3003, 3004, 3005]
        },
        {
            name: "Settings & Preferences",
            type: "staticTestSuite",
            risk: "low",
            testCaseIds: [4001, 4002, 4003]
        }
    ],
    configurations: {
        critical: [
            { name: "Win11-Chrome", vars: { "Operating System": "Windows 11", "Browser": "Chrome" } },
            { name: "Win11-Firefox", vars: { "Operating System": "Windows 11", "Browser": "Firefox" } },
            { name: "Win11-Edge", vars: { "Operating System": "Windows 11", "Browser": "Edge" } },
            { name: "macOS-Safari", vars: { "Operating System": "macOS Sonoma", "Browser": "Safari" } },
            { name: "macOS-Chrome", vars: { "Operating System": "macOS Sonoma", "Browser": "Chrome" } },
            { name: "Ubuntu-Firefox", vars: { "Operating System": "Ubuntu 22.04", "Browser": "Firefox" } }
        ],
        standard: [
            { name: "Win11-Chrome", vars: { "Operating System": "Windows 11", "Browser": "Chrome" } },
            { name: "macOS-Safari", vars: { "Operating System": "macOS Sonoma", "Browser": "Safari" } },
            { name: "Ubuntu-Firefox", vars: { "Operating System": "Ubuntu 22.04", "Browser": "Firefox" } }
        ],
        low: [
            { name: "Win11-Chrome", vars: { "Operating System": "Windows 11", "Browser": "Chrome" } }
        ]
    }
};

// Run it
createSprintTestPlan(sprintTemplate)
    .catch(function(err) {
        console.error("Error creating test plan:", err.message);
        process.exit(1);
    });

Run it with:

export AZURE_DEVOPS_PAT="your-pat-here"
export AZURE_DEVOPS_PROJECT="MyProject"
node create-sprint-plan.js

Cross-Project Test Plan Sharing

Azure DevOps does not support sharing test plans across projects directly. Test plans live within a single project. But there are practical workarounds:

  1. Shared test case work items — If you use a shared project for test assets, you can reference test case work items from that project in test suites across other projects using requirement-based suites with cross-project links.

  2. Template-driven creation — Store your JSON templates in a shared repository. Each project pulls the template and generates its own local test plan. This is the approach I recommend because it gives each team ownership of their test plan while maintaining structural consistency.

  3. REST API cloning — Write a script that reads a test plan from Project A, extracts the structure and test case IDs, and recreates it in Project B. This works but requires test cases to exist in both projects (or use shared queries).

function exportTestPlanStructure(sourceProject, planId) {
    var url = orgUrl + "/" + sourceProject +
        "/_apis/testplan/Plans/" + planId +
        "/suites?asTreeView=true&api-version=7.1";

    return fetch(url, { headers: getHeaders() })
        .then(function(res) { return res.json(); })
        .then(function(data) {
            // Transform into a portable template format
            var template = {
                name: "Cloned - " + data.value[0].name,
                suites: data.value.map(function(suite) {
                    return {
                        name: suite.name,
                        type: suite.suiteType,
                        testCaseCount: suite.testCaseCount
                    };
                })
            };
            return template;
        });
}

Common Issues and Troubleshooting

1. "The test configuration does not exist" After Cloning

When you clone a test plan to a different project, configurations do not come along. The cloned plan references configuration IDs that exist in the source project but not the target. Fix: Create matching configurations in the target project first, then update the cloned suites to point to the new configuration IDs. The automation script above handles this with ensureConfiguration.

2. Test Points Not Appearing After Configuration Assignment

Configurations assigned to an empty suite produce zero test points. Test points are the intersection of test cases and configurations — you need both. Add test cases to the suite first, then assign configurations. If you do it in reverse, the configuration sticks but no test points generate. Re-saving the suite or adding a test case after the fact will trigger point generation.
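
Using the helpers from the sprint script, the safe order looks like this (the plan ID 42, suite ID 101, and the test case and configuration IDs are illustrative):

// Populate the suite first, then assign configurations
addTestCasesToSuite(42, 101, [1001, 1002, 1003])
    .then(function() {
        return assignConfigsToSuite(42, 101, [1, 2, 3]);
    })
    .then(function() {
        console.log("Test points generated for every case/config pair");
    });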

3. Clone Operation Hangs or Fails Silently

The clone API returns an operation ID immediately. If you do not poll for completion, you will not know it failed. Common failure reasons: the source plan has suites linked to work item queries that reference fields or area paths not available in the target context. Always poll the operation status:

function pollCloneStatus(projectName, operationId) {
    var url = orgUrl + "/" + projectName +
        "/_apis/testplan/Plans/CloneOperation/" + operationId +
        "?api-version=7.1";

    return fetch(url, { headers: getHeaders() })
        .then(function(res) { return res.json(); })
        .then(function(data) {
            console.log("Clone status:", data.state, "(" + data.completionPercentage + "%)");
            if (data.state === "succeeded") {
                return data;
            }
            if (data.state === "failed") {
                throw new Error("Clone failed: " + JSON.stringify(data));
            }
            // Still running, poll again after delay
            return new Promise(function(resolve) {
                setTimeout(function() {
                    resolve(pollCloneStatus(projectName, operationId));
                }, 3000);
            });
        });
}

4. PAT Permissions Insufficient for Test Plan Operations

The azure-devops-node-api documentation suggests "Work Items (Read & Write)" is enough. It is not. For test plan operations you specifically need:

  • Test Management — Read, Write, and Execute
  • Work Items — Read and Write (for test case work items)
  • Project and Team — Read (for iteration path lookups)

If you get 403 errors, check your PAT scopes first. This catches people every time.
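
Because node-fetch does not reject on HTTP error statuses, the scripts above will happily parse an error payload as JSON. A small wrapper, sketched here, makes permission failures loud:

// Wrap fetch so 401/403 responses fail fast with a useful message
function apiFetch(url, options) {
    return fetch(url, Object.assign({ headers: getHeaders() }, options))
        .then(function(res) {
            if (res.status === 401 || res.status === 403) {
                throw new Error("PAT rejected (" + res.status + ") for " + url +
                    " - check Test Management and Work Items scopes");
            }
            if (!res.ok) {
                return res.text().then(function(text) {
                    throw new Error("HTTP " + res.status + ": " + text);
                });
            }
            return res.json();
        });
}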

5. Configuration Variable Values Not Matching

Azure DevOps configuration variables are case-sensitive for values. "Chrome" and "chrome" are different values. If your template says "Chrome" but the existing configuration variable has "Google Chrome", the API creates a new variable value instead of reusing the existing one. Standardize your naming conventions early and enforce them in your templates.
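
One way to catch mismatches early is to compare template values against the variable catalog before creating configurations. A sketch, assuming the testplan variables endpoint used earlier and that the catalog returns values as plain strings:

// Warn when a template value differs from the catalog only by case
function checkVariableValue(projectName, variableName, candidate) {
    var url = orgUrl + "/" + projectName + "/_apis/testplan/variables?api-version=7.1";

    return fetch(url, { headers: getHeaders() })
        .then(function(res) { return res.json(); })
        .then(function(data) {
            var variable = data.value.find(function(v) {
                return v.name === variableName;
            });
            if (!variable) {
                return;
            }
            variable.values.forEach(function(existing) {
                if (existing !== candidate &&
                    existing.toLowerCase() === candidate.toLowerCase()) {
                    console.warn("Case mismatch: template has '" + candidate +
                        "' but the catalog has '" + existing + "'");
                }
            });
        });
}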

6. Rate Limiting on Bulk Operations

When creating plans with many suites and configurations, you can hit Azure DevOps request throttling. The limit is based on resource consumption (roughly 200 throughput units per user within a sliding five-minute window) rather than a simple requests-per-minute cap, so bursts of write operations are what trigger it. The sequential reduce pattern in the main script avoids parallel floods, but if you are creating plans for multiple projects simultaneously, add a delay between API calls:

function delay(ms) {
    return new Promise(function(resolve) {
        setTimeout(resolve, ms);
    });
}

// Insert between API calls
return someApiCall()
    .then(function(result) { return delay(500).then(function() { return result; }); })
    .then(function(result) { return nextApiCall(result); });

Best Practices

  • Version your templates in source control. Store your JSON test plan templates alongside your application code. When the feature set changes, the template changes in the same pull request. This is the single biggest improvement you can make to test plan management.

  • Use risk-based configuration matrices. Not every test case needs every configuration. Assign full matrices to critical paths and minimal configurations to low-risk features. This keeps your test point count manageable and your testers focused.

  • Automate plan creation as a pipeline step. Trigger test plan creation from your CI/CD pipeline at the start of each sprint. A pipeline task that runs node create-sprint-plan.js ensures consistency without relying on someone remembering to create the plan manually.

  • Keep configurations coarse-grained. Resist the urge to create configurations for every possible combination. "Windows 11 - Chrome - 1920x1080 - English - Eastern Time" is too specific. Encode only the variables that genuinely affect test behavior: OS, browser, and environment are usually enough.

  • Separate regression plans from feature plans. Regression plans are templated and repeated every sprint. Feature-specific test plans are one-time creations tied to a user story or epic. Mixing them makes both harder to manage. Two plan types, two templates, two workflows.

  • Establish naming conventions for configurations. Use a consistent pattern like {OS}-{Browser} or {Environment}-{OS}-{Browser}. When your team has 30 configurations, naming consistency is the difference between finding what you need and recreating what already exists.

  • Clean up inactive plans regularly. Test plans from six months ago clutter the UI and confuse new team members. Archive or delete plans from completed sprints. Keep the last two or three for reference and remove the rest. The data in test runs and results persists independently of the plan.

  • Document your configuration variable catalog. Maintain a wiki page or markdown file listing every configuration variable, its allowed values, and which teams use it. This prevents variable sprawl where "OS", "Operating System", and "OperatingSystem" all exist as separate variables meaning the same thing.

  • Test your templates before sprint day. Run the creation script against a scratch project or a dedicated "Template Test" plan. Validate that suites, test cases, and configurations all resolve correctly (a minimal validation sketch follows below). Finding out your template is broken on day one of the sprint is a bad way to start.
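
Here is a minimal sanity check matched to the template shape defined above; it is plain JavaScript with no API calls, so it is safe to run anywhere:

// Validate a template object before creating anything in Azure DevOps
function validateTemplate(template) {
    var errors = [];

    if (!template.name) {
        errors.push("Template is missing a name");
    }

    (template.suites || []).forEach(function(suite) {
        var configs = (template.configurations || {})[suite.risk || "low"];
        if (!suite.name) {
            errors.push("Suite without a name");
        }
        if (!configs) {
            errors.push("Suite '" + suite.name + "' references unknown risk level '" +
                suite.risk + "'");
        }
        if (suite.type === "staticTestSuite" &&
            (!suite.testCaseIds || suite.testCaseIds.length === 0)) {
            errors.push("Static suite '" + suite.name + "' has no test case IDs");
        }
    });

    if (errors.length > 0) {
        throw new Error("Template validation failed:\n" + errors.join("\n"));
    }
    return template;
}

// Run it against the sprint template before creating the plan
validateTemplate(sprintTemplate);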
