Test Case Management in Azure Test Plans
Manage test cases in Azure Test Plans with test suites, shared steps, parameterized tests, and REST API automation
Azure Test Plans provides a structured environment for managing manual and exploratory testing across your software projects. It integrates directly with Azure Boards work items, pipelines, and repositories, giving teams a single place to define what needs to be tested, track results, and measure coverage. In this article, we will walk through every aspect of test case management — from creating your first test plan to automating bulk operations with the REST API.
Prerequisites
Before you begin, make sure you have the following in place:
- An Azure DevOps organization and project (any process template works)
- Basic + Test Plans access level assigned to your user, or a Visual Studio Enterprise subscription
- Node.js v14 or later installed locally (for the REST API examples)
- A Personal Access Token (PAT) with Test Management (Read & Write) and Work Items (Read & Write) scopes
- Familiarity with Azure Boards work items (user stories, bugs, tasks)
Note that the Test Plans feature requires a paid access level. The Basic access level alone does not include the ability to author or manage test cases through the Test Plans hub. If you are evaluating the feature, Azure DevOps offers a free trial of the Test Plans extension.
Azure Test Plans Overview
Azure Test Plans sits under the Test Plans hub in Azure DevOps. It is the dedicated surface for planning, executing, and tracking manual testing activities. The hierarchy is straightforward:
- Test Plan — A container that maps to a milestone, sprint, or release. It groups related test suites together.
- Test Suite — A folder-like grouping within a plan. Suites can be static (manually curated), requirement-based (auto-populated from test cases linked to a requirement work item), or query-based (dynamically populated from a work item query).
- Test Case — A work item of type "Test Case" that defines a sequence of steps and expected results. Test cases live inside suites.
This three-level hierarchy gives you enough structure to organize thousands of test cases without becoming overly complex. A typical project might have one test plan per sprint, with suites broken out by feature area.
Creating Test Plans and Test Suites
To create a test plan, navigate to Test Plans > + New Test Plan. Give it a name that ties to a release or sprint — something like "Sprint 42 Regression" or "v2.5 Release Validation." You can set the area path and iteration to scope the plan to a specific team and sprint.
Every test plan comes with a default root suite. From there you can add child suites:
- Static suites are simple folders. You drag and drop test cases into them manually. Use these when you want full control over which cases belong to a suite.
- Requirement-based suites are linked to a specific user story, feature, or product backlog item. When you create one, you select a work item and the suite automatically includes any test cases linked to it. This is the best way to track test coverage against requirements.
- Query-based suites run a work item query and include all matching test cases. These are useful for creating suites like "All P1 Test Cases" or "All Login Tests" that update dynamically.
A practical structure might look like this:
Sprint 42 Regression (Test Plan)
├── Authentication (Static Suite)
│ ├── TC-101: Login with valid credentials
│ ├── TC-102: Login with invalid password
│ └── TC-103: Password reset flow
├── User Story 4521: Shopping Cart (Requirement-Based Suite)
│ ├── TC-201: Add item to cart
│ ├── TC-202: Remove item from cart
│ └── TC-203: Update quantity
└── All Critical Tests (Query-Based Suite)
└── [dynamically populated]
Writing Effective Test Cases
A test case is a work item with a structured list of steps. Each step has an Action and an Expected Result. The key to writing good test cases is specificity without rigidity. You want someone unfamiliar with the feature to be able to follow the steps and know exactly what "pass" looks like.
Here is an example of a well-written test case:
Test Case: Login with valid credentials
| Step | Action | Expected Result |
|---|---|---|
| 1 | Navigate to https://app.example.com/login | Login page loads with email and password fields visible |
| 2 | Enter "testuser@example.com" in the email field | Email field accepts input |
| 3 | Enter "SecurePass123!" in the password field | Password field shows masked characters |
| 4 | Click the "Sign In" button | User is redirected to the dashboard. Welcome message displays "Hello, Test User" |
Some guidelines for writing test cases:
- Start each action with a verb: "Navigate to," "Click," "Enter," "Verify"
- Make expected results observable and measurable — avoid vague statements like "it works"
- Keep each step atomic. If a step involves multiple actions, split it
- Include test data directly in the steps or reference a parameter table
- Add preconditions in the test case summary field (e.g., "User must have an active account")
Test Case Fields and Metadata
Beyond the steps, test cases have several fields that help with organization and filtering:
- Priority — P1 through P4. Use this to drive which tests run in smoke vs. regression suites.
- State — Design, Ready, Closed. Move a test case to "Ready" once it has been reviewed and approved.
- Assigned To — The person responsible for executing the test.
- Automation Status — Not Automated, Planned, Automated. Track your automation progress.
- Area Path — Aligns the test case with a feature area or team.
- Iteration Path — Ties the test case to a sprint.
- Tags — Free-form labels like "smoke," "regression," "P1," "api," "ui."
Use the Automation Status field religiously. It gives you a clear picture of how much of your test suite is still manual and where to invest in automation next. Query-based suites filtered on Automation Status = Not Automated make great backlogs for your automation sprint.
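If you want that automation backlog outside of Azure DevOps, for example in a script that feeds sprint planning, the same list is available through a WIQL query. Here is a minimal sketch, assuming the makeRequest helper defined in the REST API section later in this article:
// Sketch: list manual test cases via WIQL, using the makeRequest helper
// from the REST API section below. The WIQL endpoint and the
// Microsoft.VSTS.TCM.AutomationStatus field are standard, but adjust the
// query to match your process template if needed.
function listNotAutomatedTestCases() {
  var wiql = {
    query: "SELECT [System.Id], [System.Title] " +
           "FROM WorkItems " +
           "WHERE [System.WorkItemType] = 'Test Case' " +
           "AND [Microsoft.VSTS.TCM.AutomationStatus] = 'Not Automated' " +
           "ORDER BY [Microsoft.VSTS.Common.Priority]"
  };
  return makeRequest("POST", "/_apis/wit/wiql", wiql);
}

listNotAutomatedTestCases().then(function (result) {
  console.log(result.workItems.length + " test cases awaiting automation");
});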
Shared Steps for Reusable Procedures
Many test cases share common sequences — logging in, navigating to a specific page, setting up test data. Instead of duplicating these steps across dozens of test cases, use Shared Steps.
A Shared Step is its own work item that contains a sequence of steps. You insert it into any test case as a single reference. When you update the shared step, every test case that references it gets the update automatically.
For example, create a shared step called "Login as Admin User":
| Step | Action | Expected Result |
|---|---|---|
| 1 | Navigate to /login | Login page loads |
| 2 | Enter admin@example.com / AdminPass123! | Credentials entered |
| 3 | Click Sign In | Dashboard loads with admin menu visible |
Now any test case that requires admin access can insert this shared step instead of repeating those three steps. This reduces maintenance overhead significantly. When the login flow changes, you update one shared step instead of fifty test cases.
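If you author test cases through the REST API (covered later in this article), a shared step is referenced from the test case's steps XML rather than copied into it. The sketch below shows the shape this reference appears to take: a compref element pointing at the Shared Steps work item ID. Treat the exact schema as an assumption and confirm it by reading back a UI-authored test case through the API.
// Hedged sketch: steps XML for a test case that references a shared step.
// The <compref> element and its attributes are an assumption about the
// steps schema; verify against a UI-created test case before relying on it.
function buildStepsWithSharedStep(sharedStepId) {
  return '<steps id="0" last="2">' +
         '<compref id="1" ref="' + sharedStepId + '" />' +
         '<step id="2" type="ValidateStep">' +
         "<parameterizedString>Open the admin settings page</parameterizedString>" +
         "<parameterizedString>Settings page loads</parameterizedString>" +
         "</step>" +
         "</steps>";
}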
Parameterized Test Cases
Parameterized test cases let you run the same steps with different data sets. This is essential for data-driven testing scenarios like input validation, boundary testing, and multi-configuration testing.
In Azure Test Plans, you add parameters using the @parameterName syntax in your test steps. Then you define a parameter table with rows of values. Each row becomes a separate iteration when you run the test.
Example: Validate search results
| Step | Action | Expected Result |
|---|---|---|
| 1 | Navigate to the search page | Search page loads |
| 2 | Enter @searchTerm in the search box | Search term appears in the input |
| 3 | Click Search | Results page shows @expectedCount results |
| 4 | Verify the first result contains @expectedTitle | Title matches |
Parameter Table:
| @searchTerm | @expectedCount | @expectedTitle |
|---|---|---|
| Node.js API | 15 | Building REST APIs with Node.js |
| Azure DevOps | 8 | Getting Started with Azure DevOps |
| invalid-xyz-query | 0 | (no results message displayed) |
When you execute this test case, the test runner prompts you to run through the steps three times — once for each row. Each iteration is tracked independently, so you can pass two and fail one.
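When you create parameterized test cases through the REST API, the parameter declarations and the data table live in two dedicated work item fields. The XML shapes below are assumptions; as the troubleshooting section later advises, create one parameterized case in the UI and read these fields back through the API to confirm the exact format.
// Hedged sketch: the two fields that carry parameter data on a test case
// work item. Both XML layouts are assumptions; inspect a UI-authored
// parameterized test case to confirm them before relying on this.
var parameterFields = [
  {
    op: "add",
    path: "/fields/Microsoft.VSTS.TCM.Parameters",
    value: "<parameters>" +
           '<param name="searchTerm" bind="default" />' +
           '<param name="expectedCount" bind="default" />' +
           "</parameters>"
  },
  {
    op: "add",
    path: "/fields/Microsoft.VSTS.TCM.LocalDataSource",
    value: "<NewDataSet>" +
           "<Table1><searchTerm>Node.js API</searchTerm>" +
           "<expectedCount>15</expectedCount></Table1>" +
           "<Table1><searchTerm>Azure DevOps</searchTerm>" +
           "<expectedCount>8</expectedCount></Table1>" +
           "</NewDataSet>"
  }
];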
Organizing with Requirement-Based and Query-Based Suites
Requirement-based suites are the backbone of traceability. When a product owner asks "Is this user story tested?" you can answer immediately because the suite shows all test cases linked to that story and their pass/fail status.
To create a requirement-based suite:
- Right-click a suite in your test plan and select New requirement-based suite
- Run a query to find the user stories or features you want to track
- Select the work items and add them
- Each selected work item becomes its own suite node
Any test case linked to the requirement (via the "Tests" link type) automatically appears in the suite. When you add a new test case to the suite, Azure DevOps creates the link for you.
Query-based suites are useful for cross-cutting views. Common query-based suites include:
- All test cases tagged "smoke" — for quick validation after deployments
- All test cases with Priority = 1 — for critical path testing
- All test cases in area path "Payments" — for domain-specific regression
- All test cases with Automation Status = Not Automated — for automation backlog
These suites refresh automatically. When someone creates a new P1 test case, it shows up in the "All P1 Tests" suite without any manual curation.
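Query-based suites can also be created programmatically. The sketch below follows the same pattern as the static and requirement suite helpers in the complete example later in this article; the dynamicTestSuite type name and the queryString parameter are assumptions drawn from the test plan API's suite-creation model.
// Sketch: create a query-based (dynamic) suite via the testplan API.
// Assumes the makeRequest helper from the REST API section below.
function createQuerySuite(planId, parentSuiteId, name, wiql) {
  var body = {
    suiteType: "dynamicTestSuite",
    name: name,
    parentSuite: { id: parentSuiteId },
    queryString: wiql
  };
  return makeRequest("POST",
    "/_apis/testplan/Plans/" + planId + "/suites", body);
}

// Example plan and parent suite IDs
createQuerySuite(142, 1, "All P1 Tests",
  "SELECT [System.Id] FROM WorkItems " +
  "WHERE [System.WorkItemType] = 'Test Case' " +
  "AND [Microsoft.VSTS.Common.Priority] = 1");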
Assigning Testers and Configurations
Each test case in a suite can be assigned to a tester. In the Test Plans grid view, use the Assign testers option to bulk-assign cases to team members. You can also assign by right-clicking individual cases.
Configurations add another dimension. A configuration represents a specific environment or device — "Windows 11 / Chrome," "macOS / Safari," "Android 14 / Mobile." When you assign configurations to a suite, each test case generates a test point for every configuration. A suite with 10 test cases and 3 configurations produces 30 test points.
This is useful for cross-browser and cross-platform testing. Each tester can filter their test points by configuration and run only the cases assigned to their platform.
To manage configurations:
- Go to Test Plans > Configurations in the left navigation
- Create configuration variables (e.g., Browser = Chrome, Firefox, Safari)
- Create configurations that combine variables (e.g., "Windows + Chrome," "Mac + Safari")
- Assign configurations to suites in your test plan
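Configurations can be created programmatically as well. A hedged sketch, again assuming the makeRequest helper from the REST API section below; the request shape (a name plus variable name/value pairs) is an assumption based on the test plan configurations endpoint, so verify it against the REST reference for your API version.
// Hedged sketch: create a test configuration from variable name/value
// pairs. The body shape is an assumption; check the testplan
// configurations REST reference before relying on it.
function createConfiguration(name, variables) {
  var body = {
    name: name,
    values: variables // e.g. [{ name: "Browser", value: "Chrome" }]
  };
  return makeRequest("POST", "/_apis/testplan/configurations", body);
}

createConfiguration("Windows 11 + Chrome", [
  { name: "Operating System", value: "Windows 11" },
  { name: "Browser", value: "Chrome" }
]);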
Running Manual Tests
When it is time to execute, select one or more test cases in a suite and click Run. The test runner opens — either the web-based runner or the desktop Test Runner application.
The runner walks you through each step. For each step you mark it as Pass, Fail, or Not Applicable. If a step fails, you can:
- Create a bug directly from the runner (it auto-attaches screenshots and system info)
- Add comments explaining the failure
- Capture screenshots or screen recordings
- File the bug and link it to the test case in a single action
The overall test case outcome is determined by the individual step results. If any step fails, the test case is marked as failed.
For exploratory testing, use the Exploratory Testing mode. This lets testers explore the application freely without predefined steps, capturing bugs and observations along the way. Exploratory sessions can be linked back to requirements for traceability.
Tracking Test Results
The Charts and Progress Report tabs in Test Plans give you visibility into test execution status. The default charts show:
- Outcome by suite — How many tests passed, failed, or are not yet run per suite
- Outcome by tester — Workload distribution and completion status per tester
- Outcome by priority — Whether your critical tests are green
- Outcome trend — Pass rate over time across test runs
You can create custom charts by pivoting on any test case field. For release readiness, I typically create a chart that groups by Area Path and shows pass/fail/not-run percentages. If any feature area is below 90% pass rate, we investigate before shipping.
Test runs are also visible in the Runs tab. Each run captures the timestamp, tester, configuration, and detailed step-level results. This historical data is invaluable for regression analysis — you can see exactly when a test started failing and correlate it with code changes.
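That run history is also available programmatically. A minimal sketch using the makeRequest helper from the next section; the passedTests and totalTests fields are assumptions about the run response shape, so log a raw run object first if they come back undefined.
// Sketch: list the ten most recent test runs for trend analysis.
// passedTests/totalTests are assumed response fields; verify them
// against an actual response before building reports on top.
function listRecentRuns() {
  return makeRequest("GET", "/_apis/test/runs?$top=10");
}

listRecentRuns().then(function (result) {
  result.value.forEach(function (run) {
    console.log("Run #" + run.id + " (" + run.name + "): " +
      run.passedTests + "/" + run.totalTests + " passed");
  });
});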
Linking Test Cases to Work Items
Test cases integrate with the broader Azure Boards ecosystem through work item links:
- Tests / Tested By — Links a test case to the user story or requirement it validates. This is the link type used by requirement-based suites.
- References / Referenced By — Links a test case to related work items like design documents or specifications.
- Bug links — When a test fails and you file a bug from the runner, the bug is automatically linked to the test case.
These links create a traceability chain: Requirement → Test Case → Bug → Code Change. During audits or compliance reviews, this chain proves that requirements were validated and defects were tracked to resolution.
To add links manually, open a test case work item and use the Links tab. Select the link type and search for the target work item by ID or title.
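The same link can be created through the work item relations API. In the sketch below, the test case's "Tests" link is assumed to map to the Microsoft.VSTS.Common.TestedBy-Reverse relation type; if the link does not appear in the suite, verify the rel name against your process template.
// Sketch: add a "Tests" link from a test case to a requirement. The rel
// name is an assumption about the link type's reference name.
function linkTestCaseToRequirement(testCaseId, requirementId) {
  var patch = [{
    op: "add",
    path: "/relations/-",
    value: {
      rel: "Microsoft.VSTS.Common.TestedBy-Reverse",
      url: "https://dev.azure.com/" + ORG + "/" + PROJECT +
           "/_apis/wit/workItems/" + requirementId
    }
  }];
  return makeRequest("PATCH", "/_apis/wit/workitems/" + testCaseId, patch);
}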
Test Case Versioning
Test cases are work items, so they inherit the standard work item revision history. Every change to a test case — steps modified, fields updated, assignments changed — creates a new revision with a timestamp and author.
However, test results are tied to a specific version of the test case. If you modify test steps after a test run, the historical results still reflect the steps as they were at the time of execution. This is important for audit trails.
When you reset test results for a suite (useful at the start of a new sprint), the old results are preserved in the run history. The test points revert to "Not Run" but you can always go back and see previous outcomes.
For major test case revisions, consider using the State field. Set the test case to "Design" while you rework it, then move it back to "Ready" once the updates are reviewed. This signals to testers that the case is in flux and should not be executed.
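Because test cases are ordinary work items, the standard revisions endpoint exposes that history, which is useful for audits. A minimal sketch using the makeRequest helper from the next section:
// Sketch: print the revision history of a test case work item.
function getTestCaseRevisions(testCaseId) {
  return makeRequest("GET",
    "/_apis/wit/workitems/" + testCaseId + "/revisions");
}

getTestCaseRevisions(502).then(function (result) {
  result.value.forEach(function (rev) {
    console.log("Revision " + rev.rev + " changed on " +
      rev.fields["System.ChangedDate"]);
  });
});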
Bulk Operations with the REST API Using Node.js
The Azure DevOps REST API gives you full control over test plans, suites, and test cases programmatically. This is essential for migration, bulk updates, and reporting. Here is how to perform common operations with Node.js.
Setting Up the API Client
var https = require("https");
var ORG = "your-organization";
var PROJECT = "your-project";
var PAT = process.env.AZURE_DEVOPS_PAT;
var API_VERSION = "7.1";
var BASE_URL = "https://dev.azure.com/" + ORG + "/" + PROJECT;
function makeRequest(method, path, body) {
return new Promise(function (resolve, reject) {
var url = new URL(path.indexOf("https://") === 0 ? path : BASE_URL + path);
url.searchParams.set("api-version", API_VERSION);
var auth = Buffer.from(":" + PAT).toString("base64");
var options = {
hostname: url.hostname,
path: url.pathname + url.search,
method: method,
headers: {
"Authorization": "Basic " + auth,
// Work item endpoints take a JSON Patch array and require the
// json-patch media type; everything else is plain JSON
"Content-Type": body instanceof Array
? "application/json-patch+json"
: "application/json"
}
};
var req = https.request(options, function (res) {
var data = "";
res.on("data", function (chunk) { data += chunk; });
res.on("end", function () {
if (res.statusCode >= 200 && res.statusCode < 300) {
resolve(data ? JSON.parse(data) : null);
} else {
reject(new Error("HTTP " + res.statusCode + ": " + data));
}
});
});
req.on("error", reject);
if (body) { req.write(JSON.stringify(body)); }
req.end();
});
}
Listing Test Plans
function listTestPlans() {
return makeRequest("GET", "/_apis/testplan/plans");
}
listTestPlans().then(function (result) {
result.value.forEach(function (plan) {
console.log("Plan #" + plan.id + ": " + plan.name);
});
});
Creating a Test Case Work Item
Test cases are work items of type "Test Case." You create them through the Work Items API with a JSON Patch document, then add them to suites via the Test Plans API. The minimal version below skips XML escaping of step text; the complete example later in this article adds it.
function createTestCase(title, steps, priority) {
var stepsXml = '<steps id="0" last="' + steps.length + '">';
steps.forEach(function (step, index) {
stepsXml += '<step id="' + (index + 1) + '" type="ValidateStep">';
stepsXml += "<parameterizedString>" + step.action + "</parameterizedString>";
stepsXml += "<parameterizedString>" + step.expected + "</parameterizedString>";
stepsXml += "</step>";
});
stepsXml += "</steps>";
var fields = [
{ op: "add", path: "/fields/System.Title", value: title },
{ op: "add", path: "/fields/Microsoft.VSTS.TCM.Steps", value: stepsXml },
{ op: "add", path: "/fields/Microsoft.VSTS.Common.Priority", value: priority || 2 }
];
return makeRequest("POST", "/_apis/wit/workitems/$Test Case", fields);
}
Adding Test Cases to a Suite
function addTestCasesToSuite(planId, suiteId, testCaseIds) {
var ids = testCaseIds.join(",");
var path = "/_apis/testplan/Plans/" + planId +
"/Suites/" + suiteId +
"/TestCase?testCaseIds=" + ids;
return makeRequest("POST", path);
}
Complete Working Example
The following Node.js script creates a test plan, adds suites, populates test cases with shared steps, and generates a coverage report. Save it as test-plan-manager.js and run it with your PAT set as an environment variable.
var https = require("https");
// Configuration
var ORG = process.env.AZURE_ORG || "your-organization";
var PROJECT = process.env.AZURE_PROJECT || "your-project";
var PAT = process.env.AZURE_DEVOPS_PAT;
var API_VERSION = "7.1";
if (!PAT) {
console.error("Error: Set AZURE_DEVOPS_PAT environment variable");
process.exit(1);
}
var BASE_URL = "https://dev.azure.com/" + ORG + "/" + PROJECT;
// ------------------------------------------------------------------
// HTTP helper
// ------------------------------------------------------------------
function makeRequest(method, path, body) {
return new Promise(function (resolve, reject) {
var fullPath = path.indexOf("https://") === 0 ? path : BASE_URL + path;
var url = new URL(fullPath);
if (url.searchParams.has("api-version") === false) {
url.searchParams.set("api-version", API_VERSION);
}
var auth = Buffer.from(":" + PAT).toString("base64");
var payload = body ? JSON.stringify(body) : null;
var options = {
hostname: url.hostname,
path: url.pathname + url.search,
method: method,
headers: {
"Authorization": "Basic " + auth,
"Content-Type": body instanceof Array
? "application/json-patch+json"
: "application/json"
}
};
var req = https.request(options, function (res) {
var data = "";
res.on("data", function (chunk) { data += chunk; });
res.on("end", function () {
if (res.statusCode >= 200 && res.statusCode < 300) {
resolve(data ? JSON.parse(data) : null);
} else {
reject(new Error(method + " " + path + " => HTTP " +
res.statusCode + ": " + data));
}
});
});
req.on("error", reject);
if (payload) { req.write(payload); }
req.end();
});
}
// ------------------------------------------------------------------
// Test Plan operations
// ------------------------------------------------------------------
function createTestPlan(name, areaPath, iteration) {
var body = {
name: name,
areaPath: areaPath,
iteration: iteration
};
return makeRequest("POST", "/_apis/testplan/plans", body);
}
function createStaticSuite(planId, parentSuiteId, name) {
var body = {
suiteType: "staticTestSuite",
name: name,
parentSuite: { id: parentSuiteId }
};
return makeRequest("POST",
"/_apis/testplan/Plans/" + planId + "/suites", body);
}
function createRequirementSuite(planId, parentSuiteId, requirementId) {
var body = {
suiteType: "requirementTestSuite",
requirementId: requirementId,
parentSuite: { id: parentSuiteId }
};
return makeRequest("POST",
"/_apis/testplan/Plans/" + planId + "/suites", body);
}
// ------------------------------------------------------------------
// Test Case operations
// ------------------------------------------------------------------
function buildStepsXml(steps) {
var xml = '<steps id="0" last="' + steps.length + '">';
steps.forEach(function (step, i) {
xml += '<step id="' + (i + 1) + '" type="ValidateStep">';
xml += "<parameterizedString>" +
escapeXml(step.action) + "</parameterizedString>";
xml += "<parameterizedString>" +
escapeXml(step.expected) + "</parameterizedString>";
xml += "</step>";
});
xml += "</steps>";
return xml;
}
function escapeXml(str) {
return str
.replace(/&/g, "&amp;")
.replace(/</g, "&lt;")
.replace(/>/g, "&gt;")
.replace(/"/g, "&quot;");
}
function createTestCase(title, steps, priority, tags) {
var fields = [
{ op: "add", path: "/fields/System.Title", value: title },
{
op: "add",
path: "/fields/Microsoft.VSTS.TCM.Steps",
value: buildStepsXml(steps)
},
{
op: "add",
path: "/fields/Microsoft.VSTS.Common.Priority",
value: priority || 2
}
];
if (tags) {
fields.push({
op: "add",
path: "/fields/System.Tags",
value: tags.join("; ")
});
}
return makeRequest("POST", "/_apis/wit/workitems/$Test Case", fields);
}
function createSharedSteps(title, steps) {
var fields = [
{ op: "add", path: "/fields/System.Title", value: title },
{
op: "add",
path: "/fields/Microsoft.VSTS.TCM.Steps",
value: buildStepsXml(steps)
}
];
return makeRequest("POST",
"/_apis/wit/workitems/$Shared Steps", fields);
}
function addTestCaseToSuite(planId, suiteId, testCaseId) {
return makeRequest("POST",
"/_apis/testplan/Plans/" + planId +
"/Suites/" + suiteId +
"/TestCase?testCaseIds=" + testCaseId);
}
// ------------------------------------------------------------------
// Coverage report
// ------------------------------------------------------------------
function getTestPoints(planId, suiteId) {
return makeRequest("GET",
"/_apis/testplan/Plans/" + planId +
"/Suites/" + suiteId + "/TestPoint");
}
function generateCoverageReport(planId, suites) {
var report = {
planId: planId,
suites: [],
summary: { total: 0, passed: 0, failed: 0, notRun: 0, blocked: 0 }
};
var promises = suites.map(function (suite) {
return getTestPoints(planId, suite.id).then(function (result) {
var points = result.value || [];
var suiteReport = {
suiteId: suite.id,
suiteName: suite.name,
total: points.length,
passed: 0,
failed: 0,
notRun: 0,
blocked: 0
};
points.forEach(function (point) {
var outcome = (point.results && point.results.lastResultDetails)
? point.results.lastResultDetails.outcome
: "notRun";
switch (outcome) {
case "passed":
suiteReport.passed++;
report.summary.passed++;
break;
case "failed":
suiteReport.failed++;
report.summary.failed++;
break;
case "blocked":
suiteReport.blocked++;
report.summary.blocked++;
break;
default:
suiteReport.notRun++;
report.summary.notRun++;
}
report.summary.total++;
});
report.suites.push(suiteReport);
});
});
return Promise.all(promises).then(function () {
return report;
});
}
function printReport(report) {
console.log("\n========================================");
console.log(" TEST COVERAGE REPORT — Plan #" + report.planId);
console.log("========================================\n");
report.suites.forEach(function (suite) {
var passRate = suite.total > 0
? ((suite.passed / suite.total) * 100).toFixed(1)
: "0.0";
console.log("Suite: " + suite.suiteName + " (#" + suite.suiteId + ")");
console.log(" Total: " + suite.total +
" | Passed: " + suite.passed +
" | Failed: " + suite.failed +
" | Not Run: " + suite.notRun +
" | Blocked: " + suite.blocked);
console.log(" Pass Rate: " + passRate + "%\n");
});
var totalRate = report.summary.total > 0
? ((report.summary.passed / report.summary.total) * 100).toFixed(1)
: "0.0";
console.log("----------------------------------------");
console.log("OVERALL: " + report.summary.total + " test points");
console.log(" Passed: " + report.summary.passed +
" | Failed: " + report.summary.failed +
" | Not Run: " + report.summary.notRun +
" | Blocked: " + report.summary.blocked);
console.log(" Overall Pass Rate: " + totalRate + "%");
console.log("========================================\n");
}
// ------------------------------------------------------------------
// Main workflow
// ------------------------------------------------------------------
function main() {
var planId;
var rootSuiteId;
var authSuiteId;
var cartSuiteId;
var sharedStepId;
var createdSuites = [];
console.log("Creating test plan...");
createTestPlan("v3.0 Release Validation", PROJECT, PROJECT)
.then(function (plan) {
planId = plan.id;
rootSuiteId = plan.rootSuite.id;
console.log("Created plan #" + planId);
// Create two static suites
return createStaticSuite(planId, rootSuiteId, "Authentication Tests");
})
.then(function (suite) {
authSuiteId = suite.id;
createdSuites.push({ id: suite.id, name: "Authentication Tests" });
console.log("Created suite: Authentication Tests (#" + suite.id + ")");
return createStaticSuite(planId, rootSuiteId, "Shopping Cart Tests");
})
.then(function (suite) {
cartSuiteId = suite.id;
createdSuites.push({ id: suite.id, name: "Shopping Cart Tests" });
console.log("Created suite: Shopping Cart Tests (#" + suite.id + ")");
// Create a shared step for login
console.log("Creating shared steps...");
return createSharedSteps("Login as Standard User", [
{
action: "Navigate to /login",
expected: "Login page loads with email and password fields"
},
{
action: "Enter user@example.com and Password123!",
expected: "Credentials are entered"
},
{
action: "Click Sign In",
expected: "User is redirected to the home page"
}
]);
})
.then(function (sharedStep) {
sharedStepId = sharedStep.id;
console.log("Created shared steps #" + sharedStepId);
// Create authentication test cases
console.log("Creating test cases...");
return createTestCase(
"Login with valid credentials",
[
{
action: "Navigate to /login",
expected: "Login page loads"
},
{
action: "Enter valid email and password",
expected: "Fields accept input"
},
{
action: "Click Sign In",
expected: "User sees the dashboard with welcome message"
}
],
1,
["smoke", "authentication"]
);
})
.then(function (tc) {
console.log(" Created: " + tc.fields["System.Title"] +
" (#" + tc.id + ")");
return addTestCaseToSuite(planId, authSuiteId, tc.id).then(function () {
return createTestCase(
"Login with invalid password",
[
{
action: "Navigate to /login",
expected: "Login page loads"
},
{
action: "Enter valid email and wrong password",
expected: "Fields accept input"
},
{
action: "Click Sign In",
expected: "Error message: Invalid email or password"
},
{
action: "Verify the user remains on the login page",
expected: "Login form is still visible, no redirect occurred"
}
],
1,
["smoke", "authentication", "negative"]
);
});
})
.then(function (tc) {
console.log(" Created: " + tc.fields["System.Title"] +
" (#" + tc.id + ")");
return addTestCaseToSuite(planId, authSuiteId, tc.id).then(function () {
return createTestCase(
"Account lockout after 5 failed attempts",
[
{
action: "Navigate to /login",
expected: "Login page loads"
},
{
action: "Enter valid email with wrong password 5 times",
expected: "Error message on each attempt"
},
{
action: "Attempt a 6th login",
expected: "Account locked message appears with unlock instructions"
}
],
2,
["authentication", "security"]
);
});
})
.then(function (tc) {
console.log(" Created: " + tc.fields["System.Title"] +
" (#" + tc.id + ")");
return addTestCaseToSuite(planId, authSuiteId, tc.id).then(function () {
// Create shopping cart test cases
return createTestCase(
"Add item to cart",
[
{
action: "Log in as a standard user",
expected: "User is on the home page"
},
{
action: "Navigate to /products and select a product",
expected: "Product detail page loads with price and Add to Cart button"
},
{
action: "Click Add to Cart",
expected: "Cart icon shows 1 item. Toast notification confirms addition"
},
{
action: "Navigate to /cart",
expected: "Cart page shows the product with correct name, quantity 1, and price"
}
],
1,
["smoke", "cart"]
);
});
})
.then(function (tc) {
console.log(" Created: " + tc.fields["System.Title"] +
" (#" + tc.id + ")");
return addTestCaseToSuite(planId, cartSuiteId, tc.id).then(function () {
return createTestCase(
"Remove item from cart",
[
{
action: "Log in and add a product to the cart",
expected: "Cart has 1 item"
},
{
action: "Navigate to /cart",
expected: "Cart page shows the item"
},
{
action: "Click the Remove button next to the item",
expected: "Item is removed. Cart shows empty state message"
}
],
2,
["cart"]
);
});
})
.then(function (tc) {
console.log(" Created: " + tc.fields["System.Title"] +
" (#" + tc.id + ")");
return addTestCaseToSuite(planId, cartSuiteId, tc.id);
})
.then(function () {
// Generate coverage report
console.log("\nGenerating coverage report...");
return generateCoverageReport(planId, createdSuites);
})
.then(function (report) {
printReport(report);
console.log("Done. Test plan #" + planId +
" is ready for execution.");
})
.catch(function (err) {
console.error("Error:", err.message);
process.exit(1);
});
}
main();
Run the script:
export AZURE_DEVOPS_PAT="your-pat-here"
export AZURE_ORG="your-organization"
export AZURE_PROJECT="your-project"
node test-plan-manager.js
The script produces output like this:
Creating test plan...
Created plan #142
Created suite: Authentication Tests (#143)
Created suite: Shopping Cart Tests (#144)
Creating shared steps...
Created shared steps #501
Creating test cases...
Created: Login with valid credentials (#502)
Created: Login with invalid password (#503)
Created: Account lockout after 5 failed attempts (#504)
Created: Add item to cart (#505)
Created: Remove item from cart (#506)
Generating coverage report...
========================================
TEST COVERAGE REPORT — Plan #142
========================================
Suite: Authentication Tests (#143)
Total: 3 | Passed: 0 | Failed: 0 | Not Run: 3 | Blocked: 0
Pass Rate: 0.0%
Suite: Shopping Cart Tests (#144)
Total: 2 | Passed: 0 | Failed: 0 | Not Run: 2 | Blocked: 0
Pass Rate: 0.0%
----------------------------------------
OVERALL: 5 test points
Passed: 0 | Failed: 0 | Not Run: 5 | Blocked: 0
Overall Pass Rate: 0.0%
========================================
Common Issues and Troubleshooting
1. "You do not have the appropriate permissions" when creating test plans
This almost always means your access level is Basic rather than Basic + Test Plans. Test Plans authoring requires the paid access level. Go to Organization Settings > Users and verify the access level. Alternatively, users with Visual Studio Enterprise subscriptions automatically get Test Plans access.
2. Test cases do not appear in requirement-based suites
Requirement-based suites only show test cases linked with the "Tests/Tested By" link type. If you linked the test case with a different link type (e.g., Related), it will not appear. Open the test case, go to the Links tab, remove the incorrect link, and add a new one with type "Tests."
3. Shared steps changes not reflected in test runner
The test runner caches shared steps when a run starts. If you updated shared steps while a test run was in progress, the tester needs to close and reopen the runner to pick up the changes. For in-flight runs, communicate shared step changes to your testers and have them restart.
4. REST API returns 404 for test plan endpoints
The test plan APIs use a different base URL pattern depending on the API version. For API version 7.1, use /_apis/testplan/plans (not /_apis/test/plans). The older _apis/test/ endpoints are for the legacy Test Management API and may not support newer features. Also ensure your PAT has the Test Management scope enabled.
5. Parameterized test data not persisting between sessions
Parameter tables are stored as part of the test case work item in the Microsoft.VSTS.TCM.LocalDataSource field. If you are creating test cases via the API and parameters are not saving, make sure you are populating this field with the correct XML format. The easiest approach is to create a parameterized test case in the UI first, then query the API to see the XML structure, and replicate that in your code.
6. Test points showing "Not Applicable" outcome unexpectedly
This happens when a configuration is removed from a suite after test points were already created. The orphaned test points get marked as Not Applicable. Clean them up by resetting the test points for the affected suite, which will regenerate them based on the current configuration assignments.
Best Practices
Name test plans after releases or sprints, not dates. "v3.0 Regression" is more meaningful than "Feb 2026 Tests" six months from now when you are reviewing historical data.
Use requirement-based suites as your primary organization method. They give you automatic traceability from user stories to test cases. When a stakeholder asks about coverage for a feature, you have an immediate answer.
Keep test cases atomic and independent. Each test case should test one scenario and should not depend on the outcome of another test case. If test B requires the state created by test A, make test B set up its own preconditions.
Invest heavily in shared steps. Every time you copy-paste steps between test cases, you create a maintenance burden. Extract common sequences into shared steps. Login flows, navigation sequences, and data setup are prime candidates.
Tag test cases for flexible suite creation. Tags like "smoke," "regression," "P1," "api," and "ui" let you create query-based suites that slice your test cases in any dimension. This is cheaper than maintaining parallel suite hierarchies.
Set Automation Status on every test case. Even if you have no automated tests today, marking cases as "Not Automated" or "Planned" gives you a clear backlog and helps you measure progress toward your automation goals.
Review and prune test cases each sprint. Test suites grow over time and accumulate obsolete cases. Dedicate time each sprint to close test cases for deprecated features and update steps that reference changed UI or behavior.
Use the REST API for bulk operations. Creating 50 test cases through the UI is painful. A script that reads test case definitions from a spreadsheet or JSON file and creates them via the API saves hours and reduces errors.
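As a sketch of that workflow, the snippet below reuses createTestCase and addTestCaseToSuite from the complete example above. The test-cases.json layout (an array of objects with title, priority, tags, and steps) is hypothetical; adapt it to whatever format your team maintains.
// Sketch: bulk-import test cases from a JSON file, creating them
// sequentially to keep the output ordered and avoid throttling.
var fs = require("fs");

function bulkImport(planId, suiteId, file) {
  var cases = JSON.parse(fs.readFileSync(file, "utf8"));
  return cases.reduce(function (chain, tc) {
    return chain.then(function () {
      return createTestCase(tc.title, tc.steps, tc.priority, tc.tags)
        .then(function (created) {
          console.log("Created #" + created.id + ": " + tc.title);
          return addTestCaseToSuite(planId, suiteId, created.id);
        });
    });
  }, Promise.resolve());
}

bulkImport(142, 143, "test-cases.json"); // example plan/suite IDs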
Link bugs to test cases when filing from the test runner. This closes the feedback loop. When a developer fixes a bug, they can see which test case exposed it and re-run that specific case to verify the fix.
Export coverage reports before release sign-off. Generate a snapshot of test execution results at the moment of release. Store it as a wiki page or artifact. This provides an audit trail that proves the release was tested to your team's standards.