Manual Testing Workflows in Azure DevOps

A comprehensive guide to manual testing workflows in Azure DevOps, covering test case authoring, shared steps, configurations, test runners, exploratory testing, session-based testing, bug filing from test runs, and integrating manual testing into sprint workflows.

Overview

Automated tests catch regressions and enforce known requirements. Manual testing catches everything else -- usability issues, visual glitches, workflow problems, edge cases that nobody thought to automate, and the general question of "does this actually work the way a human expects?" Azure DevOps Test Plans provides a structured framework for manual testing that goes well beyond spreadsheets and ad-hoc clicking. It includes a web-based test runner, exploratory testing with automatic screenshot capture, session-based test management, shared steps for reusable procedures, configuration matrices for cross-browser and cross-device testing, and direct bug filing with full reproduction context attached.

I have seen teams treat manual testing as something they will "get to later" and ship features with zero human verification. I have also seen teams drown in unstructured manual testing where testers click around without a plan and file bugs with no reproduction steps. Azure DevOps Test Plans sits in the middle -- structured enough to be repeatable and trackable, flexible enough for exploratory work. This article covers the full manual testing workflow from test case creation through execution, bug filing, and sprint-level reporting.

Prerequisites

  • An Azure DevOps organization with an active project
  • Azure Test Plans license (Basic + Test Plans, or Visual Studio Enterprise subscription)
  • At least one Test Plan created in your project
  • Access to the Test Plans hub in Azure DevOps
  • The Test & Feedback browser extension installed (for exploratory testing)
  • Familiarity with Azure DevOps work items and boards

Writing Effective Test Cases

A test case in Azure DevOps is a work item type with structured steps. Each step has an Action and an Expected Result. This sounds obvious, but the difference between useful test cases and useless ones comes down to how you write those steps.

Test Case Structure

Every test case should follow a predictable pattern: setup, action, verification. The setup steps establish the precondition. The action steps perform the thing being tested. The verification steps confirm the expected outcome.

Here is what a well-structured test case looks like:

Test Case: Verify user login with valid credentials
Priority: 2
Area Path: MyProject\Authentication

Step 1:
  Action: Navigate to https://app.example.com/login
  Expected: Login page displays with email and password fields

Step 2:
  Action: Enter "[email protected]" in the email field
  Expected: Email field accepts input without validation errors

Step 3:
  Action: Enter "ValidP@ss123" in the password field
  Expected: Password field shows masked characters

Step 4:
  Action: Click the "Sign In" button
  Expected: User is redirected to the dashboard. Welcome message displays "Hello, Test User"

Step 5:
  Action: Verify the navigation bar shows the user avatar and name
  Expected: Avatar icon and "Test User" text appear in the top-right corner

Parameterized Test Cases

Azure DevOps supports parameterized test cases where you define variables in steps and provide different data sets. This is critical for testing the same workflow with different inputs without duplicating test cases.

In your step actions and expected results, use the @parameter syntax:

Step 1:
  Action: Navigate to the login page
  Expected: Login page loads

Step 2:
  Action: Enter @username in the email field
  Expected: Field accepts input

Step 3:
  Action: Enter @password in the password field
  Expected: Field accepts input

Step 4:
  Action: Click Sign In
  Expected: @expectedResult

Then in the Parameters section of the test case work item, define the data table:

@username              @password        @expectedResult
[email protected]    ValidP@ss123     Dashboard displays
[email protected]    ValidP@ss123     Error: Account not found
[email protected]    wrongpassword    Error: Invalid password
(blank)                ValidP@ss123     Error: Email is required
[email protected]    (blank)          Error: Password is required

Each row becomes a separate iteration when the test is executed. The test runner cycles through all parameter combinations, and each iteration gets its own pass/fail result.
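
Under the hood, the data table is stored as XML in the Microsoft.VSTS.TCM.LocalDataSource field of the test case work item. As a rough sketch (element names follow the parameter names; the exact serialization can vary by tooling), the first row above would look something like:

<NewDataSet>
  <Table1>
    <username>[email protected]</username>
    <password>ValidP@ss123</password>
    <expectedResult>Dashboard displays</expectedResult>
  </Table1>
  <!-- one Table1 element per data row -->
</NewDataSet>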

Shared Steps

Shared steps solve the duplication problem. If you have 30 test cases that all start with "log in as an admin user," you create those login steps once as a Shared Steps work item and insert them into each test case.

To create shared steps:

  1. Open any test case that contains the steps you want to share
  2. Select the steps you want to extract
  3. Click "Create shared steps" from the toolbar
  4. Name the shared steps work item (e.g., "Login as Admin User")

Now those steps appear as a single collapsible block in any test case that references them. When the login flow changes, you update the shared steps once and every test case that uses them is automatically current.

I typically create shared steps for these common patterns:

  • Login flows -- "Login as Admin," "Login as Standard User," "Login as Read-Only User"
  • Navigation sequences -- "Navigate to Project Settings," "Open the Build Pipeline"
  • Data setup -- "Create a test work item," "Upload a sample file"
  • Cleanup procedures -- "Delete test data," "Reset user permissions"

Test Case Work Item Fields

Beyond the steps, several fields on the test case work item matter for organization:

  • Priority: 1 (critical) through 4 (nice to have). Use this for filtering which tests to run in time-constrained sprints
  • Automation Status: Not Automated, Planned, Automated. Track your automation backlog
  • Area Path: Maps test cases to feature areas. Essential for area-based test assignment
  • Iteration Path: Links test cases to sprints for planning
  • State: Design, Ready, Closed. Use "Ready" to indicate the test case has been reviewed and is executable
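
When you create test cases through the REST API (as in the working example later in this article), these fields map to standard work item field reference names. A minimal sketch of the JSON-patch document, assuming the out-of-the-box field names:

// POST /_apis/wit/workitems/$Test%20Case  (Content-Type: application/json-patch+json)
var patchDoc = [
  { op: "add", path: "/fields/System.Title", value: "Verify login with valid credentials" },
  { op: "add", path: "/fields/Microsoft.VSTS.Common.Priority", value: 2 },
  { op: "add", path: "/fields/Microsoft.VSTS.TCM.AutomationStatus", value: "Not Automated" },
  { op: "add", path: "/fields/System.AreaPath", value: "MyProject\\Authentication" },
  { op: "add", path: "/fields/System.IterationPath", value: "MyProject\\Sprint 47" }
];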

Organizing Test Suites

Test suites in Azure DevOps come in three types, and using the right type for the right purpose is the difference between a maintainable test plan and chaos.

Static Suites

Static suites are manual collections. You explicitly add test cases to them. Use these for:

  • Release validation suites -- the specific tests that must pass before a release ships
  • Smoke test suites -- a curated set of critical path tests
  • Demo preparation suites -- tests that exercise features being demonstrated

Requirement-Based Suites

These suites automatically populate from user stories or requirements. When you create a requirement-based suite linked to a user story, every test case linked to that story appears in the suite.

This is powerful for traceability. Product owners can see at a glance which stories have test cases, which have been executed, and which passed. When a story moves to "Ready for Testing," the tester opens the requirement-based suite and all relevant test cases are already there.

Query-Based Suites

Query-based suites populate dynamically from work item queries. Create a query like "all test cases where Priority = 1 AND Area Path under Authentication" and the suite stays current as test cases are added or modified.

I use query-based suites for:

  • Regression suites -- all Priority 1 and 2 test cases across the project
  • Area-focused suites -- all test cases for a specific feature area
  • Unexecuted test suites -- test cases that have never been run (useful for identifying coverage gaps)
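
As a sketch, the WIQL behind a query-based suite like "all Priority 1 test cases under Authentication" looks like this (standard field reference names assumed):

SELECT [System.Id]
FROM WorkItems
WHERE [System.WorkItemType] = 'Test Case'
  AND [Microsoft.VSTS.Common.Priority] = 1
  AND [System.AreaPath] UNDER 'MyProject\Authentication'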

Suite Hierarchy

Organize suites in a hierarchy that mirrors your testing strategy:

Test Plan: Sprint 47 Testing
├── Smoke Tests (Static)
├── New Features
│   ├── User Authentication (Requirement-based → User Story 4521)
│   ├── Dashboard Redesign (Requirement-based → User Story 4535)
│   └── Export Feature (Requirement-based → User Story 4540)
├── Regression
│   ├── Critical Path (Query: Priority 1)
│   └── Full Regression (Query: Priority 1-2)
└── Exploratory
    └── Sprint 47 Exploratory Sessions (Static)

Configurations and Cross-Browser Testing

Configurations let you run the same test case across multiple environments without duplicating anything. A configuration is a named set of configuration variables -- browser type, operating system, screen resolution, device type, or any custom dimension you need.

Setting Up Configurations

In the Test Plans settings, define configuration variables:

Configuration Variable: Browser
  Values: Chrome, Firefox, Edge, Safari

Configuration Variable: Operating System
  Values: Windows 11, macOS Sonoma, Ubuntu 22.04

Configuration Variable: Screen Resolution
  Values: 1920x1080, 1366x768, 375x667 (mobile)

Then create named configurations combining these variables:

Configuration: Desktop Chrome
  Browser = Chrome
  Operating System = Windows 11
  Screen Resolution = 1920x1080

Configuration: Desktop Firefox
  Browser = Firefox
  Operating System = Windows 11
  Screen Resolution = 1920x1080

Configuration: Mobile Safari
  Browser = Safari
  Operating System = macOS Sonoma
  Screen Resolution = 375x667

Assigning Configurations to Suites

Assign configurations at the suite level. When you assign three configurations to a suite containing 10 test cases, you get 30 test points -- each test case must be executed once per configuration. The test runner tracks results per configuration, so you know exactly which browser-OS combination fails.

This is where Azure DevOps Test Plans significantly outperforms spreadsheets. A 50-test-case suite with 4 configurations gives you 200 trackable test points with per-configuration pass rates, trend charts, and gap analysis -- all managed automatically.
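
Those per-configuration results are also available programmatically. Here is a minimal sketch that lists the test points for a suite and the configuration each one targets -- the plan and suite IDs are hypothetical, and the response property names should be verified against your API version:

// GET the test points for a suite (each point = one test case x one configuration)
var https = require("https");

var ORG = "my-organization";
var PROJECT = "my-project";
var PAT = process.env.AZURE_DEVOPS_PAT;
var PLAN_ID = 847;   // hypothetical
var SUITE_ID = 849;  // hypothetical

var options = {
  hostname: "dev.azure.com",
  path:
    "/" + ORG + "/" + PROJECT + "/_apis/testplan/Plans/" + PLAN_ID +
    "/Suites/" + SUITE_ID + "/TestPoint?api-version=7.1",
  headers: {
    Authorization: "Basic " + Buffer.from(":" + PAT).toString("base64"),
  },
};

https.get(options, function (res) {
  var data = "";
  res.on("data", function (chunk) { data += chunk; });
  res.on("end", function () {
    var points = JSON.parse(data).value || [];
    points.forEach(function (p) {
      var config = p.configuration ? p.configuration.name : "unknown";
      var testCase = p.testCaseReference ? p.testCaseReference.name : "unknown";
      console.log(config + " -> " + testCase);
    });
  });
});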

Using the Web Test Runner

The web-based test runner is the primary execution interface for manual tests. It opens in a side panel or separate window and walks the tester through each step.

Running a Test

  1. Open the Test Plans hub and navigate to your test suite
  2. Select one or more test cases from the list
  3. Click "Run" or "Run with options" (to select a specific configuration or build)
  4. The test runner opens showing Step 1

For each step, the tester:

  • Reads the Action text and performs it
  • Compares the actual result against the Expected Result
  • Marks the step as Pass, Fail, or Not Applicable
  • Optionally adds a comment or screenshot

If any step fails, the test runner prompts you to create a bug. The bug is automatically populated with:

  • Steps to reproduce (the test case steps, with the failing step highlighted)
  • System information (browser, OS, screen resolution)
  • Any screenshots or screen recordings captured during the run
  • A link back to the test case and test run

Collecting Evidence

During test execution, you can attach evidence to any step:

  • Screenshots: Click the camera icon to capture the current browser tab
  • Screen recordings: Start/stop recording to capture video of complex interactions
  • Comments: Add text notes about unexpected behavior or observations
  • Attachments: Upload files, logs, or other artifacts

This evidence is stored with the test result and included in any bugs filed from the run. When a developer opens the bug, they get the exact visual context of what the tester saw.

Bulk Execution

For smoke tests or quick regression passes, you can select multiple test cases and run them in sequence. The runner advances to the next test case automatically when you pass or fail the current one. You can also "Pass All" remaining steps if you verify the entire test case passes in one go, which saves time on simple verification tests.

Exploratory Testing

Exploratory testing is unscripted testing where the tester investigates the application based on experience, intuition, and risk areas. Azure DevOps supports this through the Test & Feedback browser extension and session-based test management.

The Test & Feedback Extension

Install the Test & Feedback extension for Chrome or Edge. Once connected to your Azure DevOps project, it provides:

  • Screen capture: Annotated screenshots with drawing tools
  • Screen recording: Video capture of testing sessions
  • Notes: Free-form text notes timestamped during the session
  • Bug creation: One-click bug filing with all captured evidence attached
  • Task creation: Create tasks for follow-up items discovered during exploration
  • Timer: Track how long you spend on each area

Starting an Exploratory Session

  1. Open the Test & Feedback extension
  2. Click "Start Session"
  3. Select the area path or work item you are exploring
  4. Begin testing -- the extension runs in the background

As you test, capture screenshots and notes. The extension automatically records browser actions and timestamps. When you find an issue:

  1. Click "Create Bug" in the extension
  2. The bug is pre-populated with your captured screenshots, notes, and browser information
  3. Add reproduction steps and severity
  4. Submit -- the bug is linked to your exploratory session

Session-Based Test Management

Session-based test management (SBTM) adds structure to exploratory testing without making it scripted. The concept is simple: define test charters that describe what area to explore and what risks to investigate, then run timed sessions against those charters.

In Azure DevOps, you can model this as:

Test Plan: Sprint 47 Exploratory
└── Exploratory Suite (Static)
    ├── Charter: Explore payment flow edge cases (30 min)
    ├── Charter: Investigate dashboard performance with 1000+ items (45 min)
    ├── Charter: Test accessibility of new modal dialogs (30 min)
    └── Charter: Verify email notification templates (20 min)

Each charter is a test case with a description of the area, risk hypothesis, and time box. Testers claim charters, run sessions using the Test & Feedback extension, and file bugs linked to the charter. After the session, the tester writes a brief debrief note in the test result:

  • What was tested
  • What was not tested (ran out of time, blocked, etc.)
  • Bugs found
  • Risks identified
  • Recommended follow-up

Filing Bugs from Test Runs

Bug filing from test execution is one of the strongest features of Azure DevOps Test Plans. When a tester files a bug from the test runner, the system captures context that would take minutes to assemble manually.

Automatic Context

A bug filed from a test run includes:

  • Repro Steps: The test case steps with pass/fail status per step, formatted as HTML. The failing step is highlighted
  • System Info: Browser name and version, OS, screen resolution
  • Build: The build number being tested (if specified when starting the run)
  • Test Case Link: Direct link to the test case for re-execution after the fix
  • Attachments: All screenshots, recordings, and files captured during the step

Bug-to-Test Traceability

When a bug is filed from a test case, Azure DevOps creates a "Tested By / Tests" link between them. This link enables several workflows:

  1. Verification: When the developer fixes the bug and moves it to "Resolved," the tester can re-run the original test case to verify the fix
  2. Regression tracking: If the same test case fails again in a future sprint, the historical bug link shows this is a recurring issue
  3. Coverage analysis: You can query which bugs have associated test cases and which do not
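
The coverage-analysis case lends itself to a work item link query. A sketch in WIQL using the Tested By link type's reference name -- switch MODE to DoesNotContain to find bugs with no associated test case:

SELECT [System.Id], [System.Title]
FROM WorkItemLinks
WHERE [Source].[System.WorkItemType] = 'Bug'
  AND [System.Links.LinkType] = 'Microsoft.VSTS.Common.TestedBy-Forward'
  AND [Target].[System.WorkItemType] = 'Test Case'
MODE (MustContain)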

Bulk Bug Updates

For test runs that reveal the same issue across multiple configurations, avoid filing duplicate bugs. Instead:

  1. File the bug from the first failing test point
  2. For subsequent failures caused by the same issue, mark the step as Failed and add a comment referencing the existing bug number
  3. Link additional test cases to the existing bug using "Tested By" links
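
Linking an additional test case to an existing bug can also be scripted. A minimal sketch of the JSON-patch body for a PATCH to /_apis/wit/workitems/{bugId} (sent with Content-Type application/json-patch+json); the work item URL shown is hypothetical:

var patchDoc = [
  {
    op: "add",
    path: "/relations/-",
    value: {
      // "Tested By" from the bug's side; the test case sees it as "Tests"
      rel: "Microsoft.VSTS.Common.TestedBy-Forward",
      url: "https://dev.azure.com/my-organization/_apis/wit/workItems/12047",
      attributes: { comment: "Same issue reproduced by this test case" },
    },
  },
];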

Integrating Manual Testing into Sprint Workflows

Manual testing is most effective when it is planned into the sprint alongside development work, not treated as an afterthought in the last two days.

Sprint Test Planning

At the beginning of each sprint, create a new test plan or add a new suite to an existing plan:

Test Plan: Release 3.2 Testing
├── Sprint 47
│   ├── New Feature Tests (Requirement-based)
│   ├── Bug Verification (Query: Bugs resolved in Sprint 47)
│   └── Regression (Query: Priority 1-2, smoke tests)
├── Sprint 48
│   └── ...

During sprint planning, the test lead estimates test effort alongside development effort. Each user story should have "test cases written" and "test cases executed" as part of its definition of done.

Tester Assignment

Assign testers to test suites or individual test points. The "Assign testers to test points" dialog lets you distribute work across team members. You can assign by:

  • Configuration: One tester handles all Chrome tests, another handles Firefox
  • Area: One tester owns Authentication, another owns Dashboard
  • Priority: Senior testers handle Priority 1, junior testers handle Priority 2-3

Tracking Progress

The Test Plans hub provides several views for tracking manual test progress:

  • Charts: Test outcome pie charts (Passed, Failed, Blocked, Not Run) at the suite or plan level
  • Progress bar: Visual indicator of completion percentage per suite
  • By-tester view: See how many test points each tester has completed
  • By-configuration view: See pass rates per configuration

For sprint reviews, the Test Plans "Progress Report" widget shows:

Suite: Sprint 47 New Features
Total test points: 120
  Passed: 95 (79.2%)
  Failed: 8 (6.7%)
  Blocked: 5 (4.2%)
  Not Run: 12 (10.0%)

Complete Working Example

This example demonstrates a Node.js script that creates a structured manual test plan using the Azure DevOps REST API, including test suites, test cases with shared steps, and configuration assignments.

var https = require("https");
var url = require("url");

var ORG = "my-organization";
var PROJECT = "my-project";
var PAT = process.env.AZURE_DEVOPS_PAT;
var API_VERSION = "7.1";

var BASE_URL = "https://dev.azure.com/" + ORG + "/" + PROJECT;
var AUTH = "Basic " + Buffer.from(":" + PAT).toString("base64");

function makeRequest(method, path, body, contentType) {
  return new Promise(function (resolve, reject) {
    var fullUrl = path.indexOf("https://") === 0 ? path : BASE_URL + path;
    var parsed = url.parse(fullUrl);
    var options = {
      hostname: parsed.hostname,
      path: parsed.path,
      method: method,
      headers: {
        "Content-Type": "application/json",
        Authorization: AUTH,
      },
    };

    var req = https.request(options, function (res) {
      var data = "";
      res.on("data", function (chunk) {
        data += chunk;
      });
      res.on("end", function () {
        if (res.statusCode >= 200 && res.statusCode < 300) {
          resolve(data ? JSON.parse(data) : null);
        } else {
          reject(
            new Error(
              method +
                " " +
                path +
                " failed: " +
                res.statusCode +
                " " +
                data
            )
          );
        }
      });
    });

    req.on("error", reject);
    if (body) {
      req.write(JSON.stringify(body));
    }
    req.end();
  });
}

function createWorkItem(type, fields) {
  var patchDoc = Object.keys(fields).map(function (key) {
    return {
      op: "add",
      path: "/fields/" + key,
      value: fields[key],
    };
  });

  return makeRequest(
    "POST",
    "/_apis/wit/workitems/$" +
      encodeURIComponent(type) +
      "?api-version=" +
      API_VERSION,
    patchDoc,
    // Work item create/update calls require the JSON Patch content type
    "application/json-patch+json"
  );
}

function createTestPlan(name, areaPath, iteration) {
  return makeRequest(
    "POST",
    "/_apis/testplan/plans?api-version=" + API_VERSION,
    {
      name: name,
      areaPath: areaPath,
      iteration: iteration,
    }
  );
}

function createTestSuite(planId, parentSuiteId, name, suiteType, queryString) {
  var body = {
    name: name,
    suiteType: suiteType || "staticTestSuite",
    // Child suites are created by referencing the parent suite in the body
    parentSuite: { id: parentSuiteId },
  };

  if (suiteType === "dynamicTestSuite" && queryString) {
    body.queryString = queryString;
  }

  return makeRequest(
    "POST",
    "/_apis/testplan/Plans/" + planId + "/suites?api-version=" + API_VERSION,
    body
  );
}

function createSharedSteps(title, steps) {
  var stepsXml = '<steps id="0" last="' + steps.length + '">';
  steps.forEach(function (step, index) {
    stepsXml +=
      '<step id="' +
      (index + 1) +
      '" type="ActionStep">' +
      "<parameterizedString>" +
      escapeXml(step.action) +
      "</parameterizedString>" +
      "<parameterizedString>" +
      escapeXml(step.expected) +
      "</parameterizedString>" +
      "</step>";
  });
  stepsXml += "</steps>";

  return createWorkItem("Shared Steps", {
    "System.Title": title,
    "Microsoft.VSTS.TCM.Steps": stepsXml,
  });
}

function createTestCase(title, steps, sharedStepRefs, parameters) {
  var stepId = 0;
  var stepsXml = '<steps id="0" last="' + (steps.length + (sharedStepRefs ? sharedStepRefs.length : 0)) + '">';

  if (sharedStepRefs) {
    sharedStepRefs.forEach(function (ref) {
      stepId++;
      stepsXml +=
        '<compref id="' + stepId + '" ref="' + ref + '" />';
    });
  }

  steps.forEach(function (step) {
    stepId++;
    stepsXml +=
      '<step id="' +
      stepId +
      '" type="' +
      (step.type || "ActionStep") +
      '">' +
      "<parameterizedString>" +
      escapeXml(step.action) +
      "</parameterizedString>" +
      "<parameterizedString>" +
      escapeXml(step.expected || "") +
      "</parameterizedString>" +
      "</step>";
  });
  stepsXml += "</steps>";

  var fields = {
    "System.Title": title,
    "Microsoft.VSTS.TCM.Steps": stepsXml,
  };

  if (parameters) {
    // The data table goes in LocalDataSource; the parameter definitions themselves
    // live in the separate Microsoft.VSTS.TCM.Parameters field.
    fields["Microsoft.VSTS.TCM.LocalDataSource"] = buildParameterXml(parameters);
  }

  return createWorkItem("Test Case", fields);
}

function buildParameterXml(parameters) {
  var columns = Object.keys(parameters[0]);
  var xml = '<?xml version="1.0" encoding="utf-8"?>';
  xml += '<NewDataSet>';

  parameters.forEach(function (row, index) {
    xml += "<Table1>";
    columns.forEach(function (col) {
      xml += "<" + col + ">" + escapeXml(row[col]) + "</" + col + ">";
    });
    xml += "</Table1>";
  });

  xml += "</NewDataSet>";
  return xml;
}

function addTestCaseToSuite(planId, suiteId, testCaseIds) {
  var body = testCaseIds.map(function (id) {
    return { workItem: { id: id } };
  });

  return makeRequest(
    "POST",
    "/_apis/testplan/Plans/" +
      planId +
      "/Suites/" +
      suiteId +
      "/TestPoint?api-version=" +
      API_VERSION,
    body
  );
}

function createConfiguration(name, variables) {
  return makeRequest(
    "POST",
    "/_apis/testplan/configurations?api-version=" + API_VERSION,
    {
      name: name,
      values: variables,
    }
  );
}

function assignConfigurationsToSuite(planId, suiteId, configIds) {
  var body = configIds.map(function (id) {
    return { id: id };
  });

  return makeRequest(
    "PATCH",
    "/_apis/testplan/Plans/" +
      planId +
      "/Suites/" +
      suiteId +
      "?api-version=" +
      API_VERSION,
    {
      defaultConfigurations: body,
      // Turn off inheritance so the explicit configuration list takes effect
      inheritDefaultConfigurations: false,
    }
  );
}

function escapeXml(str) {
  return String(str)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

function setupSprintTestPlan() {
  var planId;
  var rootSuiteId;
  var sharedStepId;
  var smokeTestSuiteId;
  var featureSuiteId;
  var regressionSuiteId;
  var configIds = [];

  console.log("Creating test plan for Sprint 47...");

  createTestPlan("Sprint 47 Testing", "MyProject", "MyProject\\Sprint 47")
    .then(function (plan) {
      planId = plan.id;
      rootSuiteId = plan.rootSuite.id;
      console.log("Created test plan: " + planId);
      console.log("Root suite ID: " + rootSuiteId);

      // Create configurations
      return Promise.all([
        createConfiguration("Desktop Chrome", [
          { name: "Browser", value: "Chrome" },
          { name: "OS", value: "Windows 11" },
        ]),
        createConfiguration("Desktop Firefox", [
          { name: "Browser", value: "Firefox" },
          { name: "OS", value: "Windows 11" },
        ]),
        createConfiguration("Mobile Safari", [
          { name: "Browser", value: "Safari" },
          { name: "OS", value: "iOS 17" },
        ]),
      ]);
    })
    .then(function (configs) {
      configIds = configs.map(function (c) {
        return c.id;
      });
      console.log(
        "Created configurations: " +
          configs.map(function (c) {
            return c.name;
          }).join(", ")
      );

      // Create shared steps for login
      return createSharedSteps("Login as Standard User", [
        {
          action: "Navigate to https://app.example.com/login",
          expected: "Login page displays",
        },
        {
          action: 'Enter "[email protected]" in email field',
          expected: "Email accepted",
        },
        {
          action: 'Enter "ValidP@ss123" in password field',
          expected: "Password masked",
        },
        {
          action: 'Click "Sign In"',
          expected: "Dashboard loads with welcome message",
        },
      ]);
    })
    .then(function (sharedStep) {
      sharedStepId = sharedStep.id;
      console.log("Created shared steps: " + sharedStepId);

      // Create test suites
      return createTestSuite(planId, rootSuiteId, "Smoke Tests", "staticTestSuite")
        .then(function (suite) {
          smokeTestSuiteId = suite.id;
          console.log("Created Smoke Tests suite: " + smokeTestSuiteId);
          return createTestSuite(
            planId,
            rootSuiteId,
            "New Features",
            "staticTestSuite"
          );
        })
        .then(function (suite) {
          featureSuiteId = suite.id;
          console.log("Created New Features suite: " + featureSuiteId);
          return createTestSuite(
            planId,
            rootSuiteId,
            "Regression - Priority 1",
            "dynamicTestSuite",
            "SELECT [System.Id] FROM WorkItems WHERE [System.WorkItemType] = 'Test Case' AND [Microsoft.VSTS.Common.Priority] = 1"
          );
        })
        .then(function (suite) {
          regressionSuiteId = suite.id;
          console.log("Created Regression suite: " + regressionSuiteId);
        });
    })
    .then(function () {
      // Assign configurations to the smoke test suite
      return assignConfigurationsToSuite(planId, smokeTestSuiteId, configIds);
    })
    .then(function () {
      console.log("Assigned configurations to Smoke Tests suite");

      // Create test cases for the smoke suite
      var testCases = [
        {
          title: "Verify homepage loads correctly",
          steps: [
            {
              action: "Navigate to https://app.example.com",
              expected:
                "Homepage loads within 3 seconds. Hero banner, navigation, and footer are visible",
            },
            {
              action: "Verify the main navigation links are functional",
              expected:
                "Dashboard, Projects, Settings links are present and not broken",
            },
          ],
        },
        {
          title: "Verify user can create a new project",
          steps: [
            {
              action: 'Click "New Project" button on dashboard',
              expected: "New Project dialog appears",
            },
            {
              action: 'Enter "Test Project" as project name',
              expected: "Name field accepts input",
            },
            {
              action: 'Click "Create"',
              expected:
                "Project is created. User is redirected to the project page. Success notification appears",
            },
          ],
        },
        {
          title: "Verify login with valid credentials",
          steps: [
            {
              action: "Verify the dashboard shows recent activity",
              expected: "Recent activity section displays at least one entry",
            },
          ],
        },
      ];

      // Create the test cases sequentially
      var createdIds = [];
      var chain = Promise.resolve();

      testCases.forEach(function (tc, index) {
        chain = chain.then(function () {
          var sharedRefs = index === 2 ? [sharedStepId] : null;
          return createWorkItem("Test Case", {
            "System.Title": tc.title,
            "Microsoft.VSTS.Common.Priority": 1,
            "Microsoft.VSTS.TCM.Steps": buildStepsXml(tc.steps, sharedRefs),
          }).then(function (item) {
            createdIds.push(item.id);
            console.log(
              "Created test case: " + item.id + " - " + tc.title
            );
            return item;
          });
        });
      });

      return chain.then(function () {
        return createdIds;
      });
    })
    .then(function (testCaseIds) {
      // Add test cases to smoke suite
      return addTestCaseToSuite(planId, smokeTestSuiteId, testCaseIds);
    })
    .then(function () {
      console.log("Added test cases to Smoke Tests suite");
      console.log("");
      console.log("=== Sprint 47 Test Plan Setup Complete ===");
      console.log("Plan ID: " + planId);
      console.log("Configurations: " + configIds.length);
      console.log(
        "Test points: 3 test cases x " +
          configIds.length +
          " configs = " +
          3 * configIds.length
      );
      console.log("");
      console.log("Next steps:");
      console.log("1. Add requirement-based suites for sprint user stories");
      console.log("2. Assign testers to test points");
      console.log("3. Create exploratory test charters");
    })
    .catch(function (err) {
      console.error("Setup failed: " + err.message);
      process.exit(1);
    });
}

function buildStepsXml(steps, sharedRefs) {
  var stepId = 0;
  var totalSteps = steps.length + (sharedRefs ? sharedRefs.length : 0);
  var xml = '<steps id="0" last="' + totalSteps + '">';

  if (sharedRefs) {
    sharedRefs.forEach(function (ref) {
      stepId++;
      xml += '<compref id="' + stepId + '" ref="' + ref + '" />';
    });
  }

  steps.forEach(function (step) {
    stepId++;
    xml +=
      '<step id="' + stepId + '" type="ActionStep">' +
      "<parameterizedString>" +
      escapeXml(step.action) +
      "</parameterizedString>" +
      "<parameterizedString>" +
      escapeXml(step.expected || "") +
      "</parameterizedString>" +
      "</step>";
  });

  xml += "</steps>";
  return xml;
}

setupSprintTestPlan();

Running this script produces:

Created test plan: 847
Root suite ID: 848
Created configurations: Desktop Chrome, Desktop Firefox, Mobile Safari
Created shared steps: 12045
Created Smoke Tests suite: 849
Created New Features suite: 850
Created Regression suite: 851
Assigned configurations to Smoke Tests suite
Created test case: 12046 - Verify homepage loads correctly
Created test case: 12047 - Verify user can create a new project
Created test case: 12048 - Verify login with valid credentials
Added test cases to Smoke Tests suite

=== Sprint 47 Test Plan Setup Complete ===
Plan ID: 847
Configurations: 3
Test points: 3 test cases x 3 configs = 9

Next steps:
1. Add requirement-based suites for sprint user stories
2. Assign testers to test points
3. Create exploratory test charters

Common Issues and Troubleshooting

"You need a Basic + Test Plans access level"

Azure Test Plans requires the Basic + Test Plans access level or a Visual Studio Enterprise subscription. The Basic access level only provides read access to test plans. If testers see disabled Run buttons or cannot create test cases, check their access level in Organization Settings > Users. Stakeholder access has no test plan capabilities at all.

Test Points Not Generating for Configurations

When you assign configurations to a suite but existing test cases do not get new test points, the configurations only apply to newly added test cases. To regenerate test points for existing cases, remove the test cases from the suite and re-add them, or use the "Assign configurations" option at the test point level rather than the suite level.

Shared Steps Not Updating in Test Cases

Shared steps are cached when a test case is opened. If you update shared steps while a tester has the test runner open, they will see the old version. The tester needs to close and re-open the test runner to pick up shared step changes. Communicate shared step updates to the team before a test run, not during one.

Exploratory Testing Extension Not Connecting

The Test & Feedback extension requires the tester to be signed into their Azure DevOps organization within the extension settings. Common connection failures:

Error: "Unable to connect to Azure DevOps"
Fix: Check that the organization URL is correct (https://dev.azure.com/orgname, not the old visualstudio.com format)

Error: "You don't have permission to create work items"
Fix: The tester needs Contributor access to the project, not just Basic + Test Plans

Error: Extension icon is grayed out
Fix: The extension only works on HTTP/HTTPS pages. It will not activate on chrome:// or extension pages

Test Results Not Appearing in Analytics

Test results from manual test runs can take 15-30 minutes to appear in the Analytics views and OData feeds. If results are missing after an hour, verify that the test run was completed (not just started) and that the Analytics extension is installed and enabled for your organization. Draft or "In Progress" test runs do not publish results to Analytics until they are marked Complete.

Best Practices

  • Write test cases before development starts. When testers write test cases during sprint planning (based on acceptance criteria in user stories), they often identify ambiguities and missing requirements before any code is written. This is cheaper than finding issues in testing.

  • Keep test steps atomic and verifiable. Each step should have exactly one action and one verifiable expected result. "Navigate to the page, fill in the form, and submit" is three steps, not one. Atomic steps make failure diagnosis immediate -- you know exactly which action produced unexpected behavior.

  • Use shared steps aggressively. If a sequence of steps appears in more than two test cases, extract it as shared steps. The maintenance cost of not using shared steps grows linearly with the number of test cases, and login flow changes alone can invalidate dozens of test cases if the steps are duplicated.

  • Create configurations before test cases. Define your browser-OS-resolution matrix first, assign it to suites, then add test cases. This ensures every test case immediately generates the right number of test points. Adding configurations after test cases are already in the suite requires manual regeneration.

  • Time-box exploratory sessions. Open-ended exploratory testing either gets cut short by schedule pressure or expands to fill available time without focus. Define 30-45 minute charters with specific areas and risk hypotheses. Testers report more bugs per hour in focused sessions than in undirected exploration.

  • File bugs from the test runner, not separately. Always file bugs directly from the test runner or Test & Feedback extension rather than creating bugs manually in the backlog. The automatic context (repro steps, screenshots, system info, test case links) saves developers significant investigation time and gives testers credit for thorough bug reports.

  • Review test case quality quarterly. Test cases rot. Steps reference UI elements that have been renamed, expected results describe old behavior, and parameterized data becomes stale. Schedule quarterly reviews where the test team walks through high-priority test cases and updates them. Query-based suites can help identify test cases that have not been executed in over 90 days -- those are candidates for review or retirement.

  • Separate sprint testing from release testing. Sprint test plans track new feature testing and bug verification for the current sprint. Release test plans contain the full regression suite that runs before each release. Mixing them in a single plan makes progress tracking meaningless because sprint tests are expected to be 100% complete while release regression may run at a different cadence.
