Manual Testing Workflows in Azure DevOps

Streamline manual testing in Azure DevOps with test runner workflows, bug filing, sign-off processes, and REST API automation

Azure DevOps Test Plans provides a structured environment for running manual tests, tracking results, and managing sign-off workflows across your team. Despite the industry push toward full automation, manual testing remains essential for exploratory testing, usability validation, and scenarios where human judgment cannot be replaced by scripts. This article walks through the complete manual testing lifecycle in Azure DevOps, from configuring test runs to automating administrative tasks with the REST API.

Prerequisites

Before working through this article, you should have the following in place:

  • An Azure DevOps organization with a project that has Azure Test Plans enabled (requires Basic + Test Plans license or a Visual Studio Enterprise subscription)
  • At least one test plan with test suites and test cases defined
  • Node.js v16 or later installed on your machine
  • A Personal Access Token (PAT) with the Test Management (Read & Write) scope
  • Basic familiarity with the Azure DevOps web portal and REST API conventions

When Manual Testing Still Matters

Automated testing is not a silver bullet. There are entire classes of testing where human observation is irreplaceable:

  • Exploratory testing — investigating unfamiliar features without predefined scripts, looking for edge cases that automated suites never anticipated
  • Usability and accessibility testing — evaluating whether the interface is intuitive, whether screen readers handle content correctly, whether color contrast meets WCAG guidelines
  • Visual regression — automated pixel comparison tools catch obvious layout breaks, but a human eye spots subtleties like misaligned spacing or awkward text wrapping
  • Cross-device testing — running through flows on physical mobile devices, tablets, and various browsers where emulators fall short
  • Compliance and regulatory testing — auditors often require documented evidence that a human verified specific business rules

The goal is not to choose between manual and automated testing. It is to recognize where each approach delivers the most value and build workflows that make manual testing efficient and repeatable.

Azure Test Plans Web Runner

The web-based test runner is the primary tool for executing manual tests in Azure DevOps. You access it from Test Plans > Execute by selecting one or more test points and clicking Run.

The test runner opens in a side panel or a separate window, presenting each test step sequentially. For each step, the tester can:

  1. Read the action description and expected result
  2. Mark the step as Passed, Failed, or Not Applicable
  3. Add a comment explaining any deviation from expected behavior
  4. Attach screenshots, files, or screen recordings as evidence

The runner maintains state across browser sessions. If you close the browser mid-run, you can resume from where you left off. This is particularly useful for long test cases that span multiple system interactions or require waiting periods between steps.

Launching the Runner

Navigate to your test plan, select the suite containing your test cases, and check the test points you want to run. Click Run for web application to launch the runner in a new browser tab. The runner overlays on top of your application under test, so you can interact with the application and mark steps without switching contexts.

Step-by-Step Execution

Each test step displays the action to perform and the expected result. Mark each step as you go. If a step fails, you should document the actual behavior in the comment field before marking it failed. This creates an audit trail that developers can reference when triaging the defect.

Test Runner Features

Screenshots and Annotations

The test runner includes a built-in screenshot tool. Click the camera icon to capture the current browser window. After capture, an inline image editor opens where you can:

  • Draw rectangles or freehand shapes to highlight problem areas
  • Add text annotations pointing out specific UI elements
  • Crop the image to focus on the relevant portion

These annotated screenshots are automatically attached to the test step result and linked to any bugs filed from that step. This eliminates the workflow of capturing a screenshot externally, opening an image editor, annotating, saving, and manually uploading.

Screen Recording

For complex interaction sequences, the test runner supports screen recording. Start a recording before you begin the test steps, and the runner captures your interactions as a video file. When you stop recording, the video attaches to the test run. This is invaluable for reproducing timing-sensitive bugs or multi-step flows where a static screenshot does not convey the full context.

Action Log

The test runner automatically logs your actions as you interact with the application under test. Clicks, keystrokes, navigation events, and form submissions are recorded in a structured log. When you file a bug from the runner, this action log is included as reproduction steps. This feature alone saves significant time compared to manually writing "steps to reproduce" for every defect.

Running Tests with Configurations

Test configurations let you run the same test case across multiple environments without duplicating test cases. A configuration defines a set of variables — operating system, browser, screen resolution, locale, or any custom dimension.

For example, you might define three configurations:

Configuration       OS           Browser
Windows Chrome      Windows 11   Chrome
Windows Firefox     Windows 11   Firefox
macOS Safari        macOS 14     Safari
When you assign these configurations to a test suite, Azure DevOps generates a test point for each combination of test case and configuration. A suite with 10 test cases and 3 configurations produces 30 test points. Each test point tracks its own pass/fail status independently.

This approach keeps your test case count manageable while ensuring coverage across your target environments. Testers can filter the test point list by configuration and focus on their assigned environment.
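
If you manage configurations programmatically, you can verify the generated test points with a short script. The sketch below assumes the getTestPoints helper defined later in this article and simply groups a suite's test points by configuration name:

function countPointsByConfiguration(planId, suiteId) {
    return getTestPoints(planId, suiteId).then(function (data) {
        // The REST response wraps the points in a "value" array
        var points = data.value || data || [];
        var counts = {};
        points.forEach(function (point) {
            var config = point.configuration ? point.configuration.name : "Default";
            counts[config] = (counts[config] || 0) + 1;
        });
        return counts; // e.g. { "Windows Chrome": 10, "Windows Firefox": 10, "macOS Safari": 10 }
    });
}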

Tracking Test Progress

Azure DevOps provides several views for monitoring manual testing progress:

Charts and Widgets

From the Test Plans tab, the Chart view shows pass/fail/not-run distributions as pie charts or bar charts. You can pin these to your project dashboard so stakeholders see real-time progress without navigating into Test Plans.

Progress Report

The built-in Test Plans Progress Report (available under Analytics or Pipelines > Test Plans) aggregates results across suites and configurations. It shows:

  • Total test points vs. completed
  • Pass rate over time
  • Failure trends by suite or configuration
  • Blocked or not-applicable counts

Outcome Summary

At the test run level, the Run Summary page shows completion percentage, duration, defects filed, and outcome distribution. This is the view you share with stakeholders when requesting sign-off.
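
The same numbers are available programmatically from the run resource if you want to embed them in a custom report. This is a minimal sketch using the makeRequest helper from the Authentication Setup section below; the field names (totalTests, passedTests, incompleteTests, unanalyzedTests) are assumptions to verify against your own API responses:

function getRunSummary(runId) {
    return makeRequest("GET", "/test/runs/" + runId + "?api-version=7.1")
        .then(function (run) {
            return {
                name: run.name,
                state: run.state,
                total: run.totalTests,           // assumed field names on the run resource
                passed: run.passedTests,
                incomplete: run.incompleteTests,
                unanalyzed: run.unanalyzedTests
            };
        });
}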

Bug Filing from the Test Runner

When a test step fails, click Create bug directly from the test runner. Azure DevOps pre-populates the bug work item with:

  • Title prefixed with the test case name
  • Repro Steps assembled from the test step actions, expected results, actual results, and your comments
  • System Info capturing browser version, OS, and screen resolution
  • Attachments including any screenshots, recordings, or action logs from the current step
  • Links connecting the bug to the test case, test run, and test result

This tight integration between testing and bug tracking ensures that every defect has full context. Developers do not need to ask "what were you doing when this happened?" because the answer is already embedded in the bug.

You can also link an existing bug to a failed step if the defect was already reported. This prevents duplicate bugs and helps you track which test failures map to known issues.
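
The runner handles all of this for you, but if you need to file bugs outside the runner (for example, importing defects logged in another tool), the Work Item Tracking API accepts a JSON Patch document. The sketch below is not what the runner does internally; it only illustrates programmatic bug creation using the standard System.Title and Microsoft.VSTS.TCM.ReproSteps fields. Note the application/json-patch+json content type, which differs from the application/json used elsewhere in this article:

var https = require("https");

var ORG = process.env.AZURE_DEVOPS_ORG;
var PROJECT = process.env.AZURE_DEVOPS_PROJECT;
var PAT = process.env.AZURE_DEVOPS_PAT;

function createBug(title, reproSteps) {
    return new Promise(function (resolve, reject) {
        // The work item type name ("Bug") depends on your process template
        var patch = [
            { op: "add", path: "/fields/System.Title", value: title },
            { op: "add", path: "/fields/Microsoft.VSTS.TCM.ReproSteps", value: reproSteps }
        ];
        var body = JSON.stringify(patch);
        var options = {
            hostname: "dev.azure.com",
            path: "/" + ORG + "/" + PROJECT + "/_apis/wit/workitems/$Bug?api-version=7.1",
            method: "POST",
            headers: {
                "Authorization": "Basic " + Buffer.from(":" + PAT).toString("base64"),
                "Content-Type": "application/json-patch+json",
                "Content-Length": Buffer.byteLength(body)
            }
        };
        var req = https.request(options, function (res) {
            var chunks = [];
            res.on("data", function (chunk) { chunks.push(chunk); });
            res.on("end", function () {
                var text = Buffer.concat(chunks).toString();
                if (res.statusCode >= 200 && res.statusCode < 300) {
                    resolve(JSON.parse(text));
                } else {
                    reject(new Error("HTTP " + res.statusCode + ": " + text));
                }
            });
        });
        req.on("error", reject);
        req.write(body);
        req.end();
    });
}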

Assigning and Managing Test Runs

Creating Test Runs Manually

While most test runs are created implicitly when you click Run in Test Plans, you can also create runs explicitly via the Runs tab. This is useful when you want to:

  • Assign a specific set of test points to a particular tester
  • Schedule a run for a future sprint or release milestone
  • Create a formal run record for audit purposes before execution begins

Assigning Testers

Test points can be assigned to individual testers at the suite level. Select the test points, click Assign tester, and choose team members. Each tester then sees only their assigned test points when they open the suite, reducing confusion in large teams.

You can also bulk-assign by configuration. For example, assign all "macOS Safari" test points to the team member who has access to a Mac.
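
Assignment can also be scripted. The sketch below assumes the makeRequest helper from the Authentication Setup section later in this article and uses the test points update endpoint; the exact shape of the tester identity object (an id GUID plus a display name) is an assumption to verify against your organization:

function assignTester(planId, suiteId, pointIds, testerId, testerDisplayName) {
    // Multiple point IDs in one call perform a bulk assignment
    var path = "/test/plans/" + planId + "/suites/" + suiteId
        + "/points/" + pointIds.join(",") + "?api-version=7.1";
    var body = {
        tester: { id: testerId, displayName: testerDisplayName }
    };
    return makeRequest("PATCH", path, body);
}

// Example (illustrative IDs): assignTester(42, 120, [101, 102, 103], "tester-guid", "Jordan Lee");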

Run States

A test run progresses through these states:

  1. Not Started — created but no test points executed
  2. In Progress — at least one test point has a result
  3. Completed — all test points have outcomes
  4. Aborted — the run was cancelled before completion

Monitoring these states helps project managers identify stalled runs and follow up with testers.
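
A small script can surface stalled runs automatically. The sketch below assumes the makeRequest helper from the Authentication Setup section and that in-progress runs report a state of "InProgress"; verify the state strings against your own API responses:

function findStalledRuns(maxAgeHours) {
    return makeRequest("GET", "/test/runs?api-version=7.1").then(function (data) {
        var now = Date.now();
        return (data.value || []).filter(function (run) {
            if (run.state !== "InProgress") {
                return false;
            }
            var started = run.startedDate ? Date.parse(run.startedDate) : now;
            return now - started > maxAgeHours * 60 * 60 * 1000;
        });
    });
}

// findStalledRuns(48).then(function (runs) {
//     runs.forEach(function (r) { console.log("Stalled: #" + r.id + " " + r.name); });
// });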

Test Point Management

Test points are the atomic unit of execution in Azure Test Plans. Each test point represents one test case in one configuration assigned to one tester. Understanding test point management is key to running efficient manual testing cycles.

Resetting Test Points

After a bug fix, you often need to re-run failed tests. Rather than creating a new test run, you can reset specific test points to Not Run status. Select the failed test points, click Reset to active, and they return to the queue for re-execution.
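
The reset can also be done through the REST API. The sketch below assumes the makeRequest helper from the Authentication Setup section and that the test points update endpoint accepts a resetToActive flag; passing several point IDs gives you the bulk reset described in the next section:

function resetTestPoints(planId, suiteId, pointIds) {
    var path = "/test/plans/" + planId + "/suites/" + suiteId
        + "/points/" + pointIds.join(",") + "?api-version=7.1";
    return makeRequest("PATCH", path, { resetToActive: true });
}

// Example (illustrative IDs): resetTestPoints(42, 120, [101, 105, 108]);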

Bulk Operations

For large test suites, bulk operations save time:

  • Bulk assign — assign dozens of test points to testers in one action
  • Bulk set outcome — mark multiple test points as passed (useful for regression suites where most tests pass and only a few need individual attention)
  • Bulk reset — reset all failed points for re-testing after a patch

Filtering

The test point grid supports filtering by:

  • Tester assignment
  • Configuration
  • Outcome (passed, failed, not run, blocked)
  • Priority
  • Test case title search

These filters help testers focus on their work and help managers identify bottlenecks.

Stakeholder Feedback Requests

Azure DevOps includes a Request Feedback feature that extends manual testing to stakeholders who may not have Test Plans licenses. You can send a feedback request via email to product owners, designers, or business analysts. They receive a link that opens a lightweight feedback tool in their browser.

The feedback tool allows stakeholders to:

  • Capture screenshots with annotations
  • Record their screen
  • Add typed notes
  • Rate their experience
  • Submit feedback that appears as work items in your backlog

This workflow bridges the gap between formal test execution and informal stakeholder review. It is especially useful during UAT (User Acceptance Testing) phases.

Session-Based Test Management

Session-based test management (SBTM) provides structure to exploratory testing without the rigidity of scripted test cases. In Azure DevOps, you use the Test & Feedback browser extension to run exploratory sessions.

A session typically includes:

  • Charter — a brief mission statement describing what to explore (e.g., "Explore the checkout flow with international addresses")
  • Time box — a fixed duration, usually 60–90 minutes
  • Notes — observations recorded during the session
  • Bugs — defects filed directly from the extension
  • Coverage — which areas of the application were tested

After the session, the tester writes a summary and the session data appears in the Runs tab under the associated test plan. Managers can review session coverage and identify untested areas.

Test Sign-Off Workflows

Sign-off is the formal process of declaring that testing is complete and the build is ready for release. Azure DevOps does not have a dedicated "sign-off" button, but you can build an effective sign-off workflow using existing features:

  1. Define exit criteria — document the pass rate threshold, zero critical bugs requirement, and coverage expectations in the test plan description
  2. Monitor progress — use the progress report and dashboard widgets to track against exit criteria
  3. Review outstanding bugs — ensure all bugs filed during testing are triaged and either fixed or deferred with justification
  4. Complete the test run — mark the run as complete once all test points have outcomes
  5. Export results — generate a report showing pass/fail distribution, linked bugs, and configuration coverage
  6. Update the test plan state — set the test plan to Completed to signal that testing is finished

For regulated environments, you may need to export test results as PDF or CSV files and store them in a controlled document repository. The REST API makes this straightforward to automate.

Automating Manual Test Administration with Node.js

The Azure DevOps REST API exposes the full Test Management surface. You can create test runs, assign testers, query results, and generate reports programmatically. This is valuable for:

  • Automating repetitive administrative tasks (creating runs for each sprint)
  • Generating custom sign-off reports that match your organization's template
  • Integrating test status into CI/CD gates
  • Syncing test data with external tools

Authentication Setup

All API calls require authentication via a Personal Access Token:

var https = require("https");

var ORG = "your-organization";
var PROJECT = "your-project";
var PAT = process.env.AZURE_DEVOPS_PAT;

var BASE_URL = "https://dev.azure.com/" + ORG + "/" + PROJECT + "/_apis";
var AUTH_HEADER = "Basic " + Buffer.from(":" + PAT).toString("base64");

function makeRequest(method, path, body) {
    return new Promise(function (resolve, reject) {
        var url = new URL(BASE_URL + path);
        var options = {
            hostname: url.hostname,
            path: url.pathname + url.search,
            method: method,
            headers: {
                "Authorization": AUTH_HEADER,
                "Content-Type": "application/json",
                "Accept": "application/json"
            }
        };

        var req = https.request(options, function (res) {
            var chunks = [];
            res.on("data", function (chunk) {
                chunks.push(chunk);
            });
            res.on("end", function () {
                var responseBody = Buffer.concat(chunks).toString();
                if (res.statusCode >= 200 && res.statusCode < 300) {
                    resolve(JSON.parse(responseBody));
                } else {
                    reject(new Error("HTTP " + res.statusCode + ": " + responseBody));
                }
            });
        });

        req.on("error", function (err) {
            reject(err);
        });

        if (body) {
            req.write(JSON.stringify(body));
        }
        req.end();
    });
}

Querying Test Plans and Suites

function getTestPlans() {
    return makeRequest("GET", "/test/plans?api-version=7.1");
}

function getTestSuites(planId) {
    return makeRequest("GET", "/test/plans/" + planId + "/suites?api-version=7.1");
}

function getTestPoints(planId, suiteId) {
    var path = "/test/plans/" + planId + "/suites/" + suiteId + "/points?api-version=7.1";
    return makeRequest("GET", path);
}

Creating a Test Run

function createTestRun(planId, name, pointIds) {
    var body = {
        name: name,
        plan: { id: planId },
        pointIds: pointIds,
        automated: false,
        comment: "Manual test run created via REST API"
    };

    return makeRequest("POST", "/test/runs?api-version=7.1", body);
}
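
A typical call chains the helpers above: fetch the points for a suite, collect their IDs, and pass them to createTestRun. The plan and suite IDs here are illustrative:

getTestPoints(42, 120)
    .then(function (data) {
        var points = data.value || [];
        var pointIds = points.map(function (p) { return p.id; });
        return createTestRun(42, "Sprint 18 regression", pointIds);
    })
    .then(function (run) {
        console.log("Created run #" + run.id);
    });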

Updating Test Results

function updateTestResults(runId, results) {
    // results is an array of { resultId, outcome, comment }
    var body = results.map(function (r) {
        return {
            id: r.resultId,  // identifies the result created when the run was created
            outcome: r.outcome,  // "Passed", "Failed", "Blocked", "NotApplicable"
            comment: r.comment || "",
            state: "Completed"
        };
    });

    return makeRequest("PATCH", "/test/runs/" + runId + "/results?api-version=7.1", body);
}

Generating a Sign-Off Report

function generateSignOffReport(runId) {
    return makeRequest("GET", "/test/runs/" + runId + "/results?api-version=7.1")
        .then(function (data) {
            var results = data.value;
            var summary = {
                total: results.length,
                passed: 0,
                failed: 0,
                blocked: 0,
                notRun: 0,
                bugs: []
            };

            results.forEach(function (result) {
                switch (result.outcome) {
                    case "Passed": summary.passed++; break;
                    case "Failed": summary.failed++; break;
                    case "Blocked": summary.blocked++; break;
                    default: summary.notRun++; break;
                }
            });

            summary.passRate = summary.total > 0
                ? ((summary.passed / summary.total) * 100).toFixed(1) + "%"
                : "0%";

            return summary;
        });
}
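
For the export step in the sign-off workflow, the summary can be written straight to a CSV file for archiving. This is a minimal sketch building on generateSignOffReport; the output file name is arbitrary:

var fs = require("fs");

function exportSummaryToCsv(runId, filePath) {
    return generateSignOffReport(runId).then(function (summary) {
        var lines = ["metric,value"];
        Object.keys(summary).forEach(function (key) {
            // Skip nested structures such as the bugs array
            if (typeof summary[key] !== "object") {
                lines.push(key + "," + summary[key]);
            }
        });
        fs.writeFileSync(filePath, lines.join("\n") + "\n");
        return filePath;
    });
}

// Example: exportSummaryToCsv(12345, "signoff-run-12345.csv");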

Complete Working Example

The following Node.js script ties everything together. It creates a test run from a specified plan and suite, assigns testers, collects results interactively, and generates a sign-off report.

var https = require("https");
var readline = require("readline");

// Configuration
var ORG = process.env.AZURE_DEVOPS_ORG || "your-organization";
var PROJECT = process.env.AZURE_DEVOPS_PROJECT || "your-project";
var PAT = process.env.AZURE_DEVOPS_PAT;
var BASE_URL = "https://dev.azure.com/" + ORG + "/" + PROJECT + "/_apis";
var AUTH_HEADER = "Basic " + Buffer.from(":" + PAT).toString("base64");
var API_VERSION = "7.1";

if (!PAT) {
    console.error("Error: AZURE_DEVOPS_PAT environment variable is required");
    process.exit(1);
}

// HTTP request helper
function makeRequest(method, path, body) {
    return new Promise(function (resolve, reject) {
        var url = new URL(BASE_URL + path);
        var options = {
            hostname: url.hostname,
            path: url.pathname + url.search,
            method: method,
            headers: {
                "Authorization": AUTH_HEADER,
                "Content-Type": "application/json",
                "Accept": "application/json"
            }
        };

        var req = https.request(options, function (res) {
            var chunks = [];
            res.on("data", function (chunk) {
                chunks.push(chunk);
            });
            res.on("end", function () {
                var responseBody = Buffer.concat(chunks).toString();
                if (res.statusCode >= 200 && res.statusCode < 300) {
                    try {
                        resolve(JSON.parse(responseBody));
                    } catch (e) {
                        resolve(responseBody);
                    }
                } else {
                    reject(new Error("HTTP " + res.statusCode + ": " + responseBody));
                }
            });
        });

        req.on("error", function (err) {
            reject(err);
        });

        if (body) {
            req.write(JSON.stringify(body));
        }
        req.end();
    });
}

// Prompt user for input
function prompt(question) {
    var rl = readline.createInterface({
        input: process.stdin,
        output: process.stdout
    });

    return new Promise(function (resolve) {
        rl.question(question, function (answer) {
            rl.close();
            resolve(answer.trim());
        });
    });
}

// List all test plans in the project
function listTestPlans() {
    return makeRequest("GET", "/test/plans?api-version=" + API_VERSION)
        .then(function (data) {
            return data.value || [];
        });
}

// Get test suites within a plan
function getTestSuites(planId) {
    var path = "/test/plans/" + planId + "/suites?api-version=" + API_VERSION;
    return makeRequest("GET", path)
        .then(function (data) {
            return data.value || [];
        });
}

// Get test points for a suite
function getTestPoints(planId, suiteId) {
    var path = "/test/plans/" + planId + "/suites/" + suiteId;
    path += "/points?api-version=" + API_VERSION;
    return makeRequest("GET", path)
        .then(function (data) {
            return data.value || [];
        });
}

// Create a manual test run
function createTestRun(planId, name, pointIds) {
    var body = {
        name: name,
        plan: { id: planId },
        pointIds: pointIds,
        automated: false,
        comment: "Created by manual-test-workflow script"
    };

    return makeRequest("POST", "/test/runs?api-version=" + API_VERSION, body);
}

// Get results for a test run
function getTestRunResults(runId) {
    var path = "/test/runs/" + runId + "/results?api-version=" + API_VERSION;
    return makeRequest("GET", path)
        .then(function (data) {
            return data.value || [];
        });
}

// Update a single test result
function updateResult(runId, resultId, outcome, comment) {
    var body = [{
        id: resultId,
        outcome: outcome,
        comment: comment || "",
        state: "Completed"
    }];

    return makeRequest("PATCH", "/test/runs/" + runId + "/results?api-version=" + API_VERSION, body);
}

// Complete a test run
function completeTestRun(runId) {
    var body = {
        state: "Completed",
        comment: "Run completed via manual-test-workflow script"
    };

    return makeRequest("PATCH", "/test/runs/" + runId + "?api-version=" + API_VERSION, body);
}

// Get bugs linked to test results
function getBugsForRun(runId) {
    return getTestRunResults(runId).then(function (results) {
        var bugPromises = results
            .filter(function (r) { return r.outcome === "Failed"; })
            .map(function (r) {
                var path = "/test/runs/" + runId + "/results/" + r.id;
                path += "/bugs?api-version=" + API_VERSION;
                return makeRequest("GET", path).catch(function () {
                    return { value: [] };
                });
            });

        return Promise.all(bugPromises).then(function (bugResults) {
            var allBugs = [];
            bugResults.forEach(function (br) {
                if (br.value) {
                    allBugs = allBugs.concat(br.value);
                }
            });
            return allBugs;
        });
    });
}

// Generate a comprehensive sign-off report
function generateSignOffReport(runId) {
    var report = {};

    return makeRequest("GET", "/test/runs/" + runId + "?api-version=" + API_VERSION)
        .then(function (run) {
            report.runName = run.name;
            report.runId = run.id;
            report.state = run.state;
            report.startedDate = run.startedDate;
            report.completedDate = run.completedDate;
            report.plan = run.plan ? run.plan.name : "Unknown";

            return getTestRunResults(runId);
        })
        .then(function (results) {
            report.totalTests = results.length;
            report.passed = 0;
            report.failed = 0;
            report.blocked = 0;
            report.notApplicable = 0;
            report.notRun = 0;
            report.failures = [];

            results.forEach(function (result) {
                switch (result.outcome) {
                    case "Passed":
                        report.passed++;
                        break;
                    case "Failed":
                        report.failed++;
                        report.failures.push({
                            testCase: result.testCase ? result.testCase.name : "Unknown",
                            configuration: result.configuration
                                ? result.configuration.name : "Default",
                            comment: result.comment || "No comment provided"
                        });
                        break;
                    case "Blocked":
                        report.blocked++;
                        break;
                    case "NotApplicable":
                        report.notApplicable++;
                        break;
                    default:
                        report.notRun++;
                        break;
                }
            });

            report.passRate = report.totalTests > 0
                ? ((report.passed / report.totalTests) * 100).toFixed(1)
                : "0";

            return getBugsForRun(runId);
        })
        .then(function (bugs) {
            report.linkedBugs = bugs.length;
            report.bugs = bugs.map(function (b) {
                return { id: b.id, title: b.title || "Untitled", state: b.state || "New" };
            });

            return report;
        });
}

// Print the sign-off report to console
function printReport(report) {
    console.log("");
    console.log("=".repeat(60));
    console.log("  MANUAL TEST SIGN-OFF REPORT");
    console.log("=".repeat(60));
    console.log("");
    console.log("  Run Name:       " + report.runName);
    console.log("  Run ID:         " + report.runId);
    console.log("  Test Plan:      " + report.plan);
    console.log("  State:          " + report.state);
    console.log("  Started:        " + (report.startedDate || "N/A"));
    console.log("  Completed:      " + (report.completedDate || "N/A"));
    console.log("");
    console.log("-".repeat(60));
    console.log("  RESULTS SUMMARY");
    console.log("-".repeat(60));
    console.log("  Total Tests:    " + report.totalTests);
    console.log("  Passed:         " + report.passed);
    console.log("  Failed:         " + report.failed);
    console.log("  Blocked:        " + report.blocked);
    console.log("  Not Applicable: " + report.notApplicable);
    console.log("  Not Run:        " + report.notRun);
    console.log("  Pass Rate:      " + report.passRate + "%");
    console.log("");

    if (report.failures.length > 0) {
        console.log("-".repeat(60));
        console.log("  FAILED TEST CASES");
        console.log("-".repeat(60));
        report.failures.forEach(function (f, i) {
            console.log("  " + (i + 1) + ". " + f.testCase);
            console.log("     Config:  " + f.configuration);
            console.log("     Comment: " + f.comment);
            console.log("");
        });
    }

    if (report.bugs.length > 0) {
        console.log("-".repeat(60));
        console.log("  LINKED BUGS");
        console.log("-".repeat(60));
        report.bugs.forEach(function (b) {
            console.log("  #" + b.id + " [" + b.state + "] " + b.title);
        });
        console.log("");
    }

    var signOffStatus = report.failed === 0 && report.blocked === 0
        && report.notRun === 0;
    console.log("-".repeat(60));
    console.log("  SIGN-OFF STATUS: " + (signOffStatus ? "READY" : "NOT READY"));
    if (!signOffStatus) {
        var reasons = [];
        if (report.failed > 0) reasons.push(report.failed + " failed tests");
        if (report.blocked > 0) reasons.push(report.blocked + " blocked tests");
        if (report.notRun > 0) reasons.push(report.notRun + " tests not executed");
        console.log("  Reason:         " + reasons.join(", "));
    }
    console.log("=".repeat(60));
    console.log("");
}

// Main workflow
function main() {
    var selectedPlanId;
    var selectedSuiteId;
    var testPoints;
    var createdRun;

    console.log("Azure DevOps Manual Test Workflow Manager");
    console.log("=========================================\n");

    listTestPlans()
        .then(function (plans) {
            if (plans.length === 0) {
                throw new Error("No test plans found in this project");
            }

            console.log("Available Test Plans:");
            plans.forEach(function (plan, i) {
                console.log("  " + (i + 1) + ". [" + plan.id + "] " + plan.name
                    + " (" + plan.state + ")");
            });

            return prompt("\nEnter plan number: ").then(function (answer) {
                var index = parseInt(answer, 10) - 1;
                if (index < 0 || index >= plans.length) {
                    throw new Error("Invalid plan selection");
                }
                selectedPlanId = plans[index].id;
                return getTestSuites(selectedPlanId);
            });
        })
        .then(function (suites) {
            if (suites.length === 0) {
                throw new Error("No test suites found in this plan");
            }

            console.log("\nAvailable Test Suites:");
            suites.forEach(function (suite, i) {
                console.log("  " + (i + 1) + ". [" + suite.id + "] " + suite.name
                    + " (" + suite.testCaseCount + " cases)");
            });

            return prompt("\nEnter suite number: ").then(function (answer) {
                var index = parseInt(answer, 10) - 1;
                if (index < 0 || index >= suites.length) {
                    throw new Error("Invalid suite selection");
                }
                selectedSuiteId = suites[index].id;
                return getTestPoints(selectedPlanId, selectedSuiteId);
            });
        })
        .then(function (points) {
            testPoints = points;
            if (testPoints.length === 0) {
                throw new Error("No test points found in this suite");
            }

            console.log("\nTest Points (" + testPoints.length + " total):");
            testPoints.forEach(function (point) {
                var tester = point.assignedTo
                    ? point.assignedTo.displayName : "Unassigned";
                var config = point.configuration
                    ? point.configuration.name : "Default";
                console.log("  [" + point.id + "] " + point.testCase.name
                    + " | " + config + " | " + tester
                    + " | " + point.outcome);
            });

            var runName = "Manual Run - " + new Date().toISOString().split("T")[0];
            return prompt("\nRun name [" + runName + "]: ").then(function (answer) {
                var name = answer || runName;
                var pointIds = testPoints.map(function (p) { return p.id; });
                return createTestRun(selectedPlanId, name, pointIds);
            });
        })
        .then(function (run) {
            createdRun = run;
            console.log("\nTest run created: #" + run.id + " - " + run.name);
            console.log("URL: https://dev.azure.com/" + ORG + "/" + PROJECT
                + "/_testManagement/runs?runId=" + run.id);

            return prompt("\nGenerate sign-off report now? (y/n): ");
        })
        .then(function (answer) {
            if (answer.toLowerCase() === "y") {
                return generateSignOffReport(createdRun.id).then(printReport);
            } else {
                console.log("Run the script again with --report " + createdRun.id
                    + " to generate the report later.");
            }
        })
        .catch(function (err) {
            console.error("Error: " + err.message);
            process.exit(1);
        });
}

// Handle --report flag for generating reports on existing runs
if (process.argv[2] === "--report" && process.argv[3]) {
    var runId = parseInt(process.argv[3], 10);
    console.log("Generating sign-off report for run #" + runId + "...\n");
    generateSignOffReport(runId)
        .then(printReport)
        .catch(function (err) {
            console.error("Error: " + err.message);
            process.exit(1);
        });
} else {
    main();
}

Save this as manual-test-workflow.js and run it:

export AZURE_DEVOPS_PAT="your-personal-access-token"
export AZURE_DEVOPS_ORG="your-organization"
export AZURE_DEVOPS_PROJECT="your-project"
node manual-test-workflow.js

To generate a sign-off report for an existing run:

node manual-test-workflow.js --report 12345

The script walks you through selecting a test plan, suite, and creating a run. The sign-off report gives you a clear picture of whether the build is ready for release.

Common Issues and Troubleshooting

Test Points Show "Not Applicable" After Configuration Changes

When you modify test configurations (add or remove configurations from a suite), existing test points may end up in an inconsistent state. Test points tied to a removed configuration are marked "Not Applicable" but are not deleted. To clean this up, go to the suite, filter by the old configuration, and manually remove those test points. Alternatively, use the REST API to bulk-delete stale points.

Test Runner Fails to Launch or Hangs on Loading

The web test runner requires third-party cookies to be enabled in your browser because it runs in an iframe that communicates with the Azure DevOps domain. If you use a strict browser privacy setting or an extension that blocks third-party cookies, the runner will hang indefinitely. Add dev.azure.com and *.visualstudio.com to your cookie allowlist. Disabling ad blockers for the Azure DevOps domain also resolves some loading issues.

Screenshots Not Capturing the Correct Window

The test runner's screenshot feature captures the browser tab where the runner is active, not necessarily the application under test. If you have the runner in a separate window and the application in another, you need to switch to the application window before clicking the screenshot button. Alternatively, use the Desktop capture mode (requires the Test Runner desktop client) which captures the entire screen regardless of which window is focused.

PAT Authentication Returns 401 on Test Management Endpoints

The Test Management API requires the vso.test_write scope on your PAT. If you created a PAT with only vso.test (read-only), all write operations — creating runs, updating results, completing runs — will return 401 Unauthorized. Generate a new PAT with Test Management (Read & Write) scope. Also verify that your user account has the Basic + Test Plans access level, not just Basic. The Test Plans license is required for both the web UI and the API.

Test Run Results Do Not Appear in Analytics Reports

Analytics reports in Azure DevOps rely on the Analytics service, which processes data asynchronously. After completing a test run, results may take 10 to 30 minutes to appear in analytics views and dashboard widgets. If results still do not appear after an hour, verify that the Analytics extension is installed and enabled for your organization under Organization Settings > Extensions.

Configurations Not Generating Expected Test Points

When you add a configuration to a suite, Azure DevOps only generates test points for test cases that are already in the suite. If you add the configuration first and then add test cases, the new test cases get test points for all configurations. But if you add test cases first and then add a configuration, you may need to manually refresh the suite or remove and re-add the configuration to trigger test point generation.

Best Practices

  • Keep test cases atomic. Each test case should validate one specific scenario. Long, multi-scenario test cases are harder to run, harder to debug when they fail, and produce ambiguous results.

  • Use shared steps for common workflows. Login sequences, navigation paths, and setup procedures should be defined as shared steps and referenced across test cases. This reduces maintenance when the common workflow changes.

  • Assign configurations deliberately. Do not apply every configuration to every suite. Map configurations to suites based on risk. Core functionality gets tested across all configurations. Edge-case scenarios may only need the primary configuration.

  • File bugs immediately from the test runner. Do not defer bug creation to after the test run. The test runner pre-populates context that you will lose if you try to recreate it later. The action log, system info, and screenshots are captured at the moment of failure and are most accurate then.

  • Reset and re-run rather than creating new runs. When retesting after bug fixes, reset the failed test points in the existing run rather than creating a new run. This preserves the testing history in one place and gives you an accurate picture of how many cycles a test case needed to pass.

  • Automate administrative tasks with the REST API. Creating runs, assigning testers, and generating reports should not be manual processes for every sprint. Write scripts that create runs from templates at sprint start and generate sign-off reports at sprint end.

  • Time-box exploratory testing sessions. Without a time box, exploratory testing tends to either expand indefinitely or get skipped entirely. A 60-minute session with a clear charter produces more consistent and actionable results than open-ended "just test it" instructions.

  • Review test results as a team. Schedule a brief test review meeting at the end of each cycle. Walk through failures, discuss patterns, and decide whether bugs are blockers. This shared context prevents misalignment between testers, developers, and product owners about release readiness.

  • Archive completed test plans. After a release ships, set the test plan state to Completed and avoid modifying it. Create a new test plan for the next release cycle. This preserves historical records and makes auditing straightforward.
