Exploratory Testing with Azure Test Plans
A comprehensive guide to exploratory testing with Azure Test Plans, covering the Test & Feedback extension, session-based test management, charter creation, bug filing with automatic context, stakeholder feedback, and integrating exploratory sessions into sprint workflows.
Overview
Scripted test cases verify that known requirements work as expected. Exploratory testing finds the problems nobody thought to write a test case for -- the edge cases, the confusing workflows, the error messages that make no sense, the feature interactions that break when combined in ways the developer never anticipated. Azure Test Plans provides structured exploratory testing through the Test & Feedback browser extension, session-based test management, and deep integration with work item tracking. The result is exploratory testing that produces traceable, reproducible bugs instead of vague reports.
I have watched testers find more critical bugs in a focused 30-minute exploratory session than in a full day of running scripted test cases. The key is structure without rigidity -- defining what areas to explore and what risks to investigate while leaving the actual testing path up to the tester's intuition and experience. Azure DevOps gives you the tooling to make exploratory testing a first-class part of your quality process.
Prerequisites
- An Azure DevOps organization with an active project
- Azure Test Plans license (Basic + Test Plans or Visual Studio Enterprise subscription)
- The Test & Feedback browser extension installed (available for Chrome and Edge)
- A test plan created in the Test Plans hub
- Contributor access to the project for filing bugs and creating work items
- Familiarity with Azure DevOps work items and the Test Plans hub
The Test & Feedback Extension
The Test & Feedback extension is the primary tool for exploratory testing in Azure DevOps. It runs as a browser extension that sits alongside your application, capturing screenshots, recording actions, and filing bugs with full context -- all without leaving the page you are testing.
Installation and Setup
Install the extension from the Chrome Web Store or Edge Add-ons store. After installation:
- Click the extension icon in the browser toolbar
- Select "Connected" mode (requires Azure DevOps sign-in)
- Enter your Azure DevOps organization URL: https://dev.azure.com/yourorg
- Select the project and team you are testing for
- The extension is now ready to capture testing sessions
Connected mode is essential for full functionality. There is a "Standalone" mode that works without Azure DevOps, but it only provides basic screenshot capture without work item integration, session tracking, or automatic context.
Starting an Exploratory Session
Click the extension icon and select "Start session." The extension begins tracking your browser actions and timing. As you test, you have access to several capture tools:
Screenshot Capture: Click the camera icon or press the keyboard shortcut to capture the current browser tab. The screenshot opens in an annotation editor where you can:
- Draw rectangles, arrows, and freeform shapes to highlight issues
- Add text annotations pointing to specific UI problems
- Crop the image to focus on the relevant area
- The annotated screenshot is saved with the session and attached to any bugs you file
Screen Recording: Start a video recording to capture complex interaction sequences. This is invaluable for bugs that involve specific click sequences, drag-and-drop operations, or timing-dependent behavior. Recordings are saved as WebM files and attached to bugs.
Notes: Type free-form notes during testing. Notes are timestamped and associated with the current page URL. Use these for observations that do not warrant a bug -- "this button placement is confusing" or "the loading state takes 4 seconds on this page."
Action Log: The extension automatically records your browser actions -- page navigations, clicks, text input, and form submissions. This action log becomes the reproduction steps when you file a bug, so developers get an exact record of what you did before the issue appeared.
Filing Bugs During Exploratory Testing
When you find an issue during an exploratory session, click "Create Bug" in the extension. The bug work item is pre-populated with:
- Title: You provide this, describing the issue
- Repro Steps: Automatically generated from your action log, including page URLs, clicks, and input values
- System Info: Browser name and version, operating system, screen resolution, installed extensions
- Captured Screenshots: All annotated screenshots from the session are attached
- Screen Recordings: Any video recordings are attached
- Session Link: A link back to the exploratory testing session for context
This automatic context is the single biggest advantage of using the Test & Feedback extension over filing bugs manually. I have seen bug reports go from "the button doesn't work" to a detailed reproduction with exact steps, browser information, and annotated screenshots -- all generated automatically.
Filing Other Work Item Types
The extension is not limited to bugs. During exploratory testing, you can also create:
- Tasks: For follow-up items like "add better error handling here" or "performance feels slow, investigate"
- Test Cases: When you discover a scenario that should be part of the scripted test suite, create a test case directly from the extension with the steps pre-populated from your action log
Session-Based Test Management
Session-Based Test Management (SBTM) adds structure to exploratory testing without making it scripted. The approach was developed by James and Jonathan Bach and is widely used in professional testing. In SBTM, you define test charters, run time-boxed sessions, and debrief after each session.
Test Charters
A test charter is a brief document that tells the tester what to explore, what risks to investigate, and how much time to spend. In Azure DevOps, model charters as test case work items or tasks with a specific format:
Charter: Explore payment processing error handling
Area: Checkout module
Risk: Users may encounter unhandled errors during payment
Time Box: 30 minutes
Setup: Create a test account with $50 balance
Explore:
- What happens when payment fails mid-transaction?
- What happens when the user navigates away during payment processing?
- What happens with invalid card numbers, expired cards, insufficient funds?
- What happens when the payment service returns a timeout?
- What errors are displayed? Are they helpful?
Good charters share these properties:
- Focused: One feature area or risk per charter
- Time-boxed: 20-45 minutes is the sweet spot. Shorter sessions lack depth; longer sessions lose focus
- Risk-oriented: Frame the charter around what could go wrong, not what should work
- Specific enough to start: The tester should know where to begin without additional guidance
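If you create charters programmatically (as in the complete example later in this article), the charter template can be captured as a plain data structure. The shape below is a convention for this article, not an Azure DevOps schema; a small helper validates it against the properties listed above:

```javascript
// Illustrative charter shape plus a validation helper. Field names are
// conventions for this sketch, not an Azure DevOps work item schema.
function validateCharter(charter) {
  var errors = [];
  ["title", "area", "risk", "timeBox"].forEach(function (field) {
    if (!charter[field]) {
      errors.push("Missing required field: " + field);
    }
  });
  // Enforce the 20-45 minute sweet spot described above
  if (charter.timeBox && (charter.timeBox < 20 || charter.timeBox > 45)) {
    errors.push("Time box should be 20-45 minutes, got " + charter.timeBox);
  }
  if (!charter.explorationPoints || charter.explorationPoints.length === 0) {
    errors.push("Charter needs at least one exploration point");
  }
  return errors;
}

var charter = {
  title: "Explore payment processing error handling",
  area: "Checkout module",
  risk: "Users may encounter unhandled errors during payment",
  timeBox: 30,
  explorationPoints: ["What happens when payment fails mid-transaction?"],
};

console.log(validateCharter(charter)); // []
console.log(validateCharter({ title: "Too broad", timeBox: 90 }));
```

Running the validator before filing charter work items catches vague or oversized charters early, before they reach a tester's queue.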
Organizing Sessions in Azure DevOps
Create an exploratory testing suite in your test plan:
Test Plan: Sprint 48 Testing
├── Scripted Tests
│ ├── New Features (Requirement-based)
│ └── Regression (Query-based)
└── Exploratory Testing
├── Charter: Payment error handling (30 min)
├── Charter: Dashboard performance with large datasets (45 min)
├── Charter: New user onboarding flow (30 min)
├── Charter: Mobile responsive layout (30 min)
└── Charter: API error responses (20 min)
Assign charters to testers based on their expertise. A tester who understands the payment system will find more issues in payment testing than someone unfamiliar with the domain.
Session Debriefs
After each session, the tester writes a debrief in the test result or as a comment on the charter work item:
Session Debrief: Payment error handling
Duration: 32 minutes
Tester: Shane
Tested:
- Invalid card numbers → Error displayed correctly
- Expired card → Error displayed correctly
- Insufficient funds → Error displayed but message is confusing
- Network timeout during processing → BUG: No error displayed, user stuck on loading spinner
- Browser back button during processing → BUG: Duplicate transaction created
Not Tested (ran out of time):
- Payment service returning 500 errors
- Concurrent payments from same user
Bugs Filed:
- Bug #4521: Payment timeout shows infinite loading spinner
- Bug #4522: Browser back button creates duplicate transaction
Risks Identified:
- No idempotency on payment endpoint -- duplicate transactions possible
- Loading spinner has no timeout -- user could wait indefinitely
Recommended Follow-up:
- Charter: Investigate payment idempotency (Priority: High)
- Add scripted test case for timeout scenario
Debriefs are the most valuable output of exploratory testing. They tell the team not just what was found, but what was not tested and what risks remain.
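Debriefs can also be pushed back to the charter work item programmatically. The sketch below builds the debrief text and posts it as a work item comment via the work item tracking Comments endpoint, which is versioned as a preview API, so verify the api-version your organization supports before relying on it. The debrief field names are illustrative:

```javascript
// Build a plain-text debrief from session notes. Field names are illustrative.
function formatDebrief(d) {
  var lines = [
    "Session Debrief: " + d.charter,
    "Duration: " + d.durationMinutes + " minutes",
    "Tester: " + d.tester,
    "Tested:",
  ];
  d.tested.forEach(function (t) { lines.push("- " + t); });
  lines.push("Not Tested:");
  d.notTested.forEach(function (t) { lines.push("- " + t); });
  lines.push("Bugs Filed: " + d.bugsFiled.map(function (b) { return "#" + b; }).join(", "));
  return lines.join("\n");
}

// Post the debrief as a comment on the charter work item.
// The Comments API is a preview API -- confirm the version before use.
function postDebrief(org, project, workItemId, pat, debrief) {
  var endpoint = "https://dev.azure.com/" + org + "/" + project +
    "/_apis/wit/workItems/" + workItemId + "/comments?api-version=7.1-preview.3";
  return fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Basic " + Buffer.from(":" + pat).toString("base64"),
    },
    body: JSON.stringify({ text: formatDebrief(debrief) }),
  });
}
```

Keeping the debrief on the work item (rather than in a wiki or chat thread) means it shows up alongside the charter and any bugs linked to it.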
Stakeholder Feedback
The Test & Feedback extension has a stakeholder mode that allows non-testers -- product managers, designers, executives -- to provide structured feedback during demos or UAT sessions.
In stakeholder mode:
- The stakeholder browses the application normally
- When they notice something, they click the extension icon
- They can capture a screenshot, add annotations, and type feedback
- The feedback is submitted as a Feedback Response work item linked to the session
This replaces the common pattern of stakeholders sending emails with vague descriptions like "the colors look wrong on the report page." Instead, you get annotated screenshots with the exact page URL, browser info, and the stakeholder's specific comment.
Enabling Stakeholder Feedback
Stakeholders with Stakeholder access level (free) can use the Test & Feedback extension in stakeholder mode. They do not need Basic or Basic + Test Plans licenses. Configure the extension:
- Install the extension in the stakeholder's browser
- Sign in with their Azure DevOps credentials
- Select "Stakeholder" mode during setup
Feedback responses appear in the work items section and can be triaged by the team during sprint planning or backlog grooming.
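Triage is easier if you can pull recent feedback with one query. The WIQL sketch below selects Feedback Response work items created in a recent window; run it against the /_apis/wit/wiql endpoint. The time window is a parameter you choose, and the work item type name assumes the default process templates:

```javascript
// WIQL sketch: list Feedback Response work items filed in the last N days.
// "Feedback Response" is the type the extension creates in the default
// process templates; adjust the name if your process customizes it.
function buildFeedbackQuery(daysBack) {
  return {
    query:
      "SELECT [System.Id], [System.Title], [System.CreatedBy] " +
      "FROM WorkItems " +
      "WHERE [System.WorkItemType] = 'Feedback Response' " +
      "AND [System.CreatedDate] >= @Today - " + daysBack + " " +
      "ORDER BY [System.CreatedDate] DESC",
  };
}

console.log(buildFeedbackQuery(14).query);
```

POST this object to the WIQL endpoint (as the complete example later in this article does for bugs) to get the IDs for a triage list.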
Linking Exploratory Testing to Work Items
One of the most powerful features of exploratory testing in Azure DevOps is the ability to link sessions to specific work items before you start testing.
Testing Against a User Story
When you start an exploratory session, you can select a work item (user story, bug, or feature) to explore. The extension then:
- Associates all bugs and feedback filed during the session with that work item
- Creates "Tested By / Tests" traceability links
- Tracks exploratory coverage at the work item level -- you can see which stories have been explored and which have not
This answers the question "have we explored this user story?" with data rather than gut feeling.
Testing Against a Build or Release
You can also associate an exploratory session with a specific build or release. This lets you:
- Track which builds have been exploratory tested
- Filter bugs by the build they were found in
- Compare exploratory testing effort and bug discovery rates across builds
To associate with a build, select "Run with options" and choose the build or release when starting the session.
Exploratory Testing Metrics
Track these metrics to measure the effectiveness of your exploratory testing program:
Bug Discovery Rate
Bugs found per hour of exploratory testing. This metric tells you whether your charters are well-targeted. A rate of 2-3 bugs per hour for a new feature is healthy. If you are finding fewer than 1 bug per hour consistently, the area may be well-tested or the charters need refocusing.
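The calculation is just total bugs divided by total session hours across a sprint. A minimal helper (the session fields are illustrative):

```javascript
// Bug discovery rate: total bugs filed / total session hours.
function bugDiscoveryRate(sessions) {
  var totals = sessions.reduce(function (acc, s) {
    return { bugs: acc.bugs + s.bugsFiled, minutes: acc.minutes + s.durationMinutes };
  }, { bugs: 0, minutes: 0 });
  if (totals.minutes === 0) return 0; // avoid divide-by-zero on an empty sprint
  return totals.bugs / (totals.minutes / 60);
}

var rate = bugDiscoveryRate([
  { durationMinutes: 30, bugsFiled: 2 },
  { durationMinutes: 45, bugsFiled: 1 },
  { durationMinutes: 45, bugsFiled: 3 },
]);
console.log(rate.toFixed(1) + " bugs/hour"); // prints "3.0 bugs/hour"
```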
Session Coverage
The percentage of planned charters that have been executed. Track this per sprint:
Sprint 48 Exploratory Testing:
Planned charters: 8
Completed: 6
Deferred: 2
Coverage: 75%
Bugs filed: 14
Bugs by severity:
Critical: 1
High: 4
Medium: 6
Low: 3
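The figures above can be computed from raw counts. A small helper, assuming you track planned and completed charter counts plus bugs per severity:

```javascript
// Compute sprint coverage figures from raw counts.
function sessionCoverage(planned, completed, bugsBySeverity) {
  var bugsFiled = Object.keys(bugsBySeverity).reduce(function (sum, sev) {
    return sum + bugsBySeverity[sev];
  }, 0);
  return {
    planned: planned,
    completed: completed,
    deferred: planned - completed,
    coveragePercent: Math.round((completed / planned) * 100),
    bugsFiled: bugsFiled,
  };
}

var sprint48 = sessionCoverage(8, 6, { Critical: 1, High: 4, Medium: 6, Low: 3 });
console.log(sprint48);
// { planned: 8, completed: 6, deferred: 2, coveragePercent: 75, bugsFiled: 14 }
```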
Charter Effectiveness
Which charters produce the most bugs? Track bug counts per charter area over multiple sprints. If payment testing consistently finds bugs and profile testing consistently finds nothing, reallocate time accordingly.
Complete Working Example
This example demonstrates a Node.js script that automates exploratory test session management -- creating charters, tracking sessions, and generating session reports using the Azure DevOps REST API.
var https = require("https");
var url = require("url");
var ORG = "my-organization";
var PROJECT = "my-project";
var PAT = process.env.AZURE_DEVOPS_PAT;
var API_VERSION = "7.1";
var BASE_URL = "https://dev.azure.com/" + ORG + "/" + PROJECT;
var AUTH = "Basic " + Buffer.from(":" + PAT).toString("base64");
function makeRequest(method, path, body, contentType) {
return new Promise(function (resolve, reject) {
var fullUrl = path.indexOf("https://") === 0 ? path : BASE_URL + path;
var parsed = url.parse(fullUrl);
var options = {
hostname: parsed.hostname,
path: parsed.path,
method: method,
headers: {
// Work item create/update calls need the JSON Patch content type
"Content-Type": contentType || "application/json",
Authorization: AUTH,
},
};
var req = https.request(options, function (res) {
var data = "";
res.on("data", function (chunk) {
data += chunk;
});
res.on("end", function () {
if (res.statusCode >= 200 && res.statusCode < 300) {
resolve(data ? JSON.parse(data) : null);
} else {
reject(new Error(method + " " + path + ": " + res.statusCode + " " + data));
}
});
});
req.on("error", reject);
if (body) {
req.write(JSON.stringify(body));
}
req.end();
});
}
function createCharter(planId, suiteId, charter) {
var stepsXml = '<steps id="0" last="1">';
stepsXml += '<step id="1" type="ActionStep">';
stepsXml += "<parameterizedString>" + escapeXml(charter.description) + "</parameterizedString>";
stepsXml += "<parameterizedString>" + escapeXml("Document findings in session debrief") + "</parameterizedString>";
stepsXml += "</step></steps>";
var patchDoc = [
{ op: "add", path: "/fields/System.Title", value: "Charter: " + charter.title },
{ op: "add", path: "/fields/System.Description", value: formatCharterDescription(charter) },
{ op: "add", path: "/fields/Microsoft.VSTS.Common.Priority", value: charter.priority || 2 },
{ op: "add", path: "/fields/Microsoft.VSTS.TCM.Steps", value: stepsXml },
{ op: "add", path: "/fields/System.Tags", value: "exploratory;charter;" + charter.area },
];
return makeRequest(
"POST",
"/_apis/wit/workitems/$Test%20Case?api-version=" + API_VERSION,
patchDoc,
"application/json-patch+json"
).then(function (testCase) {
// Add to suite
return makeRequest(
"POST",
"/_apis/testplan/Plans/" + planId + "/Suites/" + suiteId +
"/TestCase?api-version=" + API_VERSION,
[{ workItem: { id: testCase.id } }]
).then(function () {
return testCase;
});
});
}
function formatCharterDescription(charter) {
var html = "<p><strong>Area:</strong> " + charter.area + "</p>";
html += "<p><strong>Risk:</strong> " + charter.risk + "</p>";
html += "<p><strong>Time Box:</strong> " + charter.timeBox + " minutes</p>";
if (charter.setup) {
html += "<p><strong>Setup:</strong> " + charter.setup + "</p>";
}
html += "<p><strong>Explore:</strong></p><ul>";
charter.explorationPoints.forEach(function (point) {
html += "<li>" + point + "</li>";
});
html += "</ul>";
return html;
}
function getSessionResults(planId, suiteId) {
return makeRequest(
"GET",
"/_apis/testplan/Plans/" + planId + "/Suites/" + suiteId +
"/TestPoint?includePointDetails=true&api-version=" + API_VERSION
).then(function (response) {
var testPoints = response.value || [];
var results = {
total: testPoints.length,
completed: 0,
notRun: 0,
inProgress: 0,
charters: [],
};
testPoints.forEach(function (point) {
var outcome = point.results ? point.results.outcome : "None";
if (outcome === "Passed" || outcome === "Failed") {
results.completed++;
} else if (outcome === "InProgress") {
results.inProgress++;
} else {
results.notRun++;
}
results.charters.push({
id: point.testCase.id,
title: point.testCase.name,
outcome: outcome,
tester: point.tester ? point.tester.displayName : "Unassigned",
});
});
return results;
});
}
function generateSessionReport(planId, suiteId) {
return getSessionResults(planId, suiteId).then(function (results) {
console.log("=== Exploratory Testing Session Report ===");
console.log("");
console.log("Total charters: " + results.total);
console.log("Completed: " + results.completed);
console.log("In Progress: " + results.inProgress);
console.log("Not Run: " + results.notRun);
console.log(
"Coverage: " +
((results.completed / results.total) * 100).toFixed(1) +
"%"
);
console.log("");
console.log("Charter Details:");
console.log("-".repeat(80));
results.charters.forEach(function (charter) {
var status =
charter.outcome === "None"
? "NOT RUN"
: charter.outcome.toUpperCase();
console.log(
" [" + status + "] " + charter.title + " (" + charter.tester + ")"
);
});
return results;
});
}
function getBugsFromSession(sessionId) {
var wiql = {
query:
"SELECT [System.Id], [System.Title], [Microsoft.VSTS.Common.Severity] " +
"FROM WorkItems WHERE [System.WorkItemType] = 'Bug' " +
"AND [System.Tags] CONTAINS 'session-" + sessionId + "' " +
"ORDER BY [Microsoft.VSTS.Common.Severity] ASC",
};
return makeRequest(
"POST",
"/_apis/wit/wiql?api-version=" + API_VERSION,
wiql
).then(function (response) {
var ids = response.workItems.map(function (wi) {
return wi.id;
});
if (ids.length === 0) {
return [];
}
return makeRequest(
"GET",
"/_apis/wit/workitems?ids=" +
ids.join(",") +
"&fields=System.Id,System.Title,Microsoft.VSTS.Common.Severity,System.State" +
"&api-version=" + API_VERSION
).then(function (response) {
return response.value.map(function (wi) {
return {
id: wi.id,
title: wi.fields["System.Title"],
severity: wi.fields["Microsoft.VSTS.Common.Severity"],
state: wi.fields["System.State"],
};
});
});
});
}
function escapeXml(str) {
return String(str)
.replace(/&/g, "&amp;")
.replace(/</g, "&lt;")
.replace(/>/g, "&gt;")
.replace(/"/g, "&quot;");
}
// Example usage
var PLAN_ID = parseInt(process.argv[2], 10) || 0;
var SUITE_ID = parseInt(process.argv[3], 10) || 0;
var action = process.argv[4] || "report";
if (!PLAN_ID || !SUITE_ID) {
console.log("Usage: node exploratory-sessions.js <planId> <suiteId> [action]");
console.log("Actions: create-charters, report");
process.exit(1);
}
if (action === "create-charters") {
var charters = [
{
title: "Payment error handling edge cases",
area: "Checkout",
risk: "Users encounter unhandled errors during payment processing",
timeBox: 30,
priority: 1,
setup: "Create test account with valid payment method",
explorationPoints: [
"What happens when payment times out mid-transaction?",
"What happens with invalid card numbers?",
"What happens when user navigates away during processing?",
"Are error messages clear and actionable?",
"Can duplicate payments be triggered by double-clicking?",
],
},
{
title: "Dashboard performance with large data",
area: "Dashboard",
risk: "Dashboard becomes unusable with high data volumes",
timeBox: 45,
priority: 2,
setup: "Seed database with 10,000+ records",
explorationPoints: [
"Does the dashboard load within 3 seconds?",
"Do charts render correctly with large datasets?",
"Does pagination work correctly?",
"What happens when filtering returns zero results?",
"Does sorting work on all columns?",
],
},
{
title: "New user onboarding flow",
area: "Authentication",
risk: "New users cannot complete registration or get confused",
timeBox: 30,
priority: 1,
setup: "Use a new email address not registered in the system",
explorationPoints: [
"Is the registration flow intuitive?",
"Are validation messages clear?",
"Does email verification work correctly?",
"What happens if the user closes the browser mid-registration?",
"Can the user recover from entering wrong information?",
],
},
];
var chain = Promise.resolve();
charters.forEach(function (charter) {
chain = chain.then(function () {
return createCharter(PLAN_ID, SUITE_ID, charter).then(function (tc) {
console.log("Created charter: " + tc.id + " - " + charter.title);
});
});
});
chain
.then(function () {
console.log("\nAll charters created successfully.");
})
.catch(function (err) {
console.error("Failed: " + err.message);
process.exit(1);
});
} else {
generateSessionReport(PLAN_ID, SUITE_ID).catch(function (err) {
console.error("Failed: " + err.message);
process.exit(1);
});
}
Running with create-charters:
$ node exploratory-sessions.js 847 852 create-charters
Created charter: 12060 - Payment error handling edge cases
Created charter: 12061 - Dashboard performance with large data
Created charter: 12062 - New user onboarding flow
All charters created successfully.
Running with report:
$ node exploratory-sessions.js 847 852 report
=== Exploratory Testing Session Report ===
Total charters: 3
Completed: 1
In Progress: 1
Not Run: 1
Coverage: 33.3%
Charter Details:
--------------------------------------------------------------------------------
[PASSED] Charter: Payment error handling edge cases (Shane)
[INPROGRESS] Charter: Dashboard performance with large data (Maria)
[NOT RUN] Charter: New user onboarding flow (Unassigned)
Common Issues and Troubleshooting
Extension Shows "Unable to Connect" After Sign-In
The Test & Feedback extension requires third-party cookies to be enabled for dev.azure.com. In Chrome, go to Settings > Privacy and Security > Cookies and add dev.azure.com and login.microsoftonline.com to the allowed list. Also verify that the organization URL format is correct -- use https://dev.azure.com/orgname, not the legacy https://orgname.visualstudio.com format.
Screenshots Are Blank or Black
This happens when the browser tab uses hardware acceleration that conflicts with the screenshot capture. Try disabling hardware acceleration in browser settings. On Windows, also check that the browser is not running in a compatibility mode. For pages using WebGL or complex canvas elements, the extension may capture a black frame -- use screen recording instead.
Session Data Not Appearing in Azure DevOps
Exploratory session data is saved to Azure DevOps when the session ends. If you close the browser without explicitly ending the session (by clicking "Stop session"), the data may be lost, so always end sessions explicitly. If the extension crashes during a session, the data captured up to the last auto-save (every 5 minutes) should be recoverable. Check the extension's data management settings to verify auto-save is enabled.
Bugs Filed Without Action Log
The action log requires the extension to have permission to access page content. If the application runs in an iframe or uses Content Security Policy headers that block the extension, the action log may be empty. Check the browser console for CSP violation errors. For internal applications, adjust CSP headers to allow the extension. For external applications you cannot modify, rely on screenshots and manual reproduction steps.
Stakeholder Mode Has Limited Features
Stakeholder mode intentionally limits functionality to screenshot capture and feedback submission. Stakeholders cannot create bugs, tasks, or test cases -- they can only submit Feedback Response work items. If a stakeholder needs full functionality, they need a Basic + Test Plans access level, which costs additional per-user licensing.
Best Practices
Time-box every exploratory session. Open-ended exploratory testing either gets cut short by schedule pressure or sprawls without focus. Define 20-45 minute sessions with specific charters. Testers consistently find more bugs per hour in focused sessions than in undirected exploration.
Write charters based on risk, not features. "Explore the checkout page" is a weak charter. "Investigate what happens when payment processing encounters network errors during the billing step" is strong. Risk-oriented charters focus testing on areas most likely to fail.
Always debrief after each session. The debrief is where you capture what was tested, what was not tested, and what risks remain. Without debriefs, exploratory testing produces bugs but no visibility into testing coverage or remaining risk.
Assign charters to testers with domain knowledge. A tester who understands the payment domain will probe edge cases that a generalist tester would miss. Match charter areas to tester expertise for maximum bug discovery.
Use the Test & Feedback extension in connected mode. Standalone mode loses the traceability and automatic context that makes exploratory testing data useful. Connected mode links everything to the project, sprint, and build.
Balance exploratory testing with scripted testing. They serve different purposes. Scripted tests verify known requirements; exploratory tests discover unknown problems. A typical sprint should allocate 60-70% of testing time to scripted execution and 30-40% to exploratory sessions, adjusted based on the maturity and risk of the features being tested.
Track metrics over time. Bug discovery rate, charter coverage, and session completion are leading indicators of testing quality. Review these monthly and adjust your exploratory testing investment based on the data.
Create scripted test cases from exploratory findings. When exploratory testing uncovers a significant bug, create a scripted test case covering that scenario. This prevents regression and ensures the fix is verified in future sprints. The Test & Feedback extension lets you create test cases directly from session context.