Exploratory Testing with Azure Test Plans

Conduct effective exploratory testing with Azure Test Plans using the Test & Feedback extension, session tracking, and automated reporting.

Exploratory testing is the practice of simultaneously designing and executing tests without predefined scripts, relying on the tester's experience, intuition, and domain knowledge to uncover defects that scripted tests miss. Azure Test Plans provides first-class support for exploratory testing through the Test & Feedback browser extension, session tracking, and deep integration with work items. This article walks through the full workflow — from installing the extension and running sessions to building Node.js tooling that automates session management and reporting via the Azure DevOps REST API.

Prerequisites

  • An Azure DevOps organization with a project that has Azure Test Plans enabled (Basic + Test Plans license or Visual Studio Enterprise subscription)
  • A Chromium-based browser (Edge or Chrome) for the Test & Feedback extension
  • Node.js v14 or later installed locally
  • A Personal Access Token (PAT) with Test Management (Read & Write) and Work Items (Read & Write) scopes
  • Basic familiarity with Azure DevOps boards and work items

What Exploratory Testing Actually Is

Exploratory testing is not random clicking. It is a disciplined approach where testers use their understanding of the system to ask questions, form hypotheses, and probe the application in ways that scripted tests cannot anticipate. James Bach and Cem Kaner formalized the concept decades ago, but the core idea is simple: the human brain is better at finding unexpected problems than any script you write in advance.

In a scripted testing workflow, you define test cases first, then execute them. The tester follows a predetermined path. This works well for regression testing and compliance, but it creates blind spots. Scripted tests only verify what you already thought to check. Exploratory testing fills the gaps.

A good exploratory session has three components:

  1. A charter — a brief statement of what you intend to explore and why
  2. Time-boxing — a fixed duration (typically 30-90 minutes) to maintain focus
  3. Note-taking — continuous documentation of observations, questions, and bugs found

Azure Test Plans structures all three of these through the Test & Feedback extension and session management features.

The Azure Test & Feedback Extension

The Test & Feedback extension is a browser add-on that integrates directly with Azure DevOps. It captures your exploratory testing activity and sends it back to your project as structured session data.

Installation

Install the extension from the Chrome Web Store or Edge Add-ons marketplace. Search for "Test & Feedback" — the extension is published by Microsoft. After installation, click the extension icon in your browser toolbar and connect it to your Azure DevOps organization.

You will need to provide:

  • Your Azure DevOps organization URL (e.g., https://dev.azure.com/yourorg)
  • Your project name
  • Authentication via your Microsoft account or PAT

Once connected, the extension shows a small floating toolbar on every page you visit. This toolbar is your control center for exploratory sessions.

Starting an Exploratory Session

Click the extension icon and select Start Session. You can optionally connect the session to a specific work item — a user story, a requirement, or a test case. This linkage is important because it gives your exploratory work traceability.

When the session starts, the extension begins tracking:

  • Every URL you visit
  • Time spent on each page
  • Any screenshots you take
  • Any screen recordings you capture
  • Any bugs, tasks, or test cases you create during the session

The session timeline builds automatically in the background. You do not need to do anything special — just test the application as you normally would.

Capturing Screenshots and Screen Recordings

Press the camera icon on the extension toolbar to capture a screenshot. The extension grabs the visible browser tab and opens an annotation editor. You can draw arrows, highlight areas, add text callouts, and blur sensitive information before saving.

For screen recordings, click the video icon. The extension records your browser tab (with audio if you enable it) and saves the recording as part of the session. This is extremely useful for reproducing intermittent bugs — you can show exactly what you did to trigger the issue.

Every screenshot and recording automatically attaches to the session timeline with a timestamp. When you create a bug from the session, these artifacts carry over as attachments.

Creating Bugs and Tasks from Sessions

This is where the Test & Feedback extension earns its keep. When you find a defect during an exploratory session, click the bug icon on the toolbar. The extension pre-populates a bug work item with:

  • Repro steps generated from your session activity (URLs visited, actions taken)
  • System info including browser version, OS, and screen resolution
  • Screenshots you captured during the session
  • Screen recordings if you recorded any

You can edit the bug details before submitting. The bug is created directly in Azure DevOps, linked to the session, and optionally linked to the work item you connected at session start.

You can also create tasks (for follow-up work) and test cases (to formalize a check you want to repeat) using the same workflow.

Session Timelines

Every exploratory testing session produces a timeline — a chronological record of everything that happened during the session. You can view session timelines in Azure Test Plans under Runs > Exploratory Testing Sessions.

The timeline shows:

  • URLs visited with timestamps
  • Screenshots and annotations
  • Bugs filed with links to work items
  • Notes and observations added during the session
  • Duration and page-level time breakdowns

This timeline is the raw evidence of your testing effort. It answers the question "what did the tester actually do?" — which is critical for audit trails and for understanding test coverage.

Connecting Sessions to Work Items

Exploratory sessions become far more valuable when they are connected to work items. When you start a session linked to a user story, Azure DevOps tracks which stories have been explored and which have not.

Navigate to Test Plans > Runs and filter by exploratory sessions. You will see a matrix showing:

  • Which work items have been explored
  • How many sessions covered each work item
  • How many bugs were found per work item
  • The total exploration time per work item

This data helps you answer a fundamental question: "Have we actually looked at this feature?" Scripted test coverage tells you which paths were verified. Exploratory session coverage tells you which areas received human attention.

Exploratory Testing Charters

A charter is a one- or two-sentence statement that guides an exploratory session. Good charters are specific enough to provide direction but broad enough to allow discovery.

Examples of effective charters:

  • "Explore the checkout flow with expired credit cards to find error handling gaps"
  • "Test the search feature with special characters and very long queries"
  • "Investigate performance when uploading files larger than 50MB"
  • "Explore the admin dashboard as a user with read-only permissions"

In Azure Test Plans, you can document charters in several ways:

  • As the title or description of the work item linked to the session
  • In the session notes within the Test & Feedback extension
  • As a dedicated test plan that lists charters as test suites

I prefer creating lightweight work items (tasks or user stories) for each charter and linking sessions to them. This gives you a backlog of testing ideas that you can prioritize and assign.
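If you want to script charter creation, the Work Item Tracking REST API can generate those lightweight work items in bulk. Below is a minimal sketch, assuming the Task work item type, api-version 7.0, and the same AZURE_DEVOPS_* environment variables used by the tooling later in this article:

var https = require("https");

// Creates a lightweight charter work item (here a Task) via the
// Work Item Tracking API. Assumes AZURE_DEVOPS_ORG, AZURE_DEVOPS_PROJECT,
// and AZURE_DEVOPS_PAT are set, matching the tooling shown later.
function createCharterWorkItem(charterTitle, callback) {
  var org = process.env.AZURE_DEVOPS_ORG;
  var project = process.env.AZURE_DEVOPS_PROJECT;
  var pat = process.env.AZURE_DEVOPS_PAT;

  // Work item creation uses JSON Patch, not a plain JSON body
  var patch = JSON.stringify([
    { op: "add", path: "/fields/System.Title", value: charterTitle },
    { op: "add", path: "/fields/System.Tags", value: "exploratory-charter" }
  ]);

  var options = {
    hostname: "dev.azure.com",
    path: "/" + org + "/" + encodeURIComponent(project) +
          "/_apis/wit/workitems/$Task?api-version=7.0",
    method: "POST",
    headers: {
      "Authorization": "Basic " + Buffer.from(":" + pat).toString("base64"),
      "Content-Type": "application/json-patch+json",
      "Content-Length": Buffer.byteLength(patch)
    }
  };

  var req = https.request(options, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      if (res.statusCode >= 200 && res.statusCode < 300) {
        callback(null, JSON.parse(data));
      } else {
        callback(new Error("HTTP " + res.statusCode + ": " + data));
      }
    });
  });

  req.on("error", callback);
  req.write(patch);
  req.end();
}

createCharterWorkItem("Explore the checkout flow with expired credit cards", function(err, wi) {
  if (err) return console.error(err.message);
  console.log("Created charter work item #" + wi.id);
});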

Session Insights and Analytics

Azure Test Plans aggregates exploratory testing data into analytics dashboards. Under Test Plans > Progress Report, you can view:

  • Exploration rate — percentage of work items that have been explored
  • Bug density — bugs found per session or per hour of testing
  • Session distribution — who tested what and when
  • Unexplored work items — features that have received zero exploratory attention

These metrics help test managers allocate effort. If a critical user story has zero exploratory sessions, that is a risk. If a low-priority feature has ten sessions and no bugs, you are probably over-testing it.

You can also build custom dashboards using Azure DevOps widgets. The "Test Results Trend" and "Chart for Work Items" widgets work well for tracking exploratory testing progress over sprints.

Stakeholder Feedback

Azure Test Plans supports a lighter-weight feedback mode for stakeholders who are not professional testers. The Test & Feedback extension can run in "Stakeholder" mode, which provides:

  • Screenshot capture with annotations
  • Feedback creation (similar to bugs but lighter-weight)
  • No session timeline tracking (simpler UX)

This mode requires only a Stakeholder access level — no Test Plans license needed. It is ideal for product owners, designers, and business analysts who want to report issues without dealing with full exploratory session management.

Stakeholder feedback items appear in Azure DevOps as "Feedback Response" work items that you can triage alongside bugs.
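If you triage in scripts rather than the UI, a WIQL query over the REST API can pull those items. A sketch, assuming the wit/wiql endpoint at api-version 7.0 and the same environment variables as the tooling below:

var https = require("https");

// Queries Feedback Response work items via WIQL so they can be triaged
// in scripts. Assumes the AZURE_DEVOPS_* environment variables used by
// the tooling shown later in this article.
function listFeedbackResponses(callback) {
  var org = process.env.AZURE_DEVOPS_ORG;
  var project = process.env.AZURE_DEVOPS_PROJECT;
  var pat = process.env.AZURE_DEVOPS_PAT;

  var body = JSON.stringify({
    query: "Select [System.Id], [System.Title] From WorkItems " +
           "Where [System.WorkItemType] = 'Feedback Response' " +
           "And [System.TeamProject] = @project Order By [System.CreatedDate] Desc"
  });

  var options = {
    hostname: "dev.azure.com",
    path: "/" + org + "/" + encodeURIComponent(project) + "/_apis/wit/wiql?api-version=7.0",
    method: "POST",
    headers: {
      "Authorization": "Basic " + Buffer.from(":" + pat).toString("base64"),
      "Content-Type": "application/json",
      "Content-Length": Buffer.byteLength(body)
    }
  };

  var req = https.request(options, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      if (res.statusCode < 200 || res.statusCode >= 300) {
        return callback(new Error("HTTP " + res.statusCode + ": " + data));
      }
      // WIQL returns work item references (id + url), not full field data
      callback(null, JSON.parse(data).workItems || []);
    });
  });

  req.on("error", callback);
  req.write(body);
  req.end();
}

listFeedbackResponses(function(err, items) {
  if (err) return console.error(err.message);
  items.forEach(function(item) { console.log("Feedback Response #" + item.id); });
});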

Exploratory vs. Scripted Testing

These two approaches are complementary, not competing. Here is how they compare:

| Aspect | Scripted Testing | Exploratory Testing |
|--------|------------------|---------------------|
| Design | Tests designed upfront | Tests designed during execution |
| Coverage | Predictable, repeatable | Adaptive, investigative |
| Best for | Regression, compliance | New features, edge cases |
| Documentation | Full test cases before execution | Session notes and timelines |
| Skill required | Following procedures | Domain expertise, curiosity |
| Automation | Easily automated | Cannot be automated (human-driven) |
| Blind spots | Only finds what was anticipated | Finds the unexpected |

In practice, a mature testing strategy uses both. Scripted tests form your safety net for regression. Exploratory tests find the bugs that your scripts never thought to check.

A common pattern is to run exploratory sessions on a new feature first, then formalize the important findings into scripted test cases for future regression coverage.

Integrating Exploratory Findings into Test Cases

When an exploratory session reveals an important behavior — whether it is a bug or a valid scenario — you should capture it as a scripted test case so it gets checked in future releases.

The Test & Feedback extension supports this directly. During a session, click the test case icon and the extension creates a new test case pre-populated with steps derived from your session activity. You can edit, refine, and add expected results before saving.

This workflow closes the loop between exploratory and scripted testing:

  1. Explorer finds an issue during a session
  2. Bug is filed and fixed
  3. Tester creates a scripted test case covering the scenario
  4. Test case runs in future regression cycles
  5. The defect never regresses

Without this formalization step, exploratory findings are one-time events. With it, they become permanent additions to your test suite.
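To formalize findings in bulk rather than one click at a time, the same Work Item Tracking pattern from the charter sketch above can create Test Case work items. A hedged sketch: the XML shape of the Microsoft.VSTS.TCM.Steps field is an assumption based on how Azure DevOps commonly stores test steps, so verify it against a test case created through the UI before relying on it.

var https = require("https");

// Creates a Test Case work item from an exploratory finding. The type
// segment "Test Case" contains a space, so it is URL-encoded.
function formalizeAsTestCase(title, action, expected, callback) {
  var org = process.env.AZURE_DEVOPS_ORG;
  var project = process.env.AZURE_DEVOPS_PROJECT;
  var pat = process.env.AZURE_DEVOPS_PAT;

  // Assumed steps XML shape; confirm against a UI-created test case
  var stepsXml =
    '<steps id="0" last="2">' +
      '<step id="2" type="ValidateStep">' +
        '<parameterizedString isformatted="true">' + action + '</parameterizedString>' +
        '<parameterizedString isformatted="true">' + expected + '</parameterizedString>' +
      '</step>' +
    '</steps>';

  var patch = JSON.stringify([
    { op: "add", path: "/fields/System.Title", value: title },
    { op: "add", path: "/fields/Microsoft.VSTS.TCM.Steps", value: stepsXml }
  ]);

  var options = {
    hostname: "dev.azure.com",
    path: "/" + org + "/" + encodeURIComponent(project) +
          "/_apis/wit/workitems/$" + encodeURIComponent("Test Case") + "?api-version=7.0",
    method: "POST",
    headers: {
      "Authorization": "Basic " + Buffer.from(":" + pat).toString("base64"),
      "Content-Type": "application/json-patch+json",
      "Content-Length": Buffer.byteLength(patch)
    }
  };

  var req = https.request(options, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      if (res.statusCode >= 200 && res.statusCode < 300) {
        callback(null, JSON.parse(data));
      } else {
        callback(new Error("HTTP " + res.statusCode + ": " + data));
      }
    });
  });

  req.on("error", callback);
  req.write(patch);
  req.end();
}

formalizeAsTestCase(
  "Upload rejects files larger than 50MB",
  "Upload a 60MB file on the upload page",
  "The upload is rejected with a clear size-limit error",
  function(err, testCase) {
    if (err) return console.error(err.message);
    console.log("Created test case #" + testCase.id);
  }
);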

Managing Exploratory Test Sessions via REST API with Node.js

Azure DevOps exposes exploratory testing data through the REST API, which means you can build tooling to automate session management, aggregate results, and generate reports. The relevant APIs live under the Test Management namespace.

Setting Up the API Client

var https = require("https");
var url = require("url");

var ORG = process.env.AZURE_DEVOPS_ORG;
var PROJECT = process.env.AZURE_DEVOPS_PROJECT;
var PAT = process.env.AZURE_DEVOPS_PAT;

var BASE_URL = "https://dev.azure.com/" + ORG + "/" + PROJECT;
var API_VERSION = "api-version=7.1-preview.1";

function getAuthHeader() {
  var token = Buffer.from(":" + PAT).toString("base64");
  return "Basic " + token;
}

function makeRequest(method, endpoint, body, callback) {
  var fullUrl = BASE_URL + endpoint;
  if (fullUrl.indexOf("?") === -1) {
    fullUrl += "?" + API_VERSION;
  } else {
    fullUrl += "&" + API_VERSION;
  }

  var parsed = url.parse(fullUrl);
  var options = {
    hostname: parsed.hostname,
    path: parsed.path,
    method: method,
    headers: {
      "Authorization": getAuthHeader(),
      "Content-Type": "application/json"
    }
  };

  var req = https.request(options, function(res) {
    var data = "";
    res.on("data", function(chunk) { data += chunk; });
    res.on("end", function() {
      if (res.statusCode >= 200 && res.statusCode < 300) {
        callback(null, JSON.parse(data || "{}"));
      } else {
        callback(new Error("HTTP " + res.statusCode + ": " + data));
      }
    });
  });

  req.on("error", function(err) { callback(err); });

  if (body) {
    req.write(JSON.stringify(body));
  }
  req.end();
}

Querying Test Sessions

function getExploratorySessions(callback) {
  var endpoint = "/_apis/test/session";
  makeRequest("GET", endpoint, null, function(err, result) {
    if (err) return callback(err);
    var sessions = result.value || [];
    callback(null, sessions);
  });
}

function getSessionById(sessionId, callback) {
  var endpoint = "/_apis/test/session/" + sessionId;
  makeRequest("GET", endpoint, null, callback);
}
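For example, listing every session with its state:

getExploratorySessions(function(err, sessions) {
  if (err) return console.error("Failed: " + err.message);
  sessions.forEach(function(s) {
    console.log("#" + s.id + " [" + s.state + "] " + s.title);
  });
});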

Creating a Test Session Programmatically

function createTestSession(title, ownerId, areaPath, callback) {
  var endpoint = "/_apis/test/session";
  var body = {
    title: title,
    area: {
      name: areaPath
    },
    owner: {
      id: ownerId
    },
    state: "notStarted",
    revision: 0
  };

  makeRequest("POST", endpoint, body, function(err, session) {
    if (err) return callback(err);
    console.log("Created session: " + session.id + " - " + session.title);
    callback(null, session);
  });
}

Updating Session State

function updateSessionState(sessionId, newState, comment, callback) {
  var endpoint = "/_apis/test/session/" + sessionId;
  var validStates = ["notStarted", "inProgress", "paused", "completed", "declined"];

  if (validStates.indexOf(newState) === -1) {
    return callback(new Error("Invalid state: " + newState));
  }

  var body = {
    state: newState,
    comment: comment || ""
  };

  makeRequest("PATCH", endpoint, body, function(err, result) {
    if (err) return callback(err);
    console.log("Session " + sessionId + " updated to: " + newState);
    callback(null, result);
  });
}
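Chaining the create and update helpers walks a session through its lifecycle. A quick sketch (replace the owner GUID and area path placeholders with real values from your project):

createTestSession("Explore the checkout flow with expired credit cards", "owner-guid-here", "MyProject\\QA", function(err, session) {
  if (err) return console.error(err.message);
  updateSessionState(session.id, "inProgress", "Charter started", function(err2) {
    if (err2) return console.error(err2.message);
    // ... exploratory testing happens here ...
    updateSessionState(session.id, "completed", "Charter finished, bugs filed", function(err3) {
      if (err3) console.error(err3.message);
    });
  });
});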

Fetching Work Items Linked to Sessions

function getSessionWorkItems(sessionId, callback) {
  var endpoint = "/_apis/test/session/" + sessionId + "/workitems";
  makeRequest("GET", endpoint, null, function(err, result) {
    if (err) return callback(err);
    callback(null, result.value || []);
  });
}

function getBugsFromSession(sessionId, callback) {
  getSessionWorkItems(sessionId, function(err, workItems) {
    if (err) return callback(err);
    var bugs = workItems.filter(function(wi) {
      return wi.workItemType === "Bug";
    });
    callback(null, bugs);
  });
}
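A short usage example, assuming each linked work item carries an id field alongside the workItemType field that the filter above relies on:

getBugsFromSession(42, function(err, bugs) {
  if (err) return console.error(err.message);
  console.log("Session 42 produced " + bugs.length + " bug(s)");
  bugs.forEach(function(bug) {
    console.log("  Bug #" + bug.id);
  });
});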

Complete Working Example: Exploratory Testing Session Manager

This Node.js tool creates exploratory testing sessions from charter definitions, tracks their progress, and generates summary reports.

var https = require("https");
var url = require("url");
var fs = require("fs");

// --- Configuration ---
var CONFIG = {
  org: process.env.AZURE_DEVOPS_ORG,
  project: process.env.AZURE_DEVOPS_PROJECT,
  pat: process.env.AZURE_DEVOPS_PAT,
  apiVersion: "7.1-preview.1"
};

var BASE_URL = "https://dev.azure.com/" + CONFIG.org + "/" + CONFIG.project;

// --- HTTP Client ---
function request(method, endpoint, body, callback) {
  var fullUrl = BASE_URL + endpoint;
  var separator = fullUrl.indexOf("?") === -1 ? "?" : "&";
  fullUrl += separator + "api-version=" + CONFIG.apiVersion;

  var parsed = url.parse(fullUrl);
  var token = Buffer.from(":" + CONFIG.pat).toString("base64");

  var options = {
    hostname: parsed.hostname,
    path: parsed.path,
    method: method,
    headers: {
      "Authorization": "Basic " + token,
      "Content-Type": "application/json"
    }
  };

  var req = https.request(options, function(res) {
    var chunks = [];
    res.on("data", function(chunk) { chunks.push(chunk); });
    res.on("end", function() {
      var raw = Buffer.concat(chunks).toString();
      if (res.statusCode >= 200 && res.statusCode < 300) {
        try {
          callback(null, JSON.parse(raw || "{}"));
        } catch (e) {
          callback(null, {});
        }
      } else {
        callback(new Error("HTTP " + res.statusCode + ": " + raw));
      }
    });
  });

  req.on("error", callback);
  if (body) req.write(JSON.stringify(body));
  req.end();
}

// --- Session Management ---
function createSession(charter, areaPath, ownerId, callback) {
  var body = {
    title: charter,
    area: { name: areaPath },
    owner: { id: ownerId },
    state: "notStarted",
    revision: 0
  };

  request("POST", "/_apis/test/session", body, function(err, session) {
    if (err) return callback(err);
    console.log("[CREATE] Session #" + session.id + ": " + charter);
    callback(null, session);
  });
}

function listSessions(callback) {
  request("GET", "/_apis/test/session", null, function(err, result) {
    if (err) return callback(err);
    callback(null, result.value || []);
  });
}

function completeSession(sessionId, comment, callback) {
  var body = {
    state: "completed",
    comment: comment
  };

  request("PATCH", "/_apis/test/session/" + sessionId, body, function(err, result) {
    if (err) return callback(err);
    console.log("[COMPLETE] Session #" + sessionId);
    callback(null, result);
  });
}

function getSessionWorkItems(sessionId, callback) {
  request("GET", "/_apis/test/session/" + sessionId + "/workitems", null, function(err, result) {
    if (err) return callback(err);
    callback(null, result.value || []);
  });
}

// --- Batch Session Creation from Charters ---
function createSessionsFromCharters(charters, areaPath, ownerId, callback) {
  var results = [];
  var index = 0;

  function next() {
    if (index >= charters.length) {
      return callback(null, results);
    }

    var charter = charters[index];
    index++;

    createSession(charter, areaPath, ownerId, function(err, session) {
      if (err) {
        console.error("[ERROR] Failed to create session for: " + charter);
        console.error("  " + err.message);
        results.push({ charter: charter, error: err.message });
      } else {
        results.push({ charter: charter, sessionId: session.id, state: session.state });
      }

      // Rate limit protection — 200ms delay between calls
      setTimeout(next, 200);
    });
  }

  next();
}

// --- Report Generation ---
function generateReport(callback) {
  listSessions(function(err, sessions) {
    if (err) return callback(err);

    var report = {
      generatedAt: new Date().toISOString(),
      totalSessions: sessions.length,
      byState: {},
      byOwner: {},
      sessions: []
    };

    // Aggregate by state
    sessions.forEach(function(session) {
      var state = session.state || "unknown";
      report.byState[state] = (report.byState[state] || 0) + 1;

      var owner = session.owner ? session.owner.displayName : "Unassigned";
      if (!report.byOwner[owner]) {
        report.byOwner[owner] = { total: 0, completed: 0, inProgress: 0 };
      }
      report.byOwner[owner].total++;
      if (state === "completed") report.byOwner[owner].completed++;
      if (state === "inProgress") report.byOwner[owner].inProgress++;
    });

    // Process each session for work item details
    var processed = 0;

    if (sessions.length === 0) {
      return callback(null, report);
    }

    sessions.forEach(function(session) {
      getSessionWorkItems(session.id, function(wiErr, workItems) {
        var bugs = [];
        var tasks = [];
        var testCases = [];

        if (!wiErr && workItems.length > 0) {
          workItems.forEach(function(wi) {
            if (wi.workItemType === "Bug") bugs.push(wi);
            else if (wi.workItemType === "Task") tasks.push(wi);
            else if (wi.workItemType === "Test Case") testCases.push(wi);
          });
        }

        report.sessions.push({
          id: session.id,
          title: session.title,
          state: session.state,
          owner: session.owner ? session.owner.displayName : "Unassigned",
          startedDate: session.startedDate || null,
          completedDate: session.completedDate || null,
          bugsFound: bugs.length,
          tasksCreated: tasks.length,
          testCasesCreated: testCases.length
        });

        processed++;

        if (processed === sessions.length) {
          // Sort sessions by ID
          report.sessions.sort(function(a, b) { return a.id - b.id; });

          // Calculate summary stats
          var totalBugs = 0;
          var totalTasks = 0;
          var totalTestCases = 0;
          report.sessions.forEach(function(s) {
            totalBugs += s.bugsFound;
            totalTasks += s.tasksCreated;
            totalTestCases += s.testCasesCreated;
          });

          report.summary = {
            totalBugsFound: totalBugs,
            totalTasksCreated: totalTasks,
            totalTestCasesFormalized: totalTestCases,
            avgBugsPerSession: sessions.length > 0
              ? (totalBugs / sessions.length).toFixed(2)
              : 0
          };

          callback(null, report);
        }
      });
    });
  });
}

function formatReportAsMarkdown(report) {
  var lines = [];
  lines.push("# Exploratory Testing Report");
  lines.push("");
  lines.push("Generated: " + report.generatedAt);
  lines.push("");
  lines.push("## Summary");
  lines.push("");
  lines.push("| Metric | Value |");
  lines.push("|--------|-------|");
  lines.push("| Total Sessions | " + report.totalSessions + " |");

  if (report.summary) {
    lines.push("| Total Bugs Found | " + report.summary.totalBugsFound + " |");
    lines.push("| Total Tasks Created | " + report.summary.totalTasksCreated + " |");
    lines.push("| Test Cases Formalized | " + report.summary.totalTestCasesFormalized + " |");
    lines.push("| Avg Bugs per Session | " + report.summary.avgBugsPerSession + " |");
  }

  lines.push("");
  lines.push("## Sessions by State");
  lines.push("");
  Object.keys(report.byState).forEach(function(state) {
    lines.push("- **" + state + "**: " + report.byState[state]);
  });

  lines.push("");
  lines.push("## Sessions by Owner");
  lines.push("");
  lines.push("| Owner | Total | Completed | In Progress |");
  lines.push("|-------|-------|-----------|-------------|");
  Object.keys(report.byOwner).forEach(function(owner) {
    var stats = report.byOwner[owner];
    lines.push("| " + owner + " | " + stats.total + " | " + stats.completed + " | " + stats.inProgress + " |");
  });

  lines.push("");
  lines.push("## Session Details");
  lines.push("");
  lines.push("| ID | Charter | State | Bugs | Tasks | Test Cases |");
  lines.push("|----|---------|-------|------|-------|------------|");

  if (report.sessions) {
    report.sessions.forEach(function(s) {
      lines.push("| " + s.id + " | " + s.title + " | " + s.state + " | " +
        s.bugsFound + " | " + s.tasksCreated + " | " + s.testCasesCreated + " |");
    });
  }

  return lines.join("\n");
}

// --- CLI Interface ---
function main() {
  var args = process.argv.slice(2);
  var command = args[0];

  if (!CONFIG.org || !CONFIG.pat || !CONFIG.project) {
    console.error("Error: Set AZURE_DEVOPS_ORG, AZURE_DEVOPS_PROJECT, and AZURE_DEVOPS_PAT");
    process.exit(1);
  }

  if (command === "create") {
    var charter = args[1];
    // areaPath is optional; when omitted, the last argument is the ownerId
    var areaPath = args.length >= 4 ? args[2] : CONFIG.project;
    var ownerId = args.length >= 4 ? args[3] : args[2];

    if (!charter || !ownerId) {
      console.error("Usage: node tool.js create <charter> [areaPath] <ownerId>");
      process.exit(1);
    }

    createSession(charter, areaPath, ownerId, function(err, session) {
      if (err) {
        console.error("Failed: " + err.message);
        process.exit(1);
      }
      console.log(JSON.stringify(session, null, 2));
    });

  } else if (command === "create-batch") {
    var charterFile = args[1];
    // areaPath is optional; when omitted, the last argument is the ownerId
    var batchAreaPath = args.length >= 4 ? args[2] : CONFIG.project;
    var batchOwnerId = args.length >= 4 ? args[3] : args[2];

    if (!charterFile || !batchOwnerId) {
      console.error("Usage: node tool.js create-batch <charters.json> [areaPath] <ownerId>");
      process.exit(1);
    }

    var charters = JSON.parse(fs.readFileSync(charterFile, "utf8"));
    createSessionsFromCharters(charters, batchAreaPath, batchOwnerId, function(err, results) {
      if (err) {
        console.error("Batch failed: " + err.message);
        process.exit(1);
      }
      console.log("\nBatch Results:");
      console.log(JSON.stringify(results, null, 2));
    });

  } else if (command === "list") {
    listSessions(function(err, sessions) {
      if (err) {
        console.error("Failed: " + err.message);
        process.exit(1);
      }
      sessions.forEach(function(s) {
        console.log("#" + s.id + " [" + s.state + "] " + s.title);
      });
    });

  } else if (command === "complete") {
    var sessionId = parseInt(args[1], 10);
    var completionComment = args[2] || "Session completed";

    if (!sessionId) {
      console.error("Usage: node tool.js complete <sessionId> [comment]");
      process.exit(1);
    }

    completeSession(sessionId, completionComment, function(err) {
      if (err) {
        console.error("Failed: " + err.message);
        process.exit(1);
      }
      console.log("Session completed successfully.");
    });

  } else if (command === "report") {
    var outputFile = args[1] || "exploratory-report.md";

    generateReport(function(err, report) {
      if (err) {
        console.error("Failed: " + err.message);
        process.exit(1);
      }

      var markdown = formatReportAsMarkdown(report);
      fs.writeFileSync(outputFile, markdown);
      console.log("Report written to: " + outputFile);

      // Also write a JSON version; strip any .md suffix first so a
      // non-.md output file is not silently overwritten
      var jsonFile = outputFile.replace(/\.md$/, "") + ".json";
      fs.writeFileSync(jsonFile, JSON.stringify(report, null, 2));
      console.log("JSON data written to: " + jsonFile);
    });

  } else {
    console.log("Exploratory Testing Session Manager");
    console.log("");
    console.log("Commands:");
    console.log("  create <charter> [areaPath] <ownerId>     Create a single session");
    console.log("  create-batch <file.json> [areaPath] <ownerId>  Create sessions from charter list");
    console.log("  list                                       List all sessions");
    console.log("  complete <sessionId> [comment]             Mark session as completed");
    console.log("  report [output.md]                         Generate summary report");
    console.log("");
    console.log("Environment variables:");
    console.log("  AZURE_DEVOPS_ORG       Organization name");
    console.log("  AZURE_DEVOPS_PROJECT   Project name");
    console.log("  AZURE_DEVOPS_PAT       Personal Access Token");
  }
}

main();

Save your charter list as a JSON array:

[
  "Explore login flow with expired sessions and concurrent logins",
  "Test file upload with oversized files and unsupported formats",
  "Investigate search results relevance with misspellings and partial matches",
  "Explore admin role permissions when accessing restricted endpoints",
  "Test notification delivery timing under high server load"
]

Run the tool:

# Set environment variables
export AZURE_DEVOPS_ORG="myorg"
export AZURE_DEVOPS_PROJECT="MyProject"
export AZURE_DEVOPS_PAT="your-pat-here"

# Create sessions from a charter file
node tool.js create-batch charters.json "MyProject\\QA" "owner-guid-here"

# List all sessions
node tool.js list

# Complete a session
node tool.js complete 42 "Found 3 bugs in file upload handling"

# Generate a report
node tool.js report sprint-22-exploratory.md

The generated Markdown report gives you a sprint-level view of exploratory testing effort, bug density, and coverage gaps.

Common Issues and Troubleshooting

1. Test & Feedback extension does not connect to Azure DevOps

This usually happens when your organization uses conditional access policies or your browser blocks third-party cookies. Try using Edge with InPrivate mode disabled, and make sure dev.azure.com and *.visualstudio.com are allowed in your cookie settings. If your org uses Azure AD conditional access, the extension may need to be allow-listed by your IT admin.

2. Screenshots are blank or show the wrong tab

The extension captures the active tab content. If you switch tabs immediately after clicking the screenshot button, you get a blank or wrong capture. Wait for the annotation editor to appear before switching tabs. Also, some browser security policies prevent capturing certain pages (banking sites, internal corporate portals with Content-Security-Policy headers).

3. Session data does not appear in Azure Test Plans

Sessions require an active connection to Azure DevOps. If your network drops during a session, the extension caches data locally but may not sync automatically when connectivity returns. Open the extension settings and click "Sync" to force a reconnection. Also verify that the user running the session has the correct license — Basic access alone does not include Test Plans features.

4. REST API returns 403 on session endpoints

The test session APIs require the PAT to have "Test Management" scope at the Read & Write level. A common mistake is creating a PAT with only "Work Items" scope and expecting it to access test sessions. Regenerate your PAT with the correct scopes. Also check that the user associated with the PAT has at least Basic + Test Plans access level in the organization.
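A quick way to verify the PAT and its scopes from the command line is to hit the session endpoint directly with curl; a 200 status means the token can read sessions, while 401 or 403 points back to scopes or licensing:

# Prints only the HTTP status code
curl -s -o /dev/null -w "%{http_code}\n" \
  -u ":$AZURE_DEVOPS_PAT" \
  "https://dev.azure.com/$AZURE_DEVOPS_ORG/$AZURE_DEVOPS_PROJECT/_apis/test/session?api-version=7.1-preview.1"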

5. Exploratory sessions do not link to work items automatically

You must explicitly connect a session to a work item when starting it through the extension. There is no automatic linking based on which page you are testing. If you forget to connect at session start, you can still manually link work items to the session afterward by editing the session in Azure Test Plans and adding associated work items through the UI.

Best Practices

  • Always use charters. Exploratory sessions without charters tend to wander aimlessly. Even a single sentence like "Explore the payment flow with invalid inputs" keeps you focused and makes your session data meaningful to others.

  • Time-box strictly. Set a timer for 60 minutes. When it rings, stop and document your findings. Long, unfocused sessions produce diminishing returns. Short, intense sessions with clear charters find more bugs per hour.

  • File bugs immediately during the session. Do not wait until after the session to create bugs from memory. The Test & Feedback extension captures context automatically — repro steps, screenshots, system info — but only if you file the bug while the session is active.

  • Link every session to a work item. Unlinked sessions are invisible in coverage reports. If you want to answer "have we explored the new checkout feature?" every session testing that feature must be linked to the corresponding user story.

  • Convert critical findings into scripted test cases. Exploratory testing is ephemeral by nature. If you find an important edge case, formalize it as a test case so it gets checked in every future regression cycle. The extension makes this a one-click operation.

  • Rotate testers across features. Fresh eyes find different bugs. If the same person tests the same feature repeatedly, they develop blind spots. Assign charters to people who did not build the feature and have not tested it before.

  • Review session timelines in retrospectives. Session timelines are a goldmine of information about how your team actually tests. Review them during sprint retrospectives to identify testing patterns, gaps, and opportunities for improvement.

  • Use the REST API for reporting at scale. Once you have more than a handful of sessions, manual tracking breaks down. Build automated reports using the Node.js tooling shown above to aggregate session data across sprints and track trends over time.

  • Pair exploratory testing with monitoring. Run an exploratory session while watching server logs, application performance metrics, or error tracking dashboards. You will catch issues that are invisible in the browser — slow queries, memory leaks, error spikes — that traditional exploratory testing misses.
