Test Result Analysis and Trend Reporting
A practical guide to analyzing test results and building trend reports in Azure DevOps, covering the Analytics service, dashboard widgets, test outcome trends, flaky test detection, custom reports via REST API, and OData queries for Power BI integration.
Overview
Running tests is the easy part. Understanding what the results mean over time -- which tests are consistently failing, which are flaky, whether quality is trending up or down across sprints, and where to focus testing effort -- is where most teams fall short. Azure DevOps provides built-in Analytics, dashboard widgets, and REST APIs for test result analysis, but you have to configure them deliberately. Out of the box, you get a pass/fail count per build. With proper setup, you get trend charts, flaky test identification, pass rate histories, and duration regression detection.
I have built custom test dashboards for teams that went from "we run tests in CI" to "we know exactly which areas of the codebase are undertested, which tests waste the most time, and which tests lie about their results." The built-in widgets cover 80% of what you need. The REST API and OData feed cover the rest. This article walks through both, with working scripts for every common reporting scenario.
Prerequisites
- An Azure DevOps organization with Azure Pipelines running automated tests
- Test results published via the PublishTestResults@2 task (at least 2-3 weeks of history)
- Azure DevOps Analytics enabled (it is on by default for Azure DevOps Services)
- Dashboard edit permissions for creating widgets
- Node.js 18+ for the custom reporting scripts
- Basic familiarity with Azure DevOps dashboards and widgets
Built-In Test Analytics
The Test Results Trend Widget
Azure DevOps provides a "Test Results Trend" dashboard widget that shows pass/fail trends over time. Add it to your project dashboard:
- Navigate to Dashboards > Edit
- Click Add Widget and search for "Test Results Trend"
- Configure:
- Pipeline: Select the pipeline(s) to track
- Period: Last 7/14/30 days or custom range
- Group by: Date, branch, or test suite
- Outcome filter: All, Passed, Failed, or specific outcomes
This widget shows a stacked bar chart with passed, failed, and other outcomes per build. It is the quickest way to see if your test suite is healthy.
The Test Results Trend (Advanced) Widget
The advanced widget adds:
- Duration trend: See if tests are getting slower over time
- Pass rate line: Overlay a pass percentage line on the bar chart
- Filtering by test file or namespace: Drill into specific areas
Configure it to show the last 30 days of your main branch pipeline. If the pass rate dips below 95%, investigate immediately.
Pipeline Test Tab
Every pipeline run has a Tests tab that shows:
- Total tests, passed, failed, not executed
- Individual test results with duration
- Failed test stack traces and error messages
- Comparison with the previous run (new failures, fixed tests)
- Grouping by test file, namespace, or outcome
This tab is populated automatically when you use PublishTestResults@2. No additional configuration needed.
Querying Test Results via REST API
For custom analysis beyond what the dashboard widgets provide, use the Test Results REST API.
Get Test Runs for a Pipeline
// get-test-runs.js
var https = require("https");
var org = process.env.AZURE_DEVOPS_ORG || "my-organization";
var project = process.env.AZURE_DEVOPS_PROJECT || "my-project";
var pat = process.env.AZURE_DEVOPS_PAT;
var auth = Buffer.from(":" + pat).toString("base64");
function apiGet(path, callback) {
var options = {
hostname: "dev.azure.com",
path: "/" + org + "/" + project + "/_apis" + path,
method: "GET",
headers: { "Authorization": "Basic " + auth, "Accept": "application/json" }
};
var req = https.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() { callback(null, res.statusCode, JSON.parse(data)); });
});
req.on("error", function(err) { callback(err); });
req.end();
}
// Get test runs from the last 14 days
var minDate = new Date();
minDate.setDate(minDate.getDate() - 14);
var dateStr = minDate.toISOString();
apiGet("/test/runs?minLastUpdatedDate=" + dateStr + "&api-version=7.1", function(err, status, data) {
if (err) return console.error("Error:", err.message);
var runs = data.value || [];
console.log("Test Runs (last 14 days): " + runs.length);
console.log("=================================");
runs.forEach(function(run) {
var passRate = run.totalTests > 0 ?
(run.passedTests / run.totalTests * 100).toFixed(1) : "N/A";
console.log(run.name);
console.log(" Date: " + new Date(run.completedDate).toLocaleDateString());
console.log(" Total: " + run.totalTests + " | Passed: " + run.passedTests +
" | Failed: " + run.unanalyzedTests);
console.log(" Pass rate: " + passRate + "%");
console.log(" State: " + run.state);
console.log("");
});
});
Get Failed Test Details
// get-failures.js -- reuses the apiGet helper and credential setup from get-test-runs.js above
var runId = process.argv[2];
if (!runId) {
console.error("Usage: node get-failures.js <testRunId>");
process.exit(1);
}
apiGet("/test/runs/" + runId + "/results?outcomes=Failed&api-version=7.1", function(err, status, data) {
if (err) return console.error("Error:", err.message);
var results = data.value || [];
console.log("Failed Tests in Run #" + runId + " (" + results.length + " failures):");
console.log("");
results.forEach(function(result) {
console.log(" " + result.testCaseTitle);
console.log(" Duration: " + result.durationInMs + "ms");
console.log(" Error: " + (result.errorMessage || "No message").substring(0, 200));
if (result.stackTrace) {
console.log(" Stack: " + result.stackTrace.substring(0, 300) + "...");
}
console.log("");
});
});
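Invoke it with a run ID taken from the get-test-runs.js output (the ID below is a placeholder):
node get-failures.js 4567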
Flaky Test Detection
Flaky tests pass and fail intermittently without code changes. They erode trust in the test suite and waste developer time investigating phantom failures. Azure DevOps has built-in flaky test detection, but you can also build custom detection.
Built-In Flaky Test Detection
Enable it in Project Settings > Pipelines > Test Management > Flaky Test Detection. Azure DevOps analyzes test results across multiple runs and flags tests that alternate between pass and fail outcomes on the same code.
Once enabled:
- Flaky tests appear with a yellow warning icon in the Tests tab
- You can configure pipelines to not fail on flaky test results
- The system tracks flaky history per test
Custom Flaky Detection Script
For more control, build your own detection based on historical results:
// detect-flaky.js -- Find tests that flip between pass/fail
var https = require("https");
var org = process.env.AZURE_DEVOPS_ORG || "my-organization";
var project = process.env.AZURE_DEVOPS_PROJECT || "my-project";
var pat = process.env.AZURE_DEVOPS_PAT;
var auth = Buffer.from(":" + pat).toString("base64");
var lookbackDays = parseInt(process.argv[2]) || 14;
var minFlipCount = parseInt(process.argv[3]) || 3;
function apiGet(path, callback) {
var options = {
hostname: "dev.azure.com",
path: "/" + org + "/" + project + "/_apis" + path,
method: "GET",
headers: { "Authorization": "Basic " + auth, "Accept": "application/json" }
};
var req = https.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() {
var parsed;
try { parsed = JSON.parse(data); } catch (e) { parsed = { value: [] }; }
callback(null, parsed);
});
});
req.on("error", function(err) { callback(err); });
req.end();
}
var minDate = new Date();
minDate.setDate(minDate.getDate() - lookbackDays);
// Get all test runs in the lookback period
apiGet("/test/runs?minLastUpdatedDate=" + minDate.toISOString() +
"&$top=100&api-version=7.1", function(err, runsData) {
if (err) return console.error("Error:", err.message);
var runs = runsData.value || [];
console.log("Analyzing " + runs.length + " test runs from last " + lookbackDays + " days...");
var testHistory = {}; // testName -> [outcomes]
var completed = 0;
if (runs.length === 0) {
console.log("No test runs found.");
return;
}
runs.forEach(function(run) {
apiGet("/test/runs/" + run.id + "/results?$top=5000&api-version=7.1", function(err, resultData) {
completed++;
if (!err) {
(resultData.value || []).forEach(function(result) {
var name = result.automatedTestName || result.testCaseTitle;
if (!testHistory[name]) testHistory[name] = [];
testHistory[name].push(result.outcome);
});
}
if (completed === runs.length) {
analyzeFlakiness(testHistory);
}
});
});
});
function analyzeFlakiness(testHistory) {
var flakyTests = [];
Object.keys(testHistory).forEach(function(testName) {
var outcomes = testHistory[testName];
if (outcomes.length < 3) return; // Need enough history
var flipCount = 0;
for (var i = 1; i < outcomes.length; i++) {
if (outcomes[i] !== outcomes[i - 1]) {
flipCount++;
}
}
if (flipCount >= minFlipCount) {
var passCount = outcomes.filter(function(o) { return o === "Passed"; }).length;
var failCount = outcomes.filter(function(o) { return o === "Failed"; }).length;
flakyTests.push({
name: testName,
flips: flipCount,
runs: outcomes.length,
passRate: (passCount / outcomes.length * 100).toFixed(1),
recentOutcomes: outcomes.slice(-5).join(", ")
});
}
});
flakyTests.sort(function(a, b) { return b.flips - a.flips; });
console.log("");
console.log("Flaky Test Report");
console.log("=================");
console.log("Lookback: " + lookbackDays + " days | Min flips: " + minFlipCount);
console.log("Total unique tests analyzed: " + Object.keys(testHistory).length);
console.log("Flaky tests found: " + flakyTests.length);
console.log("");
if (flakyTests.length === 0) {
console.log("No flaky tests detected. Nice!");
return;
}
flakyTests.forEach(function(test) {
console.log(" " + test.name);
console.log(" Flips: " + test.flips + " in " + test.runs + " runs");
console.log(" Pass rate: " + test.passRate + "%");
console.log(" Recent: " + test.recentOutcomes);
console.log("");
});
}
node detect-flaky.js 14 3
# Output:
# Analyzing 28 test runs from last 14 days...
#
# Flaky Test Report
# =================
# Lookback: 14 days | Min flips: 3
# Total unique tests analyzed: 342
# Flaky tests found: 5
#
# UserService.createUser_withDuplicateEmail_throwsConflict
# Flips: 7 in 28 runs
# Pass rate: 60.7%
# Recent: Failed, Passed, Failed, Passed, Failed
#
# OrderController.submitOrder_withPayment_redirectsToConfirmation
# Flips: 4 in 28 runs
# Pass rate: 78.6%
# Recent: Passed, Passed, Failed, Passed, Passed
Building Custom Dashboards
Test Pass Rate Over Time
// pass-rate-trend.js -- Generate pass rate trend data
var https = require("https");
var org = process.env.AZURE_DEVOPS_ORG || "my-organization";
var project = process.env.AZURE_DEVOPS_PROJECT || "my-project";
var pat = process.env.AZURE_DEVOPS_PAT;
var auth = Buffer.from(":" + pat).toString("base64");
var days = parseInt(process.argv[2]) || 30;
function apiGet(path, callback) {
var options = {
hostname: "dev.azure.com",
path: "/" + org + "/" + project + "/_apis" + path,
method: "GET",
headers: { "Authorization": "Basic " + auth, "Accept": "application/json" }
};
var req = https.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() { callback(null, JSON.parse(data)); });
});
req.on("error", function(err) { callback(err); });
req.end();
}
var minDate = new Date();
minDate.setDate(minDate.getDate() - days);
apiGet("/test/runs?minLastUpdatedDate=" + minDate.toISOString() +
"&$top=200&api-version=7.1", function(err, data) {
if (err) return console.error("Error:", err.message);
var dailyStats = {};
(data.value || []).forEach(function(run) {
if (run.state !== "Completed" || run.totalTests === 0) return;
var dateKey = new Date(run.completedDate).toISOString().split("T")[0];
if (!dailyStats[dateKey]) {
dailyStats[dateKey] = { total: 0, passed: 0, failed: 0, runs: 0 };
}
dailyStats[dateKey].total += run.totalTests;
dailyStats[dateKey].passed += run.passedTests;
dailyStats[dateKey].failed += run.unanalyzedTests;
dailyStats[dateKey].runs++;
});
console.log("Test Pass Rate Trend (last " + days + " days)");
console.log("==========================================");
console.log("");
console.log("Date | Runs | Total | Passed | Failed | Rate");
console.log("------------|------|--------|--------|--------|------");
var dates = Object.keys(dailyStats).sort();
dates.forEach(function(date) {
var s = dailyStats[date];
var rate = (s.passed / s.total * 100).toFixed(1);
var bar = rate >= 95 ? "OK" : rate >= 90 ? "WARN" : "FAIL";
console.log(date + " | " +
String(s.runs).padStart(4) + " | " +
String(s.total).padStart(6) + " | " +
String(s.passed).padStart(6) + " | " +
String(s.failed).padStart(6) + " | " +
rate + "% " + bar);
});
});
node pass-rate-trend.js 14
# Output:
# Test Pass Rate Trend (last 14 days)
# ==========================================
#
# Date | Runs | Total | Passed | Failed | Rate
# ------------|------|--------|--------|--------|------
# 2026-01-27 | 4 | 684 | 680 | 4 | 99.4% OK
# 2026-01-28 | 6 | 1026 | 1014 | 12 | 98.8% OK
# 2026-01-29 | 5 | 855 | 830 | 25 | 97.1% OK
# 2026-01-30 | 3 | 513 | 498 | 15 | 97.1% OK
# 2026-02-03 | 8 | 1368 | 1290 | 78 | 94.3% WARN
# 2026-02-04 | 7 | 1197 | 1170 | 27 | 97.7% OK
Test Duration Regression Detection
// duration-regression.js -- Find tests that are getting slower
var https = require("https");
var org = process.env.AZURE_DEVOPS_ORG || "my-organization";
var project = process.env.AZURE_DEVOPS_PROJECT || "my-project";
var pat = process.env.AZURE_DEVOPS_PAT;
var auth = Buffer.from(":" + pat).toString("base64");
function apiGet(path, callback) {
var options = {
hostname: "dev.azure.com",
path: "/" + org + "/" + project + "/_apis" + path,
method: "GET",
headers: { "Authorization": "Basic " + auth, "Accept": "application/json" }
};
var req = https.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() { callback(null, JSON.parse(data)); });
});
req.on("error", function(err) { callback(err); });
req.end();
}
var runId1 = process.argv[2]; // older run
var runId2 = process.argv[3]; // newer run
var thresholdPct = parseFloat(process.argv[4]) || 50; // alert if >50% slower
if (!runId1 || !runId2) {
console.error("Usage: node duration-regression.js <oldRunId> <newRunId> [thresholdPercent]");
process.exit(1);
}
function getResults(runId, callback) {
apiGet("/test/runs/" + runId + "/results?$top=5000&api-version=7.1", function(err, data) {
if (err) return callback(err);
var map = {};
(data.value || []).forEach(function(r) {
var name = r.automatedTestName || r.testCaseTitle;
map[name] = { duration: r.durationInMs, outcome: r.outcome };
});
callback(null, map);
});
}
getResults(runId1, function(err, oldResults) {
if (err) return console.error("Error:", err.message);
getResults(runId2, function(err, newResults) {
if (err) return console.error("Error:", err.message);
var regressions = [];
Object.keys(newResults).forEach(function(name) {
if (!oldResults[name]) return;
var oldDuration = oldResults[name].duration;
var newDuration = newResults[name].duration;
if (oldDuration === 0) return;
var changePct = ((newDuration - oldDuration) / oldDuration * 100);
if (changePct > thresholdPct) {
regressions.push({
name: name,
oldMs: oldDuration,
newMs: newDuration,
changePct: changePct.toFixed(1)
});
}
});
regressions.sort(function(a, b) { return parseFloat(b.changePct) - parseFloat(a.changePct); });
console.log("Duration Regression Report");
console.log("=========================");
console.log("Comparing run #" + runId1 + " -> #" + runId2);
console.log("Threshold: >" + thresholdPct + "% slower");
console.log("Tests compared: " + Object.keys(newResults).length);
console.log("Regressions found: " + regressions.length);
console.log("");
regressions.forEach(function(r) {
console.log(" " + r.name);
console.log(" " + r.oldMs + "ms -> " + r.newMs + "ms (+" + r.changePct + "%)");
});
});
});
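Run it against an older and a newer run of the same pipeline, with an optional threshold percentage (the run IDs below are placeholders):
node duration-regression.js 4521 4567 50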
OData and Power BI Integration
Azure DevOps exposes test data through an OData feed that Power BI can consume directly.
OData Endpoint
The Analytics OData endpoint for daily test results is:
https://analytics.dev.azure.com/{org}/{project}/_odata/v4.0-preview/TestResultsDaily
Power BI Connection
- Open Power BI Desktop
- Click Get Data > OData Feed
- Enter the Analytics URL
- Authenticate with your Azure DevOps credentials
- Select tables: TestResultsDaily, TestRuns, TestPoints
- Build visualizations
OData Query Examples
Filter test results for the last 30 days:
https://analytics.dev.azure.com/my-org/my-project/_odata/v4.0-preview/TestResultsDaily?
$filter=DateSK ge 20260110&
$select=DateSK,TotalCount,ResultPassCount,ResultFailCount,ResultNotExecutedCount&
$orderby=DateSK desc
Get pass rate by test suite:
https://analytics.dev.azure.com/my-org/my-project/_odata/v4.0-preview/TestResultsDaily?
$apply=groupby((TestSuite/TestSuiteName),aggregate(
TotalCount with sum as TotalTests,
ResultPassCount with sum as PassedTests))&
$orderby=TotalTests desc
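If you want the same numbers outside Power BI, the feed can also be queried from a script. Below is a minimal Node sketch, assuming the same PAT-based environment variables as the earlier scripts and a PAT with the Analytics (Read) scope; the entity, field names, and DateSK value mirror the first query above.
// odata-trend.js -- a minimal sketch of pulling TestResultsDaily from the Analytics feed
var https = require("https");
var org = process.env.AZURE_DEVOPS_ORG || "my-organization";
var project = process.env.AZURE_DEVOPS_PROJECT || "my-project";
var pat = process.env.AZURE_DEVOPS_PAT;
var auth = Buffer.from(":" + pat).toString("base64");
// Same entity and fields as the first query above; the DateSK value is an example.
var query = "/_odata/v4.0-preview/TestResultsDaily?" +
  "$filter=" + encodeURIComponent("DateSK ge 20260110") +
  "&$select=DateSK,TotalCount,ResultPassCount,ResultFailCount" +
  "&$orderby=" + encodeURIComponent("DateSK desc");
var options = {
  hostname: "analytics.dev.azure.com",
  path: "/" + org + "/" + project + query,
  method: "GET",
  headers: { "Authorization": "Basic " + auth, "Accept": "application/json" }
};
var req = https.request(options, function(res) {
  var data = "";
  res.on("data", function(chunk) { data += chunk; });
  res.on("end", function() {
    var rows = [];
    try { rows = JSON.parse(data).value || []; }
    catch (e) { return console.error("Unexpected response:", data.substring(0, 200)); }
    rows.forEach(function(row) {
      var rate = row.TotalCount > 0 ? (row.ResultPassCount / row.TotalCount * 100).toFixed(1) : "N/A";
      console.log(row.DateSK + ": " + row.ResultPassCount + "/" + row.TotalCount + " passed (" + rate + "%)");
    });
  });
});
req.on("error", function(err) { console.error("Error:", err.message); });
req.end();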
Complete Working Example
A comprehensive test quality dashboard script that generates a full report:
// test-quality-report.js -- Complete test quality dashboard
var https = require("https");
var org = process.env.AZURE_DEVOPS_ORG || "my-organization";
var project = process.env.AZURE_DEVOPS_PROJECT || "my-project";
var pat = process.env.AZURE_DEVOPS_PAT;
var auth = Buffer.from(":" + pat).toString("base64");
function apiGet(path, callback) {
var options = {
hostname: "dev.azure.com",
path: "/" + org + "/" + project + "/_apis" + path,
method: "GET",
headers: { "Authorization": "Basic " + auth, "Accept": "application/json" }
};
var req = https.request(options, function(res) {
var data = "";
res.on("data", function(chunk) { data += chunk; });
res.on("end", function() {
try { callback(null, JSON.parse(data)); }
catch (e) { callback(new Error("Parse error")); }
});
});
req.on("error", function(err) { callback(err); });
req.end();
}
var days = parseInt(process.argv[2]) || 7;
var minDate = new Date();
minDate.setDate(minDate.getDate() - days);
console.log("===========================================");
console.log(" Test Quality Report");
console.log(" " + org + "/" + project);
console.log(" Period: last " + days + " days");
console.log(" Generated: " + new Date().toISOString().split("T")[0]);
console.log("===========================================");
console.log("");
apiGet("/test/runs?minLastUpdatedDate=" + minDate.toISOString() +
"&$top=200&api-version=7.1", function(err, runsData) {
if (err) return console.error("Error:", err.message);
var runs = (runsData.value || []).filter(function(r) {
return r.state === "Completed" && r.totalTests > 0;
});
// Section 1: Summary
var totalTests = 0;
var totalPassed = 0;
var totalFailed = 0;
runs.forEach(function(r) {
totalTests += r.totalTests;
totalPassed += r.passedTests;
totalFailed += r.unanalyzedTests;
});
var overallPassRate = totalTests > 0 ? (totalPassed / totalTests * 100).toFixed(1) : "N/A";
console.log("1. SUMMARY");
console.log(" Test runs: " + runs.length);
console.log(" Total test executions: " + totalTests);
console.log(" Passed: " + totalPassed);
console.log(" Failed: " + totalFailed);
console.log(" Overall pass rate: " + overallPassRate + "%");
console.log("");
// Section 2: Daily trend
var dailyStats = {};
runs.forEach(function(r) {
var date = new Date(r.completedDate).toISOString().split("T")[0];
if (!dailyStats[date]) dailyStats[date] = { total: 0, passed: 0, failed: 0 };
dailyStats[date].total += r.totalTests;
dailyStats[date].passed += r.passedTests;
dailyStats[date].failed += r.unanalyzedTests;
});
console.log("2. DAILY TREND");
Object.keys(dailyStats).sort().forEach(function(date) {
var s = dailyStats[date];
var rate = (s.passed / s.total * 100).toFixed(1);
var indicator = rate >= 95 ? "[OK] " : rate >= 90 ? "[WARN]" : "[FAIL]";
console.log(" " + date + " " + indicator + " " + rate + "% (" + s.failed + " failures)");
});
console.log("");
// Section 3: Top failures (analyze latest run)
if (runs.length > 0) {
var latestRun = runs.sort(function(a, b) {
return new Date(b.completedDate) - new Date(a.completedDate);
})[0];
apiGet("/test/runs/" + latestRun.id + "/results?outcomes=Failed&$top=10&api-version=7.1",
function(err, failData) {
console.log("3. TOP FAILURES (from latest run #" + latestRun.id + ")");
if (err || !failData.value || failData.value.length === 0) {
console.log(" No failures in latest run.");
} else {
failData.value.forEach(function(f, i) {
console.log(" " + (i + 1) + ". " + (f.testCaseTitle || f.automatedTestName));
console.log(" Error: " + (f.errorMessage || "No message").substring(0, 120));
console.log(" Duration: " + f.durationInMs + "ms");
});
}
console.log("");
console.log("===========================================");
console.log(" Report complete.");
});
}
});
node test-quality-report.js 7
# Output:
# ===========================================
# Test Quality Report
# my-organization/my-project
# Period: last 7 days
# Generated: 2026-02-09
# ===========================================
#
# 1. SUMMARY
# Test runs: 18
# Total test executions: 3078
# Passed: 3012
# Failed: 66
# Overall pass rate: 97.9%
#
# 2. DAILY TREND
# 2026-02-03 [WARN] 94.3% (78 failures)
# 2026-02-04 [OK] 97.7% (27 failures)
# 2026-02-05 [OK] 98.2% (21 failures)
# 2026-02-06 [OK] 99.1% (10 failures)
# 2026-02-07 [OK] 98.5% (18 failures)
#
# 3. TOP FAILURES (from latest run #4567)
# 1. UserService.createUser_withDuplicateEmail_throwsConflict
# Error: Expected ConflictException but got TimeoutException
# Duration: 5023ms
# 2. OrderController.submitOrder_withPayment_redirectsToConfirmation
# Error: Element not found: #confirm-button (timeout 10s)
# Duration: 10340ms
Common Issues and Troubleshooting
1. Analytics Widgets Show No Data
Error: Dashboard widgets display "No data available" despite having test results.
Analytics data takes up to 24 hours to populate after test results are published. Verify test results are being published by checking the Tests tab on individual pipeline runs. Also ensure Analytics is enabled in Organization Settings > Extensions.
2. OData Query Returns 401
Error: Power BI or API queries to the Analytics OData endpoint return 401 Unauthorized.
The PAT needs the Analytics (Read) scope. Create a new PAT with this scope. For Power BI, re-authenticate the connection with the updated credentials.
3. Test Results Missing from Trend Reports
Error: Certain pipeline runs do not appear in trend widgets.
The pipeline must use the PublishTestResults@2 task for results to appear in Analytics. Script-based test execution without the publish task does not report to Analytics. Also check that the widget filter matches the correct pipeline definition.
4. Flaky Test Detection Not Working
Error: Azure DevOps does not flag known flaky tests.
Flaky test detection requires at least 3-5 runs with alternating outcomes to trigger. New flaky tests are not flagged immediately. Verify the feature is enabled in Project Settings > Pipelines > Test Management.
5. Pass Rate Calculation Differs from Widget
Error: Your custom API query shows a different pass rate than the dashboard widget.
The widget may include "Not Executed" and "Not Impacted" tests in the total, while your query may count only "Passed" and "Failed." Align your calculations by including all outcomes: totalTests = passed + failed + notExecuted + other.
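As a sketch of the arithmetic, here are both denominators computed from the per-result outcome field returned by the results endpoint used in the scripts above (illustrative only; the function name is hypothetical):
// pass-rate-alignment.js -- illustrative: two ways to compute a pass rate from per-result outcomes
// 'results' is the array returned by /test/runs/{id}/results, as used in the scripts above.
function passRates(results) {
  var counts = {};
  results.forEach(function(r) {
    counts[r.outcome] = (counts[r.outcome] || 0) + 1;
  });
  var passed = counts["Passed"] || 0;
  var failed = counts["Failed"] || 0;
  var allOutcomes = results.length; // every outcome, which the widget may use as its denominator
  return {
    widgetStyle: allOutcomes > 0 ? (passed / allOutcomes * 100).toFixed(1) : "N/A",
    executedOnly: (passed + failed) > 0 ? (passed / (passed + failed) * 100).toFixed(1) : "N/A"
  };
}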
Best Practices
Review test trends weekly, not just per-build. A single failed test is noise. A downward trend over two weeks is a signal. Set up a weekly dashboard review cadence.
Track flaky tests as first-class defects. Create work items for flaky tests and prioritize fixing them. A flaky test that fails 20% of the time forces a rerun or an investigation on roughly one build in five.
Set pass rate targets per quality level. Unit tests: 99%+. Integration tests: 97%+. E2E tests: 95%+. Lower pass rates indicate infrastructure instability or flakiness.
Use duration trends to catch performance regressions. A test that goes from 100ms to 500ms is a performance regression even if it still passes. Track test duration alongside outcomes.
Automate quality reports as pipeline artifacts. Run the reporting scripts in a scheduled pipeline and publish the output. This creates an audit trail and makes reports available without manual effort.
Correlate test failures with code changes. When a test starts failing, check which commits were included in that build. Azure DevOps links pipeline runs to commits, making this investigation straightforward; a sketch using the Build Changes REST API appears at the end of this section.
Archive historical data for long-term trend analysis. Azure DevOps Analytics retains data for a limited period. Export trend data periodically to a data warehouse for multi-quarter analysis.
Create separate dashboards for different audiences. Developers need per-test failure details. Managers need pass rate trends. QA leads need flaky test reports. One dashboard cannot serve all audiences.
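To support the correlation practice above, here is a minimal sketch that lists the commits included in a given build via the Build Changes REST API. It assumes the same environment variables and PAT-based authentication as the earlier scripts; pass the build ID of the run you are investigating.
// build-changes.js -- a minimal sketch: list the commits included in a build
var https = require("https");
var org = process.env.AZURE_DEVOPS_ORG || "my-organization";
var project = process.env.AZURE_DEVOPS_PROJECT || "my-project";
var pat = process.env.AZURE_DEVOPS_PAT;
var auth = Buffer.from(":" + pat).toString("base64");
var buildId = process.argv[2];
if (!buildId) {
  console.error("Usage: node build-changes.js <buildId>");
  process.exit(1);
}
var options = {
  hostname: "dev.azure.com",
  path: "/" + org + "/" + project + "/_apis/build/builds/" + buildId + "/changes?api-version=7.1",
  method: "GET",
  headers: { "Authorization": "Basic " + auth, "Accept": "application/json" }
};
var req = https.request(options, function(res) {
  var data = "";
  res.on("data", function(chunk) { data += chunk; });
  res.on("end", function() {
    var changes = [];
    try { changes = JSON.parse(data).value || []; } catch (e) {}
    console.log("Changes in build #" + buildId + ": " + changes.length);
    changes.forEach(function(c) {
      console.log("  " + (c.id || "").substring(0, 8) + "  " +
        (c.message || "").split("\n")[0] +
        " (" + (c.author ? c.author.displayName : "unknown") + ")");
    });
  });
});
req.on("error", function(err) { console.error("Error:", err.message); });
req.end();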