Selenium Integration with Azure DevOps
Run Selenium browser tests in Azure Pipelines with headless Chrome, cross-browser matrices, and automated test result publishing
Selenium remains the most battle-tested browser automation framework in production use today. When paired with Azure DevOps Pipelines, you get a robust end-to-end testing pipeline that catches UI regressions before they reach your users. This article covers the full integration path: from writing Selenium tests in Node.js to running them headless in CI, capturing screenshots on failure, publishing JUnit results, and scaling across browsers with matrix strategies.
Prerequisites
- Node.js 18+ installed locally
- An Azure DevOps organization with a project configured
- Basic familiarity with Azure Pipelines YAML syntax
- Chrome or Chromium installed (for local development)
- A web application to test (we will use a simple Express app)
Selenium WebDriver with Node.js Setup
Start by initializing your project and installing the core dependencies:
npm init -y
npm install --save-dev selenium-webdriver chromedriver mocha chai
npm install --save-dev mocha-junit-reporter
The selenium-webdriver package is the official JavaScript binding. chromedriver provides the ChromeDriver binary that Selenium needs to control Chrome. We use Mocha as the test runner and mocha-junit-reporter to produce JUnit XML that Azure DevOps can consume.
Your package.json scripts section should look like this:
{
"scripts": {
"test:selenium": "mocha --timeout 30000 --reporter mocha-junit-reporter --reporter-options mochaFile=./test-results/selenium-results.xml tests/selenium/**/*.test.js",
"test:selenium:local": "mocha --timeout 30000 tests/selenium/**/*.test.js"
}
}
The timeout is critical. Browser tests are inherently slower than unit tests, and 30 seconds gives each test case enough room to complete without masking actual failures.
Writing Browser Tests with selenium-webdriver
Here is a basic test that navigates to a page and verifies the title:
var webdriver = require("selenium-webdriver");
var chrome = require("selenium-webdriver/chrome");
var assert = require("chai").assert;
describe("Homepage", function () {
var driver;
before(function () {
var options = new chrome.Options();
if (process.env.CI) {
options.addArguments("--headless=new");
options.addArguments("--no-sandbox");
options.addArguments("--disable-dev-shm-usage");
options.addArguments("--disable-gpu");
options.addArguments("--window-size=1920,1080");
}
driver = new webdriver.Builder()
.forBrowser("chrome")
.setChromeOptions(options)
.build();
});
after(function () {
return driver.quit();
});
it("should display the correct page title", function () {
return driver.get("http://localhost:3000").then(function () {
return driver.getTitle();
}).then(function (title) {
assert.equal(title, "My Application");
});
});
it("should show the navigation menu", function () {
return driver.get("http://localhost:3000").then(function () {
return driver.findElement(webdriver.By.css("nav.main-nav"));
}).then(function (nav) {
return nav.isDisplayed();
}).then(function (visible) {
assert.isTrue(visible);
});
});
});
Note the conditional headless flags. Locally you want to see the browser. In CI you need headless mode. The --no-sandbox and --disable-dev-shm-usage flags are non-negotiable in containerized CI environments — without them Chrome will crash silently.
Configuring Headless Chrome for CI
The three flags that matter most in CI are:
| Flag | Purpose |
|---|---|
| --headless=new | Runs Chrome without a visible window (the new headless mode is more stable) |
| --no-sandbox | Disables sandboxing — required when running as root in containers |
| --disable-dev-shm-usage | Uses /tmp instead of /dev/shm, which is often too small in Docker |
I wrap the driver creation in a helper module so every test file does not repeat this logic:
// tests/selenium/helpers/driverFactory.js
var webdriver = require("selenium-webdriver");
var chrome = require("selenium-webdriver/chrome");
var firefox = require("selenium-webdriver/firefox");
function createDriver(browserName) {
var builder = new webdriver.Builder();
browserName = browserName || process.env.BROWSER || "chrome";
if (browserName === "chrome") {
var chromeOptions = new chrome.Options();
if (process.env.CI) {
chromeOptions.addArguments("--headless=new");
chromeOptions.addArguments("--no-sandbox");
chromeOptions.addArguments("--disable-dev-shm-usage");
chromeOptions.addArguments("--disable-gpu");
chromeOptions.addArguments("--window-size=1920,1080");
}
builder.setChromeOptions(chromeOptions);
}
if (browserName === "firefox") {
var firefoxOptions = new firefox.Options();
if (process.env.CI) {
firefoxOptions.addArguments("--headless");
firefoxOptions.addArguments("--width=1920");
firefoxOptions.addArguments("--height=1080");
}
builder.setFirefoxOptions(firefoxOptions);
}
return builder.forBrowser(browserName).build();
}
module.exports = { createDriver: createDriver };
This factory pattern lets you switch browsers with a single environment variable, which is exactly what the pipeline matrix strategy will use.
Azure Pipeline Configuration for Selenium
Here is a pipeline YAML that installs dependencies, starts the app, runs the Selenium tests, and publishes results:
trigger:
- master
pool:
vmImage: "ubuntu-latest"
steps:
- task: NodeTool@0
inputs:
versionSpec: "20.x"
displayName: "Install Node.js"
- script: npm ci
displayName: "Install dependencies"
- script: |
npm start &
sleep 5
displayName: "Start application"
env:
NODE_ENV: test
PORT: 3000
- script: npm run test:selenium
displayName: "Run Selenium tests"
env:
CI: true
BASE_URL: "http://localhost:3000"
- task: PublishTestResults@2
condition: always()
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/selenium-results.xml"
testRunTitle: "Selenium Browser Tests"
mergeTestResults: true
displayName: "Publish test results"
A few details worth calling out. The npm start & runs the app in the background so the tests can hit it. The sleep 5 gives the server time to bind to the port. The condition: always() on the publish step ensures results are published even when tests fail — this is essential because failed test runs are exactly when you need the results most.
Publishing Selenium Test Results
The PublishTestResults@2 task reads JUnit XML and creates test runs in Azure DevOps. You can view individual test cases, their duration, and pass/fail status directly in the pipeline summary.
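For reference, the file mocha-junit-reporter writes looks roughly like this (abridged; suite names, counts, and timings are illustrative). PublishTestResults@2 maps each testcase element to an individual test case in the run:

```xml
<testsuites name="Mocha Tests" tests="2" failures="1">
  <testsuite name="Homepage" tests="2" failures="1" time="8.41">
    <testcase classname="Homepage" name="should display the correct page title" time="3.12"/>
    <testcase classname="Homepage" name="should show the navigation menu" time="5.29">
      <failure message="expected false to be true">AssertionError: expected false to be true</failure>
    </testcase>
  </testsuite>
</testsuites>
```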
For richer reporting, configure multiple reporter outputs:
{
"scripts": {
"test:selenium": "mocha --timeout 30000 --reporter mocha-multi-reporters --reporter-options configFile=reporter-config.json tests/selenium/**/*.test.js"
}
}
reporter-config.json:
{
"reporterEnabled": "spec, mocha-junit-reporter",
"mochaJunitReporterReporterOptions": {
"mochaFile": "./test-results/selenium-results.xml",
"attachments": true,
"outputs": true
}
}
Install the multi-reporter package:
npm install --save-dev mocha-multi-reporters
This gives you console output during the run and JUnit XML for Azure DevOps.
Screenshot Capture on Failure
Screenshots are the single most valuable artifact in browser test debugging. When a test fails in CI, you cannot see what the browser showed. A screenshot tells you immediately whether you are dealing with a missing element, a layout break, or a loading error.
// tests/selenium/helpers/screenshotHelper.js
var fs = require("fs");
var path = require("path");
var SCREENSHOT_DIR = path.join(__dirname, "..", "..", "..", "test-results", "screenshots");
function ensureDir(dir) {
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
}
function captureScreenshot(driver, testName) {
ensureDir(SCREENSHOT_DIR);
var sanitized = testName.replace(/[^a-zA-Z0-9-_]/g, "_");
var filename = sanitized + "_" + Date.now() + ".png";
var filepath = path.join(SCREENSHOT_DIR, filename);
return driver.takeScreenshot().then(function (data) {
fs.writeFileSync(filepath, data, "base64");
console.log("Screenshot saved: " + filepath);
return filepath;
});
}
module.exports = { captureScreenshot: captureScreenshot };
Hook it into your test lifecycle with an afterEach:
var screenshotHelper = require("./helpers/screenshotHelper");
afterEach(function () {
var test = this.currentTest;
if (test.state === "failed") {
return screenshotHelper.captureScreenshot(driver, test.fullTitle());
}
});
Then publish the screenshots as pipeline artifacts:
- task: PublishPipelineArtifact@1
condition: always()
inputs:
targetPath: "test-results/screenshots"
artifactName: "selenium-screenshots"
displayName: "Publish failure screenshots"
Now every failed test produces a screenshot you can download directly from the pipeline run.
Selenium Grid with Docker in Pipelines
For environments where you need multiple browser versions or want to parallelize execution, Selenium Grid is the way to go. Azure Pipelines can run Docker containers as services:
resources:
containers:
- container: selenium-hub
image: selenium/hub:4.15
ports:
- 4442:4442
- 4443:4443
- 4444:4444
- container: chrome-node
image: selenium/node-chrome:4.15
env:
SE_EVENT_BUS_HOST: selenium-hub
SE_EVENT_BUS_PUBLISH_PORT: 4442
SE_EVENT_BUS_SUBSCRIBE_PORT: 4443
services:
selenium-hub: selenium-hub
chrome-node: chrome-node
Point your tests at the Grid hub instead of a local browser:
function createRemoteDriver(browserName) {
var gridUrl = process.env.SELENIUM_GRID_URL || "http://localhost:4444/wd/hub";
return new webdriver.Builder()
.usingServer(gridUrl)
.forBrowser(browserName || "chrome")
.build();
}
The Grid approach scales better than local drivers when you need to run dozens of browser tests in parallel.
Cross-Browser Testing Matrix
Azure Pipelines matrix strategies let you run the same tests across multiple browsers in parallel:
strategy:
matrix:
Chrome:
BROWSER: "chrome"
DRIVER_PKG: "chromedriver"
Firefox:
BROWSER: "firefox"
DRIVER_PKG: "geckodriver"
steps:
- task: NodeTool@0
inputs:
versionSpec: "20.x"
- script: |
npm ci
npm install --save-dev $(DRIVER_PKG)
displayName: "Install dependencies"
- script: npm run test:selenium
env:
CI: true
BROWSER: $(BROWSER)
displayName: "Run Selenium tests - $(BROWSER)"
- task: PublishTestResults@2
condition: always()
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/selenium-results.xml"
testRunTitle: "Selenium - $(BROWSER)"
Each matrix leg runs as a separate job, so Chrome and Firefox tests execute simultaneously. The BROWSER environment variable feeds into the driver factory shown earlier.
Page Object Model Pattern
Raw Selenium calls scattered across test files become unmaintainable fast. The Page Object Model (POM) encapsulates page structure and interactions in dedicated classes:
// tests/selenium/pages/LoginPage.js
var webdriver = require("selenium-webdriver");
var By = webdriver.By;
var until = webdriver.until;
function LoginPage(driver) {
this.driver = driver;
this.url = (process.env.BASE_URL || "http://localhost:3000") + "/login";
this.selectors = {
emailInput: By.id("email"),
passwordInput: By.id("password"),
submitButton: By.css("button[type='submit']"),
errorMessage: By.css(".alert-danger"),
welcomeMessage: By.css(".welcome-text")
};
}
LoginPage.prototype.navigate = function () {
return this.driver.get(this.url);
};
LoginPage.prototype.enterEmail = function (email) {
var el = this.driver.findElement(this.selectors.emailInput);
return el.clear().then(function () {
return el.sendKeys(email);
});
};
LoginPage.prototype.enterPassword = function (password) {
var el = this.driver.findElement(this.selectors.passwordInput);
return el.clear().then(function () {
return el.sendKeys(password);
});
};
LoginPage.prototype.clickSubmit = function () {
return this.driver.findElement(this.selectors.submitButton).click();
};
LoginPage.prototype.login = function (email, password) {
var self = this;
return self.navigate()
.then(function () { return self.enterEmail(email); })
.then(function () { return self.enterPassword(password); })
.then(function () { return self.clickSubmit(); });
};
LoginPage.prototype.getErrorMessage = function () {
return this.driver.wait(
until.elementLocated(this.selectors.errorMessage),
5000
).then(function (el) {
return el.getText();
});
};
LoginPage.prototype.getWelcomeMessage = function () {
return this.driver.wait(
until.elementLocated(this.selectors.welcomeMessage),
5000
).then(function (el) {
return el.getText();
});
};
module.exports = LoginPage;
Using it in a test:
var assert = require("chai").assert;
var driverFactory = require("./helpers/driverFactory");
var LoginPage = require("./pages/LoginPage");
describe("Login Page", function () {
var driver;
var loginPage;
before(function () {
driver = driverFactory.createDriver();
loginPage = new LoginPage(driver);
});
after(function () {
return driver.quit();
});
it("should reject invalid credentials", function () {
return loginPage.login("[email protected]", "wrongpassword")
.then(function () {
return loginPage.getErrorMessage();
})
.then(function (text) {
assert.include(text, "Invalid");
});
});
it("should accept valid credentials", function () {
return loginPage.login("[email protected]", "correctpassword")
.then(function () {
return loginPage.getWelcomeMessage();
})
.then(function (text) {
assert.include(text, "Welcome");
});
});
});
Page objects keep your selectors in one place. When the login form changes, you update one file instead of hunting through fifty test cases.
Wait Strategies and Flaky Test Prevention
Flaky tests are the number one reason teams abandon browser testing. Almost every flaky Selenium test comes down to timing — the test tried to interact with an element before the page finished rendering.
Explicit waits are mandatory. Never use implicit waits or sleep():
var webdriver = require("selenium-webdriver");
var until = webdriver.until;
var By = webdriver.By;
// Bad - arbitrary sleep
function badWait(driver) {
return driver.sleep(3000).then(function () {
return driver.findElement(By.id("results"));
});
}
// Good - explicit wait for condition
function goodWait(driver) {
return driver.wait(
until.elementLocated(By.id("results")),
10000,
"Results container did not appear within 10 seconds"
);
}
// Better - wait for element to be visible, not just in DOM
function betterWait(driver) {
var locator = By.id("results");
return driver.wait(until.elementLocated(locator), 10000)
.then(function (element) {
return driver.wait(until.elementIsVisible(element), 5000);
});
}
// Best - custom wait condition for dynamic content
function waitForResultCount(driver, expectedCount) {
return driver.wait(function () {
return driver.findElements(By.css(".result-item"))
.then(function (elements) {
return elements.length >= expectedCount;
});
}, 15000, "Expected at least " + expectedCount + " results");
}
Other anti-flakiness strategies:
- Retry failed tests once. Mocha supports this.retries(1) — use it sparingly and only for tests that interact with external services.
- Reset state between tests. Clear cookies and local storage, and navigate to a clean starting point.
- Avoid testing animations. Disable CSS transitions in your test environment with a global stylesheet override.
- Use unique test data. Timestamp your test inputs so parallel runs never collide.
// Disable animations in test environment
function disableAnimations(driver) {
var css = "*, *::before, *::after { " +
"transition-duration: 0s !important; " +
"animation-duration: 0s !important; " +
"transition-delay: 0s !important; }";
return driver.executeScript(
"var style = document.createElement('style');" +
"style.textContent = arguments[0];" +
"document.head.appendChild(style);",
css
);
}
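Two of the bullets above, resetting state and generating unique test data, also fit in small helpers. A sketch; the driver calls mirror the selenium-webdriver API used throughout this article, and the file and function names are made up:

```javascript
// tests/selenium/helpers/stateHelpers.js — anti-flakiness helpers.

// Clear cookies plus web storage; call from an afterEach hook. Assumes the
// driver is currently on an http(s) page, since storage is per-origin.
function resetBrowserState(driver) {
  return driver.manage().deleteAllCookies().then(function () {
    return driver.executeScript(
      "window.localStorage.clear(); window.sessionStorage.clear();"
    );
  });
}

// Suffix inputs with a timestamp and random tail so parallel runs never
// collide on the same records.
function unique(prefix) {
  return prefix + "_" + Date.now() + "_" + Math.random().toString(36).slice(2, 8);
}

module.exports = { resetBrowserState: resetBrowserState, unique: unique };
```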
Parallel Browser Test Execution
Running all your tests sequentially is slow. There are two levels of parallelism you can exploit.
Test-level parallelism with Mocha's --parallel flag:
{
"scripts": {
"test:selenium:parallel": "mocha --parallel --jobs 4 --timeout 30000 tests/selenium/**/*.test.js"
}
}
Each worker gets its own browser instance. Make sure your tests are fully independent — no shared state, no sequential dependencies.
Pipeline-level parallelism with Azure DevOps matrix or multiple jobs:
strategy:
matrix:
Suite_Auth:
TEST_SUITE: "tests/selenium/auth/**/*.test.js"
Suite_Dashboard:
TEST_SUITE: "tests/selenium/dashboard/**/*.test.js"
Suite_Checkout:
TEST_SUITE: "tests/selenium/checkout/**/*.test.js"
steps:
- script: |
mocha --timeout 30000 \
--reporter mocha-junit-reporter \
--reporter-options mochaFile=./test-results/results.xml \
$(TEST_SUITE)
env:
CI: true
displayName: "Run $(TEST_SUITE)"
This splits your test suites across separate pipeline agents, cutting total execution time proportionally.
Integrating Results with Azure Test Plans
If your organization uses Azure Test Plans for manual and automated test tracking, you can associate Selenium tests with test cases:
- task: PublishTestResults@2
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/selenium-results.xml"
testRunTitle: "Selenium Automated Tests"
buildPlatform: "Linux"
buildConfiguration: "Chrome-Headless"
mergeTestResults: true
You can also use the Azure DevOps REST API to programmatically associate test results with specific test case work items:
// scripts/publish-test-association.js
var https = require("https");
function associateTestResult(testCaseId, testRunId, resultId, pat) {
var org = process.env.AZURE_ORG;
var project = process.env.AZURE_PROJECT;
var options = {
hostname: "dev.azure.com",
// The test results update API takes the run id in the path and an array of
// result updates (each carrying its own result id) in the body.
path: "/" + org + "/" + project + "/_apis/test/runs/" + testRunId + "/results?api-version=7.1",
method: "PATCH",
headers: {
"Content-Type": "application/json",
"Authorization": "Basic " + Buffer.from(":" + pat).toString("base64")
}
};
var body = JSON.stringify([{
id: resultId,
testCase: { id: testCaseId },
automatedTestName: "selenium.login.validCredentials",
outcome: "Passed"
}]);
var req = https.request(options, function (res) {
console.log("Association status: " + res.statusCode);
});
req.on("error", function (err) {
console.error("Association failed: " + err.message);
});
}
This bridges your automated Selenium tests with the broader Test Plans tracking, giving QA managers visibility into automation coverage.
BrowserStack and Sauce Labs Integration
When you need to test against browsers and operating systems that Azure-hosted agents do not provide — Safari on macOS, Edge on Windows, mobile browsers — cloud browser services fill the gap.
BrowserStack integration:
// tests/selenium/helpers/cloudDriverFactory.js
var webdriver = require("selenium-webdriver");
function createBrowserStackDriver(capabilities) {
var username = process.env.BROWSERSTACK_USERNAME;
var accessKey = process.env.BROWSERSTACK_ACCESS_KEY;
var gridUrl = "https://" + username + ":" + accessKey +
"@hub-cloud.browserstack.com/wd/hub";
var caps = Object.assign({
"bstack:options": {
os: "Windows",
osVersion: "11",
buildName: process.env.BUILD_BUILDNUMBER || "local",
sessionName: "Selenium Test",
local: "false",
seleniumVersion: "4.15.0"
}
}, capabilities);
return new webdriver.Builder()
.usingServer(gridUrl)
.withCapabilities(caps)
.build();
}
module.exports = { createBrowserStackDriver: createBrowserStackDriver };
Sauce Labs integration:
function createSauceLabsDriver(capabilities) {
var username = process.env.SAUCE_USERNAME;
var accessKey = process.env.SAUCE_ACCESS_KEY;
var region = process.env.SAUCE_REGION || "us-west-1";
var gridUrl = "https://" + username + ":" + accessKey +
"@ondemand." + region + ".saucelabs.com/wd/hub";
var caps = Object.assign({
"sauce:options": {
build: process.env.BUILD_BUILDNUMBER || "local",
name: "Selenium Suite"
}
}, capabilities);
return new webdriver.Builder()
.usingServer(gridUrl)
.withCapabilities(caps)
.build();
}
Store credentials as Azure DevOps pipeline variables (marked as secret) and pass them as environment variables:
- script: npm run test:selenium
env:
BROWSERSTACK_USERNAME: $(BROWSERSTACK_USERNAME)
BROWSERSTACK_ACCESS_KEY: $(BROWSERSTACK_ACCESS_KEY)
USE_CLOUD_BROWSER: true
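To tie this back to the earlier factory, one option is a small dispatcher that picks a cloud or local driver from the environment. A sketch; the factory functions are injected so the selection logic stays testable without a browser, and USE_CLOUD_BROWSER is the flag from the pipeline step above:

```javascript
// tests/selenium/helpers/chooseDriver.js — route to a cloud or local driver
// factory based on environment variables. Factories are passed in (e.g. the
// createDriver and createBrowserStackDriver functions from earlier sections).
function chooseDriver(factories, env) {
  env = env || process.env;
  if (env.USE_CLOUD_BROWSER === "true" && env.BROWSERSTACK_USERNAME) {
    // Cloud run: hand the browser choice over as a capability.
    return factories.cloud({ browserName: env.BROWSER || "chrome" });
  }
  // Default: local (or Grid) driver via the regular factory.
  return factories.local(env.BROWSER);
}
module.exports = chooseDriver;
```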
Complete Working Example
Here is a full Selenium test suite for a Node.js web application running in Azure Pipelines. The project structure:
project/
tests/
selenium/
helpers/
driverFactory.js
screenshotHelper.js
pages/
HomePage.js
SearchPage.js
home.test.js
search.test.js
test-results/
azure-pipelines.yml
package.json
reporter-config.json
tests/selenium/pages/HomePage.js:
var webdriver = require("selenium-webdriver");
var By = webdriver.By;
var until = webdriver.until;
function HomePage(driver) {
this.driver = driver;
this.baseUrl = process.env.BASE_URL || "http://localhost:3000";
this.selectors = {
heroTitle: By.css("h1.hero-title"),
searchInput: By.id("search-input"),
searchButton: By.id("search-btn"),
navLinks: By.css("nav a"),
featuredCards: By.css(".featured-card")
};
}
HomePage.prototype.navigate = function () {
return this.driver.get(this.baseUrl);
};
HomePage.prototype.getHeroTitle = function () {
return this.driver.wait(
until.elementLocated(this.selectors.heroTitle), 5000
).then(function (el) {
return el.getText();
});
};
HomePage.prototype.search = function (query) {
var self = this;
return self.driver.findElement(self.selectors.searchInput)
.then(function (input) {
return input.clear().then(function () {
return input.sendKeys(query);
});
})
.then(function () {
return self.driver.findElement(self.selectors.searchButton).click();
});
};
HomePage.prototype.getNavLinkCount = function () {
return this.driver.findElements(this.selectors.navLinks)
.then(function (links) {
return links.length;
});
};
HomePage.prototype.getFeaturedCardCount = function () {
return this.driver.wait(
until.elementsLocated(this.selectors.featuredCards), 5000
).then(function (cards) {
return cards.length;
});
};
module.exports = HomePage;
tests/selenium/pages/SearchPage.js:
var webdriver = require("selenium-webdriver");
var By = webdriver.By;
var until = webdriver.until;
function SearchPage(driver) {
this.driver = driver;
this.selectors = {
resultItems: By.css(".search-result"),
resultTitle: By.css(".search-result h3"),
noResults: By.css(".no-results-message"),
resultCount: By.id("result-count")
};
}
SearchPage.prototype.waitForResults = function () {
var self = this;
return this.driver.wait(function () {
return self.driver.findElements(self.selectors.resultItems)
.then(function (items) {
return items.length > 0;
});
}, 10000, "Search results did not load");
};
SearchPage.prototype.getResultCount = function () {
return this.driver.findElements(this.selectors.resultItems)
.then(function (items) {
return items.length;
});
};
SearchPage.prototype.getFirstResultTitle = function () {
return this.driver.findElement(this.selectors.resultTitle)
.then(function (el) {
return el.getText();
});
};
SearchPage.prototype.hasNoResultsMessage = function () {
return this.driver.findElements(this.selectors.noResults)
.then(function (elements) {
return elements.length > 0;
});
};
module.exports = SearchPage;
tests/selenium/home.test.js:
var assert = require("chai").assert;
var driverFactory = require("./helpers/driverFactory");
var screenshotHelper = require("./helpers/screenshotHelper");
var HomePage = require("./pages/HomePage");
describe("Homepage Tests", function () {
var driver;
var homePage;
before(function () {
driver = driverFactory.createDriver();
homePage = new HomePage(driver);
});
afterEach(function () {
var test = this.currentTest;
if (test.state === "failed") {
return screenshotHelper.captureScreenshot(driver, test.fullTitle());
}
});
after(function () {
return driver.quit();
});
it("should load the homepage successfully", function () {
return homePage.navigate().then(function () {
return homePage.getHeroTitle();
}).then(function (title) {
assert.isNotEmpty(title);
});
});
it("should display navigation links", function () {
return homePage.navigate().then(function () {
return homePage.getNavLinkCount();
}).then(function (count) {
assert.isAtLeast(count, 3, "Should have at least 3 nav links");
});
});
it("should show featured content cards", function () {
return homePage.navigate().then(function () {
return homePage.getFeaturedCardCount();
}).then(function (count) {
assert.isAtLeast(count, 1, "Should have at least 1 featured card");
});
});
it("should perform a search from the homepage", function () {
return homePage.navigate().then(function () {
return homePage.search("node.js tutorial");
}).then(function () {
return driver.getCurrentUrl();
}).then(function (url) {
assert.include(url, "search");
});
});
});
tests/selenium/search.test.js:
var assert = require("chai").assert;
var driverFactory = require("./helpers/driverFactory");
var screenshotHelper = require("./helpers/screenshotHelper");
var HomePage = require("./pages/HomePage");
var SearchPage = require("./pages/SearchPage");
describe("Search Feature Tests", function () {
var driver;
var homePage;
var searchPage;
before(function () {
driver = driverFactory.createDriver();
homePage = new HomePage(driver);
searchPage = new SearchPage(driver);
});
afterEach(function () {
var test = this.currentTest;
if (test.state === "failed") {
return screenshotHelper.captureScreenshot(driver, test.fullTitle());
}
});
after(function () {
return driver.quit();
});
it("should return results for valid search query", function () {
return homePage.navigate()
.then(function () { return homePage.search("javascript"); })
.then(function () { return searchPage.waitForResults(); })
.then(function () { return searchPage.getResultCount(); })
.then(function (count) {
assert.isAtLeast(count, 1);
});
});
it("should show no results message for garbage query", function () {
return homePage.navigate()
.then(function () { return homePage.search("zzzzxxxxxqqqq12345"); })
// Explicit wait instead of a fixed sleep, per the wait-strategy section
.then(function () {
return driver.wait(function () {
return searchPage.hasNoResultsMessage();
}, 10000, "No results message did not appear");
})
.then(function (hasMessage) {
assert.isTrue(hasMessage, "Should show no results message");
});
});
it("should display result titles", function () {
return homePage.navigate()
.then(function () { return homePage.search("api"); })
.then(function () { return searchPage.waitForResults(); })
.then(function () { return searchPage.getFirstResultTitle(); })
.then(function (title) {
assert.isNotEmpty(title);
});
});
});
azure-pipelines.yml (complete):
trigger:
- master
variables:
BASE_URL: "http://localhost:3000"
strategy:
matrix:
Chrome:
BROWSER: "chrome"
Firefox:
BROWSER: "firefox"
pool:
vmImage: "ubuntu-latest"
steps:
- task: NodeTool@0
inputs:
versionSpec: "20.x"
displayName: "Install Node.js 20"
- script: |
npm ci
if [ "$(BROWSER)" = "firefox" ]; then
npm install --save-dev geckodriver
fi
displayName: "Install dependencies"
- script: |
npm start &
echo "Waiting for server..."
for i in $(seq 1 30); do
curl -s http://localhost:3000 > /dev/null && break
sleep 1
done
echo "Server is ready"
displayName: "Start application"
env:
NODE_ENV: test
PORT: 3000
- script: npm run test:selenium
displayName: "Run Selenium tests ($(BROWSER))"
env:
CI: true
BROWSER: $(BROWSER)
BASE_URL: $(BASE_URL)
- task: PublishTestResults@2
condition: always()
inputs:
testResultsFormat: "JUnit"
testResultsFiles: "**/selenium-results.xml"
testRunTitle: "Selenium - $(BROWSER)"
mergeTestResults: true
displayName: "Publish test results"
- task: PublishPipelineArtifact@1
condition: failed()
inputs:
targetPath: "test-results/screenshots"
artifactName: "failure-screenshots-$(BROWSER)"
displayName: "Publish failure screenshots"
This pipeline starts the app, waits for it to be healthy using a curl loop (more reliable than a flat sleep), runs the full test suite in both Chrome and Firefox, publishes JUnit results regardless of outcome, and saves screenshots only when tests fail.
Common Issues and Troubleshooting
1. Chrome crashes with "DevToolsActivePort file doesn't exist"
This happens in Docker containers and CI environments where /dev/shm is too small. Add --disable-dev-shm-usage to your Chrome options. If that alone does not fix it, also add --disable-extensions and ensure you are not running multiple Chrome instances that exhaust memory.
2. Element click intercepted: another element would receive the click
A floating header, modal backdrop, or cookie consent banner is covering the element. Scroll the element into view first, or use JavaScript click as a fallback:
function safeClick(driver, element) {
// Scroll into view first, then try a native click; if an overlay still
// intercepts it, fall back to a JavaScript click.
return driver.executeScript("arguments[0].scrollIntoView({block: 'center'});", element)
.then(function () { return element.click(); })
.catch(function () {
return driver.executeScript("arguments[0].click();", element);
});
}
3. Tests pass locally but fail in CI with timeout errors
CI machines are slower. Increase your explicit wait timeouts (not the Mocha timeout) and make sure you are waiting for the right conditions. Also check that the application under test has finished starting — race conditions between app startup and test execution are extremely common.
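One pattern that helps here is centralizing wait timeouts and scaling them in CI, so slower agents get more headroom without editing every test. A sketch; the tier names and the 2x CI default are arbitrary, and WAIT_MULTIPLIER is an assumed environment variable:

```javascript
// tests/selenium/helpers/timeouts.js — central wait timeouts, scaled up when
// running in CI. WAIT_MULTIPLIER overrides the default factor.
function computeTimeouts(env) {
  env = env || process.env;
  var factor = env.WAIT_MULTIPLIER ? Number(env.WAIT_MULTIPLIER) : (env.CI ? 2 : 1);
  return {
    short: 5000 * factor,   // single element lookups
    medium: 10000 * factor, // page transitions
    long: 30000 * factor    // slow backend-driven views
  };
}
module.exports = computeTimeouts;
```

Tests then read the same everywhere, e.g. driver.wait(until.elementLocated(locator), timeouts.medium).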
4. StaleElementReferenceException during dynamic page updates
The DOM element was found, then the page re-rendered and the reference became stale. Re-locate the element after the page update:
var webdriver = require("selenium-webdriver");
var until = webdriver.until;
function waitAndRelocate(driver, locator, timeout) {
return driver.wait(until.elementLocated(locator), timeout)
.then(function () {
// Give the page up to a second to re-render. If the element never goes
// stale, the swallowed timeout is the happy path.
return driver.wait(until.stalenessOf(
driver.findElement(locator)
), 1000).catch(function () {});
})
.then(function () {
return driver.findElement(locator);
});
}
5. Firefox GeckoDriver version mismatch
GeckoDriver versions must match the installed Firefox version. Pin your GeckoDriver version in package.json or use webdriver-manager to auto-detect. In Azure Pipelines, the ubuntu-latest image ships Firefox but the version changes monthly — always install a specific version if stability matters. Note that recent Ubuntu images distribute Firefox as a snap, so plain apt pinning may not take effect; verify with firefox --version after the install step:
- script: |
sudo apt-get update
sudo apt-get install -y firefox=115.*
displayName: "Pin Firefox version"
6. Tests fail with "session not created: Chrome version must be between X and Y"
ChromeDriver and Chrome versions must match. Use the chromedriver npm package which auto-detects the installed Chrome version, or set CHROMEDRIVER_VERSION to pin it explicitly.
Best Practices
- Always use explicit waits. Implicit waits and sleep() are the root cause of most flaky tests. Wait for specific conditions — element visible, text present, URL changed.
- Capture screenshots on every failure. The two-second investment in a screenshot helper saves hours of CI debugging. Publish them as pipeline artifacts so they are always accessible.
- Keep your Page Object Models thin. A page object should expose what you can do on a page, not how. Methods like login(email, password) instead of typeIntoField("#email", value).
- Run Selenium tests in a separate pipeline stage. Browser tests are slow. Run unit tests first and only trigger the Selenium stage if unit tests pass. This saves agent minutes and gives faster feedback.
- Use environment variables for all configuration. Base URL, browser choice, grid URL, timeouts — none of these should be hardcoded. The same test code must run locally and in CI without modification.
- Limit your Selenium test scope. Do not try to cover every edge case with browser tests. Use them for critical user flows — login, checkout, search, signup. Push edge case coverage down to unit and integration tests where execution is 100x faster.
- Tag tests by priority and feature area. Use Mocha's grep option to run subsets of tests. Run smoke tests on every commit and the full suite on nightly builds.
- Clean up browser state between tests. Delete cookies, clear local storage, and navigate to a neutral page. Leftover state from a previous test is the second most common source of flakiness after timing issues.
- Version-lock your browser drivers. Auto-detected driver versions will break when the CI image updates its browser. Pin specific versions and update deliberately through pull requests.
- Monitor test execution time trends. A test suite that takes 2 minutes today and 8 minutes next month has a problem. Set pipeline timeout gates and investigate sudden jumps in duration.