This document covers how to evaluate and improve Selenium tests, page objects, and locators. It provides rubrics that grade tests on a 100-point scale, deducting points for issues like calling Selenium commands directly or using hard-coded waits. Page objects are graded on lines of code, assertions, and waits; locators are given letter grades based on attributes like being dynamic, tied to layout, or using semantic IDs. An example shows overall grades for a test suite, individual tests, and the locators within tests. The document concludes by proposing an IntelliJ plugin that implements the rubrics to automate evaluation and reporting.
3. What We’ll Cover
• Good/bad Tests, Testing Rubric
• Good/bad Page Objects, PO Rubric
• Good/Bad Locators, Locator Rubric
• Report Card
• A Concept For Automating This
4. Who, Me? Okay
• Marcus Merrell (@mmerrell)
• Manager, Web Platform Engineering Team at RetailMeNot, Inc.
• QA Test Automation from 2001-2015
• Selenium/WebDriver since 2007
• The “bass player” in all things
6. What Makes a Bad Test?
• Fails For No Good Reason
• Changes in AUT, Need to Update Test(s)
• Slow
• Unreliable
• Hard to Understand and Maintain
7. What Makes a Good Test?
• Written for BDD or xUnit test framework
• Test one thing (atomic)
• Each test can be run independently (autonomous, “idempotent”)
• Anyone can understand what it is doing (easy to read)
• Similar tests are grouped together
• Centralized setup/teardown
• Uses Page Objects
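The checklist above can be sketched as a minimal test shape: one atomic check, one assertion, no raw Selenium calls, and all page interaction behind a page object. The `LoginPage`/`HomePage` classes here are hypothetical stand-in stubs so the shape is visible without a browser; a real page object would wrap a WebDriver instance.

```python
# A sketch of the "good test" shape, assuming hypothetical page objects.

class LoginPage:
    """Stub page object; a real one would drive the browser via WebDriver."""
    def log_in(self, username, password):
        # A real implementation would type into fields, click submit,
        # and return the page object for the page you land on.
        return HomePage(logged_in_user=username)

class HomePage:
    def __init__(self, logged_in_user):
        self.logged_in_user = logged_in_user

    def greeting_name(self):
        return self.logged_in_user

def test_happy_path_login():
    # Atomic and easy to read: no locators, waits, or conditionals here.
    home = LoginPage().log_in("alice", "s3cret")
    assert home.greeting_name() == "alice"

test_happy_path_login()
```

The test never touches a driver or a locator, so changes in the AUT only require updating the page object, not every test that uses it.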
9. A Testing Rubric
(Each test starts with 100 points, deduct as necessary)
Item                     | Belongs in tests? | Score
Selenium Commands        | No                | -3 per (max of -9)
Locators                 | No                | -2 per (max of -8)
Selenium setup/teardown* | No                | -20
Hard-coded sleeps        | No                | -5 per (max of -20)
Implicit wait calls      | No                | -10
Explicit Wait calls      | No                | -3
Conditionals             | No                | -5 (max of -20)
Calls to Page Objects    | Yes               | N/A
Assertion(s)*            | Yes               | N/A
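The rubric above is mechanical enough to express as code: start at 100 and apply per-item deductions with the caps from the table. In practice the counts would come from a static scan of the test source; this sketch just takes them as input.

```python
# A sketch of the testing rubric: per-occurrence deductions with caps,
# plus flat deductions for items the table penalizes once.

CAPPED = {  # item: (points per occurrence, max total deduction)
    "selenium_commands": (3, 9),
    "locators": (2, 8),
    "hard_coded_sleeps": (5, 20),
    "conditionals": (5, 20),
}
FLAT = {    # item: flat deduction if present at all
    "selenium_setup_teardown": 20,
    "implicit_wait_calls": 10,
    "explicit_wait_calls": 3,
}

def score_test(counts):
    score = 100
    for item, (per, cap) in CAPPED.items():
        score -= min(counts.get(item, 0) * per, cap)
    for item, penalty in FLAT.items():
        if counts.get(item, 0):
            score -= penalty
    return score

# e.g., two raw Selenium commands and one hard-coded sleep:
print(score_test({"selenium_commands": 2, "hard_coded_sleeps": 1}))  # 89
```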
12. What Makes a Bad PO?
• Gigantic (i.e., hundreds or thousands of lines of code)
• Over-reaching responsibility
• Contains overly complicated logic
• Returns too little or too much information to the test (leaky abstraction)
• Assertions happening in the PO instead of the test
13. What Makes a Good PO?
• Contains state (e.g., locators)
• Contains behavior (e.g., methods to interact with the page)
• Returns some information about the page (e.g., new page object, text from the page, a boolean result for some check, etc. — never a WebElement)
• Verifies page-ready state as part of initialization, with a found element
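The points above can be sketched as a small page object: locators stored as fields, a page-ready check against a found element at construction time, and methods that return page information rather than a WebElement. The driver is duck-typed here and the locator names are hypothetical; locator tuples mirror Selenium's `(By, value)` convention.

```python
# A sketch of the "good page object" shape, assuming a hypothetical search page.

class SearchPage:
    # State: locators kept in one place, reused throughout the PO.
    READY_MARKER = ("id", "search-form")   # hypothetical IDs
    SEARCH_BOX = ("id", "q")
    RESULT_COUNT = ("id", "result-count")

    def __init__(self, driver):
        self.driver = driver
        # Verify page-ready state with a found element, not a URL or title.
        if driver.find_element(*self.READY_MARKER) is None:
            raise RuntimeError("SearchPage never became ready")

    # Behavior: methods that interact with the page; void is fine for forms.
    def search_for(self, term):
        self.driver.find_element(*self.SEARCH_BOX).send_keys(term)

    # Return information about the page — never the raw WebElement.
    def result_count(self):
        return int(self.driver.find_element(*self.RESULT_COUNT).text)
```

Because `result_count()` returns an `int` rather than an element, the test can assert on it without knowing anything about the page's structure.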
14. A Page Object Rubric
(Each Page Object starts with 100, deduct points as necessary)
Item                     | Belongs in POs? | Score
> 200 lines of code      | No              | -5 per 50 lines over
Assertions               | No*             | -5 per (max of -20)
Hard-coded sleeps        | No              | -5 per (max of -20)
Implicit wait calls      | No              | -10
Explicit waits           | Yes             | N/A
Verify page ready state* | Yes             | -8 if not verifying an element
Locators*                | Yes             | N/A
http://se.tips/se-waiting-jim-evans
16. Locator Strategies
• Class
• CSS selectors
• ID
• Link Text
• Partial Link Text
• Tag Name
• XPath
Good locators are:
• unique
• descriptive
• unlikely to change
That rules a few of these out
Start with IDs and Classes
Use CSS or XPath (with care)
20. A Locator Rubric
(Each locator gets a grade)
Item                 | Letter Grade | E.g.
Dynamic locators     | D-           | User-account-specific, or tied to page render
Tied to page layout  | D            | XPath: / / /; CSS: > > >, etc.
Text on the page     | C            | Link text, page copy
Reasonable traversal | B            | Parent to child w/in an element node
Using semantic name  | B+           | Input labels (name='username')
Semantic ID          | A            | Unique, descriptive, unlikely to change
Non-unique locator?  | -2 full letter grades
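The letter grades above can be approximated with simple string heuristics. Real grading needs page context (e.g., whether a locator is actually unique), so these checks only approximate the "tied to layout" and "semantic" rows of the rubric; the strategy names mirror Selenium's locator strategies.

```python
# A rough sketch of the locator rubric as string heuristics.
import re

def grade_locator(strategy, value):
    if strategy == "id":
        return "A"    # semantic ID: the gold standard
    if strategy in ("link_text", "partial_link_text"):
        return "C"    # page copy changes; brittle
    if strategy == "xpath" and value.startswith("/") and not value.startswith("//"):
        return "D"    # absolute path from the root: tied to page layout
    if strategy == "css" and ">" in value:
        return "D"    # long child chains: tied to page layout
    if strategy == "name" or re.search(r"\[@name=", value):
        return "B+"   # semantic name, e.g. name='username'
    return "B"        # otherwise assume a reasonable traversal

print(grade_locator("xpath", "/html/body/div[2]/span"))  # D
print(grade_locator("id", "filterCode"))                 # A
```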
24. Example Report
Drill-down
Test Suite | Test Name     | Test | PO  | Locators | Overall
User Login | Happy Path    | 87   | 90  | 90       | 89.4
User Login | Invalid Email | 92   | 78  | 93       | 88.3
User Login | Insecure PW   | 99   | 68  | 91       | 85.7
User Login | Mismatched PW | 75   | 56  | 90       | 76.8
User Login | Missing Req'd | 82   | 34  | 95       | 74.1
Store Page | Offer Sort    | 91   | 62  | 85       | 79.3
Store Page | Code-type     | 83   | 44  | 89       | 74.3
Store Page | Sale-type     | 87   | 62  | 87       | 79.5
Analytics  | Outclick      | 88   | N/A | N/A      | 88
Analytics  | Offer Rank    | 86   | N/A | N/A      | 86
Analytics  | Campaign      | 87   | N/A | N/A      | 87
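One detail worth noting in the report above is that the Analytics tests have no PO or locator scores, yet still get an Overall. The slides don't state the exact weighting (the Happy Path row, 87/90/90 → 89.4, implies the columns are not weighted equally), so this sketch simply averages whatever scores are present and will not reproduce the weighted rows exactly.

```python
# A sketch of rolling up an "Overall" score when some columns are N/A.
# N/A columns are passed as None and excluded from the mean.

def overall(test=None, po=None, locators=None):
    scores = [s for s in (test, po, locators) if s is not None]
    return round(sum(scores) / len(scores), 1)

print(overall(test=88))                        # 88.0 (Analytics-style row)
print(overall(test=87, po=90, locators=90))    # 89.0 (unweighted mean)
```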
25. Example Report
Test Level
Test Name | Step               | Locator                       | Score | Notes
Code-type | Login              | Xpath('//root/@home/@login')  | D     | Relative to root
Code-type | Search 'macys.com' | Css('class: q')               | B     | Semantic CSS
Code-type |                    | LinkText('submit')            | C     | LinkText brittle
Code-type | Click Macy's       | Id('macys.com')               | A     | Gold Standard
Code-type | Filter Codes       | Id('filterCode')              | A     | Gold Standard
Code-type | Click Top Code     | Xpath('@div["code-link"][0]') | C     | Brittle
26. Next Steps
• Simple to implement, e.g. IntelliJ plug-in
• Event Listener
• Java/JS SDKs in testing
• Reporting needs work (conceptually)
• Service Performance
Central setup/teardown with a clean instance for each test.
No inter-test dependencies (hard to quantify/observe at times).
The best thing to do is run tests in a random order.
Assertions are OK for verifying page-ready state (better to throw an exception, though).
For verifying page-ready state, be sure to use an element — using a URL or page title is such a bummer.
Implied method structure that maps to actions on the page.
The majority of methods return page information (e.g., a boolean result or a new page object) — text returns are a thing…
Some methods can be void (e.g., filling out a form).
Locators should be stored in field variables for reuse throughout a PO.