This document discusses the layers of automated acceptance testing and compares toolsets at each layer. The three layers are: 1) the Acceptance Criteria layer, which defines test scenarios in a business-readable format; 2) the Test Implementation layer, which connects those scenarios to executable code; and 3) the Application Driver layer, which drives the system under test. Popular tools such as Cucumber, Concordion and FitNesse are compared in terms of how they address each layer. The document closes with references and suggestions for further exploring related testing concepts.
2. Business facing tests
The acceptance test suite as a whole both verifies that the application delivers the business value expected by the customer and guards against regressions or defects that break pre-existing functions of the application. As such, the acceptance criteria must be executable specifications. They also pick up problems not found in unit or component tests. (See Farley & Humble, 2010.)
4. Layer 1: Acceptance Criteria

Feature:
  Potential customers need to be able to find company information, services and
  address details when searching on the internet

  Scenario:
    Given I am using google
    When I search for 'XXXXX'
    Then the top result should be 'YYYYYY'

  Scenario:
    Given I am using google
    When I search for 'QQQQQQQQ'
    Then the results include the text 'Y2Y2Y2Y2'
5. Many toolsets: but there are types
Cucumber (Ruby, Java, .NET and many other ports)
- uses plain text to arrange tests
Concordion (Java, .NET)
- uses HTML to arrange tests
FitNesse (Java, .NET)
- uses a wiki to arrange tests
7. Layer 2: Test Implementation Layer
• Ideally creatable, readable and reportable to the business
• Connects the layers above and below in a maintainable way
• Usually handles data setup, teardown, or proxies and stub data
• Written in a host language (Ruby, Java, C#) – but it doesn't need to be the same as the system under test
• Often expressed as a domain-specific language (DSL)
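A minimal sketch of what this layer does, in Ruby. This is a hypothetical step registry, not the real Cucumber API (Cucumber's `Given`/`When`/`Then` do the registration for you): capture groups in a regex over a plain-text scenario line become arguments to host-language code, and the search itself is stubbed so the sketch is self-contained.

```ruby
# Hypothetical step registry: maps regex patterns to blocks of Ruby.
STEPS = {}

def step(pattern, &block)
  STEPS[pattern] = block
end

# A shared "world" object carries state between steps.
World = Struct.new(:results)
world = World.new

step(/^I search for '(.+)'$/) do |w, term|
  # A real step would call down into the application driver layer;
  # here the search results are stubbed.
  w.results = ["#{term} top hit", "#{term} second hit"]
end

step(/^the top result should be '(.+)'$/) do |w, expected|
  raise "expected #{expected.inspect}" unless w.results.first == expected
end

# Dispatch a scenario line to the first matching step definition.
def run_step(world, line)
  pattern, block = STEPS.find { |p, _| p.match?(line) }
  block.call(world, *pattern.match(line).captures)
end

run_step(world, "I search for 'XXXXX'")
run_step(world, "the top result should be 'XXXXX top hit'")
```

The regex-to-block mapping is the whole trick: the business-readable layer above stays plain text, while everything technical lives down here.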
8. Learnings
• Cucumber: step definitions are written by coders (devs or testers) with the help of simple regular expressions
• Concordion: some tag code is written in the HTML and must match the code in this layer
• FitNesse: there is no separate test implementation layer when using FitLibraryWeb
9. Also
• Cucumber: ability/need to create application abstractions
• Concordion: ability/need to create application abstractions
• FitNesse: this approach is less domain and more technology focussed (although without FitLibraryWeb you can create application abstractions)
10. Layer 3: Application Driver Layer
• Works against our system under test (in this example, the browser against Google)
• Abstractions work best as the tests grow – e.g. the Window Driver pattern
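As a hedged illustration of the Window Driver pattern in Ruby: the driver is the one place that knows the page's selectors and exposes a domain-level API upward. All class, method and selector names below are invented, and `FakeBrowser` stands in for a real browser binding (Selenium WebDriver, Watir, etc.).

```ruby
# FakeBrowser stands in for a real browser driver; it fakes one search
# interaction so the sketch runs without a browser.
class FakeBrowser
  def initialize
    @fields = {}
    @results = []
  end

  def fill(selector, value)
    @fields[selector] = value
  end

  def click(selector)
    if selector == 'input[type=submit]'
      @results = ["#{@fields['input[name=q]']} top hit"]
    end
  end

  def texts(_selector)
    @results
  end
end

# The window driver: tests call search_for/top_result and never touch
# selectors, so a page change is fixed in one place.
class SearchPage
  def initialize(browser)
    @browser = browser
  end

  def search_for(term)
    @browser.fill('input[name=q]', term)
    @browser.click('input[type=submit]')
  end

  def top_result
    @browser.texts('.result').first
  end
end

page = SearchPage.new(FakeBrowser.new)
page.search_for('XXXXX')
```

As the suite grows, step definitions stay one line each ("page.search_for term") while all UI churn is absorbed by the driver.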
11. 3rd Party Adapters
• Cucumber: there are loads of libraries to drive the application against, particularly in Ruby
• Concordion: Selenium WebDriver is effectively built in – but you start working harder for the other functionality found in FitLibraryWeb
• FitNesse: FitLibraryWeb is designed to make it easy to run against browsers (headless and Selenium WebDriver), webservices (SOAP, XML), PDFs, shell scripts and databases, and to create stub services and proxies
12. Maintainability: FitLibraryWeb
• An extremely well written and understandable abstraction of a technical domain (webservices, PDF, proxies, shell)
• We've been using the mock webservices functionality in our test implementation layer to provide stubbed data in Concordion
• Demonstrates a DSL standardising tests (and abstracts away both the test implementation and application driver layers for reuse)
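The stubbed-data idea can be sketched generically in Ruby. This is not the FitLibraryWeb mock-webservices API, just the underlying pattern: the test implementation layer registers canned responses keyed by operation, so it controls exactly what data the system under test sees.

```ruby
require 'rexml/document' # stdlib XML parser, enough for a sketch

# StubSoapService is a made-up name illustrating canned responses;
# a real mock webservice would sit behind an HTTP endpoint.
class StubSoapService
  def initialize
    @canned = {}
  end

  # Register a canned XML payload for a given operation name.
  def stub_response(operation, xml)
    @canned[operation] = xml
  end

  # The system under test calls this instead of the real service;
  # an unstubbed operation fails loudly, which aids test maintenance.
  def call(operation)
    @canned.fetch(operation) { raise "no stub registered for #{operation}" }
  end
end

service = StubSoapService.new
service.stub_response('GetCustomer', '<customer><name>Acme</name></customer>')

doc = REXML::Document.new(service.call('GetCustomer'))
name = doc.root.elements['name'].text
```

Keeping the canned data in the test implementation layer is what makes the tests repeatable regardless of what the real back-office systems contain.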
13. Issues to balance
Collaboration/Communication
• Language used: technical vs natural
• Role focus: developer vs business
• Facing: technology vs business
Technical
• Data setup/teardown, test doubles
• Test strategy: unit vs integration vs system
• Test suite size and refactoring support
14. Exercise
In groups of 3–4, read the company scenarios below. Think through the goals and which layers are most affected, and reason out your tool choice for how best to verify that their software works!
15. Exercise: which tool and why?

You are in a company that is moving toward selling all your services via the internet, including confirmation via email. Your back office systems are a mixture of Java and mainframe, but they are all mediated via SOAP web services. The developers say they are unit testing. In practice, most testing occurs after development, ensuring applications work in the highly socialised environment.

You are part of a team that supports a real-time system. The regression suite currently takes two weeks to run through. The team is frustrated and can see that the test suite is only growing in size and has a lot of duplication. Furthermore, it is understood by only a few, mostly because of its sheer size. They really want tests they can run at any time to look at the state of the system.

Your team develops a component library. It is used by other customers as part of their product suite. There have been complaints about its quirky usage.

You are working as part of a small, cross-functional team. Your business person (customer, product owner) is actively involved. There are complex rules, the subtleties of which often unfold through development and test.
16. Further areas
Test Automation Pyramid
• Testing is layered. Acceptance testing using these tools is generally a subset of system testing. We haven't looked at how acceptance criteria are dealt with in BDD developer-style testing (e.g. JBehave, StoryQ, SpecFlow, RSpec, Spock).
Agile Quadrants Model
• Important for understanding the distinction between business-facing and technology-facing tests. Acceptance tests can be either, but try to avoid conflating the two. Your toolset choice should be aligned accordingly.
Deployment Pipeline
• Acceptance tests must be automated in continuous delivery. This will require work on build scripts and on data and configuration management.
Specification By Example
• Working as a team through examples is enabled by these sorts of toolsets.
17. References
• Fit for Developing Software: Framework for Integrated Tests, Mugridge & Cunningham
• Specification by Example, Gojko Adzic
• Continuous Delivery, Chapter 8: Automated Acceptance Testing, Humble & Farley
• http://prezi.com/rpcnvziw80no/specification-by-example-anztb/
• http://www.slideshare.net/nashjain/acceptance-test-driven-development-350264
• http://gojko.net/2010/10/06/top-10-reasons-why-teams-fail-with-atdd/
• http://testobsessed.com/wp-content/uploads/2011/04/atddexample.pdf
18. Good luck
Todd Brackley
goneopen.com
twitter: toddbNZ
code: github.com/toddb/AutomationTutorial