7. Implementing
automated
testing
Software Testing Pyramid
§ Tests at the bottom of the pyramid focus on
smaller sections of code, e.g. unit tests.
§ These tests are the foundation of a good test
automation strategy: they are quick to run, and
there should be many of them.
§ They run at the earliest stages of the pipeline.
Unit tests
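A unit test at the base of the pyramid might look like the sketch below, written with pytest-style assertions; the `discount()` function and its 10% rule are illustrative assumptions, not from the slides.

```python
def discount(price_cents: int, is_member: bool) -> int:
    """Members get 10% off; prices are in whole cents to avoid float rounding."""
    return price_cents * 9 // 10 if is_member else price_cents

def test_member_gets_discount():
    assert discount(10_000, is_member=True) == 9_000

def test_non_member_pays_full_price():
    assert discount(10_000, is_member=False) == 10_000
```

Tests like these exercise one small function in isolation, so hundreds of them can run in seconds at the start of the pipeline.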
8. Implementing
automated
testing
Software Testing Pyramid
§ Tests in the middle of the pyramid cover larger
aggregations of code - components, services, etc.
§ Service tests provide many of the advantages of
end-to-end tests while avoiding UI complexities.
§ They run only after the build has passed unit
level tests.
Service
Tests
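A service-level test exercises a component through its programmatic interface rather than the UI. In the sketch below, `OrderService` and the in-memory repository are illustrative assumptions standing in for a real service and its datastore.

```python
class InMemoryOrders:
    """In-memory stand-in for the order datastore."""
    def __init__(self):
        self._orders = {}
    def save(self, order_id, order):
        self._orders[order_id] = order
    def get(self, order_id):
        return self._orders[order_id]

class OrderService:
    """The service under test, exercised through its API, not a UI."""
    def __init__(self, repo):
        self._repo = repo
    def place_order(self, order_id, items):
        if not items:
            raise ValueError("order must contain at least one item")
        self._repo.save(order_id, {"items": items, "status": "placed"})
        return self._repo.get(order_id)

def test_place_order_persists_and_returns_order():
    service = OrderService(InMemoryOrders())
    order = service.place_order("o-1", ["widget"])
    assert order["status"] == "placed"
    assert order["items"] == ["widget"]
```

Because no browser or rendering is involved, such tests stay fast enough to run on every build, right after the unit suite.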
9. Implementing
automated
testing
Software Testing Pyramid
§ Tests at the top cover the "full stack" and are the
slowest to run.
§ Don’t write a test for every acceptance criterion
(an anti-pattern); instead use a few journeys to
cover the main areas of the code.
§ They run only after the build has passed both
the unit level and service level tests.
UI
tests
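A journey test is often written with the page-object pattern, as in the sketch below. `FakeBrowser` stands in for a real driver (e.g. Selenium) so the example is self-contained; in practice the page objects would wrap real browser calls.

```python
class FakeBrowser:
    """Simulated browser; a real suite would use a WebDriver here."""
    def __init__(self):
        self.page = "login"
    def submit_login(self, user):
        self.page = "dashboard" if user else "login"

class LoginPage:
    def __init__(self, browser):
        self.browser = browser
    def log_in(self, user):
        self.browser.submit_login(user)
        return DashboardPage(self.browser)

class DashboardPage:
    def __init__(self, browser):
        self.browser = browser
    def is_displayed(self):
        return self.browser.page == "dashboard"

def test_login_journey():
    # One journey covering the main path, not one test per acceptance criterion.
    dashboard = LoginPage(FakeBrowser()).log_in("alice")
    assert dashboard.is_displayed()
```

A handful of journeys like this covers the main flows; the detailed behaviour behind each screen is verified further down the pyramid.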
10. Implementing
automated
testing
Working practices
✓ Testers and developers should collaborate to write, run and maintain
tests.
✓ Siloed testing, where development hands tests over to QA, not only
creates long feedback loops but also leads to testers duplicating
automated tests with manual tests.
✓ Expensive automated testing tools tend to make the feedback loop
worse. Developers should be able to run all tests, including performance
tests, to help them reproduce and diagnose any issue reported by QA.
12. Implementing
automated
testing
Anti-Pattern: Ice-cream cone
✗ Avoid inverting your test pyramid.
✗ Testing like this through the UI is slow and leads to brittle tests.
✗ Avoid using only a UI-oriented testing tool, as that focuses effort on
writing UI-level automated tests.
✗ If a bug is found by users, manual testing or high-level testing, push a
test to catch it lower down the pyramid.
✗ The only tests at a given level should cover something that can't be
caught at a lower level, i.e. when testing multiple components together,
your tests should check only the component integration, not each
component. That should already have been done by lower-level tests.
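The last point can be sketched as follows; `PriceCatalog` and `Checkout` are illustrative assumptions. The integration test checks only that the components are wired together correctly, trusting each component's own unit tests for the rest.

```python
class PriceCatalog:
    """Prices in cents; its edge cases are covered by its own unit tests."""
    def price_of(self, sku):
        return {"apple": 30, "pear": 40}[sku]

class Checkout:
    def __init__(self, catalog):
        self._catalog = catalog
    def total(self, skus):
        return sum(self._catalog.price_of(s) for s in skus)

def test_checkout_uses_catalog():
    # Integration concern only: Checkout delegates pricing to the catalog.
    assert Checkout(PriceCatalog()).total(["apple", "pear"]) == 70
```

Unknown SKUs, pricing rules and other `price_of` details belong in `PriceCatalog`'s unit tests, not in this integration test.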
15. Test Data
Types of test data
§ Test-specific data: the data that drives the behaviour
under test; it represents the specifics of the case under test.
§ Test reference data: data that needs to be there but has
little bearing on the behaviour under test.
§ Application reference data: irrelevant to the behaviour
under test, but needed for the application to start up.
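One common way to keep these three kinds of data separate is a test data builder, sketched below with illustrative names: the builder supplies sensible reference data by default, so each test states only its test-specific data.

```python
DEFAULT_COUNTRY = "GB"          # test reference data: present, but not under test
APP_CONFIG = {"schema": "v2"}   # application reference data: needed only to start up

def make_customer(**overrides):
    """Builder: defaults are reference data; overrides are test-specific data."""
    base = {"name": "Test User", "country": DEFAULT_COUNTRY, "age": 30}
    base.update(overrides)
    return base

def is_adult(customer):
    return customer["age"] >= 18

def test_under_18_is_not_adult():
    # age is the test-specific datum; everything else is reference data
    assert not is_adult(make_customer(age=17))
```

Reading the test, it is immediately clear that only `age` matters to the case, which is the point of the distinction above.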
18. Go
Continuous Delivery
Learn more
See how Go can help you in your CD journey.
Deploy a great product faster. Agile teams deliver
working software early and often. Go automates and
streamlines the build-test-release cycle for worry-free,
continuous delivery of your product.