Workshop delivered at Test Academy Barcelona on 30th January 2020. Including the Team Test for Testability, Testability Tactics, Testing Smells and the CODS Model.
2. • Distracted by local optima…
• Where is the biggest gain to be made in the system?
• Your architecture, silly!
Testers. Always chasing the feature squirrel…
@northern_tester
3. If the architectural testability of your system is poor, it means…
…it won't get tested
…the tester will test it
…it won't work
…it will be hard to operate
…it will be frustrating to use
…it will make your team sad
…it will create bottlenecks
…it will make less money
4. What we will cover…
• Team Testability Test
• Your testing smells
• CODS model
• Improving your world
5. Testability is just a vague concept
Testability is only a system property
10. Software testability is the degree to which a
software artefact supports testing in a given test
context. If the testability of the software artefact is
high, then finding faults in the system by means of
testing is easier.
18. Testing your team for testability…
Without them suspecting a thing…
19. Who is this…
• It's Joel Spolsky
• Writer of blogs at joelonsoftware.com
• FogBugz
• Trello
• Co-founder of Stack Overflow
• Most importantly though, The Joel Test!
20. WTF…
• The Joel Test
• 12 Steps to Better Code
• Not all patterns but teams, working conditions etc.
• Insight without the maturity model…
21. Test and Testability
• Your testing tells you about your testability
• You can cover a lot of ground
• While not really talking about testability
• Just talking about the testing you do already
23. The Team Test
• Yes or No
• Fast
• Social aspects
• Technical aspects
• Retrospective
• Team similarities & differences
24. Learning
• Testability challenges manifest in our testing, but testing may not be the root
• Talking about testability too early can cause engagement problems
• Use existing ceremonies to gather testability information
• Might expose "easy wins" such as access problems
26. Architectural testability allows your team to create tests that
provide valuable feedback within minutes, if not seconds.
Each team member can run a set of tests every time they
make a change. This feedback provides a safety net that
encourages your team members to make changes frequently
without fear. Confidence in testing encourages refactoring
and continuous improvement of the code.
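The feedback-in-seconds claim above rests on tests that need no environment at all. A minimal sketch of such a test, using a hypothetical `apply_discount` function that stands in for decoupled domain logic:

```python
# A fast, environment-free unit test: no database, no network,
# so it can run on every change and finish in milliseconds.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical domain logic, decoupled from any infrastructure."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
print("tests passed")
```

Because nothing here touches infrastructure, the whole suite stays fast enough to act as the safety net the slide describes.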
28. What does good look like?
• Embraces change
• Decoupled but cohesive
• Minimizes waste
• Sustainable
• Relationships
29. Good memories…
• Right-size microservices, responsible for just enough
• Each service has a mock for early testing by consumers
• Code instrumented with structured, contextual events
• Consistent API docs expressed in Swagger
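The "each service has a mock" bullet can be as simple as an in-process fake that honours the real service's contract. A sketch using only the Python standard library; the pricing endpoint and its payload are hypothetical stand-ins:

```python
# In-process fake of a hypothetical pricing service, so consumers
# can develop and test against it before the real service is ready.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakePricingService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"sku": "ABC-1", "price": 42.0}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 lets the OS pick a free port, so tests never collide.
server = HTTPServer(("127.0.0.1", 0), FakePricingService)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/price/ABC-1"
with urllib.request.urlopen(url) as resp:
    payload = json.loads(resp.read())
server.shutdown()
print(payload)
```

The consumer exercises real HTTP plumbing, yet the test stays self-contained and fast.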
30. What does bad look like?
• Slows as complexity grows
• Technical debt pooling
• Untested areas
• Side effects
• Persistent team siloing
31. Painful memories…
• Three types of consumer: SOAP, batch, web
• Mixed technologies - LAMP, but with Microsoft SQL Server
• Two minutes for a new application test environment, two weeks for the database
• Different logging and monitoring tooling by layer, administered by different teams
32. Retrofitting
• Is really, really hard
• Architectures act as anchors
• Have you ever tried to change an architecture for testing?
• Tiny changes to move the needle
34. Controllability…
• You can set the system into its important states
• Identify and control important variables
  ◦ Configuration
• Control the environment the system resides in
  ◦ Test data
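Controlling "important variables" mostly means making them injectable rather than hard-coded. A minimal sketch, assuming a hypothetical greeting feature whose behaviour depends on a clock and a feature flag:

```python
# Controllability sketch: important variables (clock, feature flag)
# are passed in, so a test can set the system into any state it needs.
from datetime import datetime

def greeting(now: datetime, beta_enabled: bool) -> str:
    base = "Good morning" if now.hour < 12 else "Good afternoon"
    return f"{base} (beta)" if beta_enabled else base

# Tests control the environment directly instead of waiting for noon:
assert greeting(datetime(2020, 1, 30, 9, 0), beta_enabled=False) == "Good morning"
assert greeting(datetime(2020, 1, 30, 14, 0), beta_enabled=True) == "Good afternoon (beta)"
print("controlled both states")
```

The same idea scales up: configuration and test data are just larger versions of these two parameters.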
35. Observability…
• Infer internal state from external outputs
• Logs - a message
• Metrics - a measure
• Events - a collection for a unit of work
• Instrument for the information we want
37. Simplicity…
• How easy the system is to understand
• Consumers, inputs, outputs, interactions, technologies, protocols, configurations
• Transparency
• Cognitive load again
43. My mobile testing story…
Pull a feature branch, build a development environment, configure relevant external feeds, find a device, join a specific Wi-Fi network, change DNS, configure a local proxy to intercept and change certain headers and URIs.
45. What smells do you recognize?
• Release Management Theatre
• Mono strategies
• Fear of change
• Teams looking for more testers
• Too many user interface tests
• Valuable scenarios not tested
• Lack of resilience testing
• "Sunny" days only
• Cluttered logging with no insights
• Excessive test repetition
• Issues hard to isolate and debug
• Tests that don't die
• Lengthy build times
• Too many persistent environments
• Environments no one cares about
• Inanimate documentation
• Customer hand holding
• Poor relations with Ops
• Long lists of deferred bugs
• Hard to test dependencies
• Wrangling over scale
• 95th percentile?
• Tester turnover
47. Experiencing
• High-maintenance automation due to dependencies
• Hard to debug and isolate problems
• Mock two key external services
• Unique tracing id throughout their logging
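The "unique tracing id throughout their logging" fix can be sketched in a few lines: mint one id per request and attach it to every log line so issues become easy to isolate. The function names here are illustrative only:

```python
# Tracing-id sketch: one id per request, attached to every log line,
# so a single request can be followed across components.
import json
import uuid

def log(trace_id: str, message: str) -> str:
    line = json.dumps({"trace_id": trace_id, "msg": message})
    print(line)
    return line

def handle_request() -> str:
    trace_id = uuid.uuid4().hex  # one id for the whole unit of work
    log(trace_id, "request received")
    log(trace_id, "calling payment service")  # pass the id to dependencies too
    log(trace_id, "request complete")
    return trace_id

tid = handle_request()
```

Grepping the logs for one `trace_id` then reconstructs the whole story of a request, which is exactly the debugging win the slide describes.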
48. Doing
• Rate each smell for your current context based on the handout
• If you have friends here from your company, get together
49. Learning
• Causes are not obvious
• The whole architecture is the target area, not just the app
• Looking for smells can yield testability improvements
• Common smells and unique smells…
60. Doing
• Look where your testing stinks
• Determine where to focus CODS
• Add to your architecture
• Annotate with recommendations
62. Learning
• Smells give us clues on where to focus
• CODS provides a framework for change
• Visualizing is great for advocacy
• Choose changes carefully…
Controllability determines the depth and breadth of our testing efforts - how deep you can go while still knowing what breadth you have covered. Without this you can go down the rabbit hole and miss the bigger picture. Without control, testing is pushed later and later.
Controllability determines what scenarios we can exercise - whether it be setting test data to the right state or ensuring a dependency returns a specific response.
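The "ensuring a dependency returns a specific response" note above can be sketched with a test double. This uses Python's `unittest.mock`; the inventory service and its method are hypothetical:

```python
# Controllability sketch: force a dependency into a specific state
# so a chosen scenario (here, out-of-stock) can be exercised at will.
from unittest.mock import Mock

def order_message(inventory) -> str:
    """Hypothetical consumer of an inventory service."""
    return "In stock" if inventory.check("ABC-1") else "Out of stock"

inventory = Mock()
inventory.check.return_value = False  # the dependency returns exactly what we need
result = order_message(inventory)
print(result)  # Out of stock
```

Flipping `return_value` to `True` exercises the other branch, which is the scenario control the note is about.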
2000 bugs in 2 years
Communicated through tickets
Long test cycles against builds on long-lived environments
Mastered weirdly named tooling - "Quality Centre"
Left with a weird feeling - we did tons of testing, but we never got any faster, and no one got what they wanted…