Testing in an agile context - Beyond the urban legend
Talk from #Devoxx 2014 on November 13 by @jochimvandorpe on software testing in an agile context. What does it mean for testing methodology when switching from waterfall to agile? Have testers become redundant, or are they rather an essential part of a software development team? How do you keep up the pace when functional requirements change every two weeks? This presentation gives no-nonsense advice from hands-on testing experience in a large ICT organisation. #agile #testing #Java #Devoxx
1. @Smals_ICT #DV14 #AgileContextTesting
Beyond the urban legend
–
Testing in an agile context
Jochim Van Dorpe
2.
In-house ICT shared services for e-government
• Focus on social security and e-health services
• For federal, regional, local & European institutions
• Based in Brussels
• Software development, ICT-operations & staffing
• >1700 people
Introduction: What is Smals?
3.
Introduction: Who Am I?
(QA) (lead) (technical) test(er)
(analyst/coordinator/manager) (automator/engineer)
I am a tester
4.
Introduction: What do I have to say?
5.
What is testing?
It's not about proving that the software works, it's about finding bugs
6.
Why is testing necessary?
To prevent:
• Death
• (Serious) injury
• Loss of business
• Loss of reputation
9.
Agility
Agility is the continuous
delivery of prosperity (value) to
stakeholders of a system in a
sustainable and balanced
manner
-Schalk Cronjé (@ysb33r)-
10.
Agile fundamentalists are like …
Did you even read
our manifesto?
11.
Agile manifesto
Of course …
• Individuals & interactions over
processes and tools
• Working software over
comprehensive documentation
• Customer collaboration over
contract negotiation
• Responding to change over
following a plan
12.
Urban legends
• Agile doesn’t need testers
• Agile = de facto good code, built-in quality
• Testing is dead
• Agile = TDD, ATDD, BDD
• Testers should be able to code
• Agile has no time for testing
• 100% automation
• Faster, shorter, better!
• Developers and Testers are like oil and water
• You only need to unit test
• User Acceptance Testing is no longer necessary
13.
So…
• The Agile manifesto shouldn’t stand between you &
good quality
• No documentation, little documentation or poor documentation is no excuse for “not testing”
• Don’t put an Agile stamp on your team just to avoid the things you dislike; find a way of doing them so that you like them …
28.
A. Automate More
• The testing triangle, inverted: this will drown your testers:
Manual tests
Acceptance tests
System & integration tests
Component tests
29.
A. Automate More
• The testing triangle: make sure your testers can cope with the pace
Component tests
System & integration tests
Acceptance tests
Manual tests
(contrast with the inverted triangle on the previous slide)
30.
A. Automate Testing, not only tests
• Resetting the DB
• Loading datasets
• Test execution & checking
• Continuous integration
• Logging results
• Generating reports
• Live dashboards
• Calculating ‘metrics’
Use the tools that fit your purpose
AND NOT THE OTHER WAY AROUND!
31.
B. Explore
Leave some room for
exploratory testing
32.
C. Diversify
• …in the test levels
• Unit-tests only won’t find
everything
• System test only, and you’ll
drown
• … in the way of testing
• Automation is good, automation
is fun, but in the end, an
automated test isn’t better than a
human based test (manual test)
33.
D. Test the tests
• GREEN == GOOD ?
• DO: Review tests
• DON’T: No test-coverage fairy tales
• DON’T: Mock lasagna
• DON’T: “I should adapt the test cases so it’s green again”
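The “adapt the test so it’s green again” anti-pattern can be sketched in a few lines of plain Java (a hypothetical example, names invented for illustration, not code from the talk): a tautological assertion is always green, while a meaningful check pins the expected value independently of the code under test.

```java
// Hypothetical illustration of "testing the tests" (not code from the talk).
public class TestTheTests {

    // Code under test: round a price to whole euros, half up.
    static long roundPrice(double price) {
        return Math.round(price);
    }

    // Smell: an assertion like this is always green and checks nothing.
    static boolean tautology(long actual) {
        return actual == actual;
    }

    // A meaningful check states the expected value independently,
    // instead of being "adapted so it's green again".
    static boolean meaningfulCheck() {
        return roundPrice(2.5) == 3;
    }

    public static void main(String[] args) {
        System.out.println("tautology: " + tautology(roundPrice(2.5)));
        System.out.println("meaningful: " + meaningfulCheck());
    }
}
```

Reviewing tests is what catches the first kind: it is green whether the rounding is right or wrong, which is exactly the problem.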
34.
E. Crowdsource …
• Crowdsource in your team to find the good & passionate testers
• Accept that there are learning curves
• Testing is a craft
• A passionate tester will tend to be more effective than the Chinese-volunteer tester
• Encourage collaborative testing
35.
… or ask a test jumper
• Ask a ‘test jumper’ to join the team
• May spend some hours as a consultant
• Or months as an ordinary tester
• Coaches, or contributes himself
• Encourages testability
• Helps the developers think productively about testing
37.
What should the PL not expect?
• Metrics such as:
• # of bugs found
• Bugs per functionality
• Bugs per 1000 lines of code
• DDP (defect detection percentage)
• We can however show our QA-efforts
• QA-survey
• Test case progress
• Fixed test cases
38.
What should the PL not expect?
• Qualification (good / bad) of the devs
• 100% green charts
• Upfront fixed number of test cases
• Automated self-generating auto-code-correcting
tests
39.
Conclusion
There are no agile testers,
but we need testers who
adapted their mindset to an
agile context
40.
Read & experience more 1/2
• Books:
• Lisa Crispin, Janet Gregory (2009). Agile Testing: A Practical Guide for Testers and
Agile Teams
• Lisa Crispin, Janet Gregory (2014), More Agile Testing: Learning Journeys for the
Whole Team
• Blogs:
• Huibschoots.nl
• LisaCrispin.com
• http://testobsessed.com : Agile testing overview
• http://www.satisfice.com : Test jumpers: one vision of Agile testing
• pascaldufour.wordpress.com
• agile-and-testing.chriss-baumann.de
41.
Read & experience more 2/2
• Presentations:
• 40 agile methods in 40 minutes by Craig Smith
• Agile testing quadrants explained by @RubyTester
• Certification:
• CAT-training: Certified Agile Tester
• ISTQB foundation level add-on: Agile tester
• Conferences:
• Agile testing day(s)
• More references:
• Word clouds generated on wordle.net
• Images found with google image search
I was asked by my employer to say a few words on how we, the testing department, survived the shift from classical waterfall projects to agile.
My project managers like to call me a lead technical test analyst coordinator engineer & automator, but I prefer the simpler term: tester …
I started in IT almost 10 years ago, first as a functional analyst. But after a project where I was the Chinese volunteer for testing, I refused to take up my old job as an analyst and wanted to spend my time on getting better at testing.
Testers aren’t widely spread in my organisation, so I take up different testing roles depending on the project I’m working on. That ranges from test coordination, over test analysis and test execution, to test automation.
I was asked to talk today about how we shifted our testing mindset from a classical waterfall project to an agile way of working (Scrum in our case).
But let me begin with some introductory words on testing …
What is testing? Most people will refer to it as proving that the software does what it needs to do and that it’s bug-free.
Me, on the other hand, I never dare to state, when I’m done testing in a project, that the software is bug-free. I only dare to say that I tested it as well as possible, in as many different situations as I could come up with, within the time and budget restrictions I was given. But still, this is no guarantee that every single bug is found. So I’d rather define testing as an activity that involves finding bugs, and estimating the quality of the software by the pace at which you find them.
So why is testing necessary? Why is somebody willing to pay testers to do what they do?
They are, because they hope that we can prevent their software from causing death or injury (e.g. radiation machines), loss of business (e.g. losing money in transactions, calculation errors) or loss of reputation (e.g. flight delays) in real-life situations.
So, to prevent those four unwanted effects (see previous slide), we test to:
find the defects that are present, or possible future defects;
give management and the client confidence in the product;
and provide information about what it can and can’t do.
In the typical waterfall model, testing was pretty straightforward. All the requirements were handed to us and we could start making a plan for the testing project.
Once our planning was clear, we could start writing test cases or test scenarios.
When the software was ready, the development team threw it over the hedge, the test team started its test implementation and execution phase, and it would bounce back and forth until the project manager was satisfied with the obtained quality, or until the deadline was due.
Thus, in the waterfall method, the test team was a sort of quality gatekeeper between the development team and the production environment.
It has to be said that this is how it works in theory, because when a project is running behind on planning or is over budget (and which project isn’t?), it is the testing that will most likely be cut.
In 2011, however, 10 years after agile was ‘invented’, some agile fundamentalists tried to overthrow the waterfall throne and make all its followers walk the plank.
So we needed to change if we didn’t want to be eaten by the agile sharks, because being the gatekeeper at the end wasn’t going to work in such an iterative environment.
We wouldn’t go down without a fight, because our clients still pay for a quality product, and end users don’t care at all about the methodology the software was made with. They will not be more forgiving because it was made in an agile way.
So for a start, we took the agile manifesto and proved to our agile fundamentalists that we could fit in. According to them, you had three kinds of people in an agile team:
Team members: who contribute to the added business value
Product owner: who represents the client
Scrum master: who leads the meetings and is also a team member
And clearly we, the testers, aren’t directly one of those. So we needed to prove ourselves as team members of the agile team.
We testers are individuals, and we interact constantly with analysts, developers and clients about what should, could or must be.
Working software is the reason for our existence.
We collaborate with the customer, e.g. for user acceptance tests.
So we ‘only’ had to prove that we could test sufficiently without a ‘test plan’ and at the same velocity as the developers.
Nevertheless, over the past few years I encountered some urban legends about testing in an agile context, which made me come up with the title of this talk. These urban legends range from:
“Our software is good because we are agile”, as if the software knows in what context it has been developed,
over
“We’ll automate everything”,
to
“Agile doesn’t need testers.” Testing remains a craft. That’s like walking into your butcher’s shop and saying: ‘Butcher, from tomorrow on you’ll work agile, so from then on we expect to see bread and pastries in your counter, mister!’
So:
The Agile manifesto shouldn’t stand between you and good quality.
No documentation, little documentation or poor documentation is no excuse for not testing. If there is no documentation to use as a test oracle, there are still the analysts, the product owner, the acceptance criteria of the stories …
Don’t put an Agile stamp on your team just to avoid the things you dislike.
Of course there are differences, but they don’t have such a dramatic impact that:
a ‘waterfall tester’ should throw his skill set overboard, or
testing is no longer necessary.
We should ‘just’ adapt our mindset to:
The Iterative & incremental approach
Less time to prepare, execute & report
Less certainty: change is common
More Teamwork
Continuous critical thinking
We searched for some heuristics (rules of thumb) which would be useful in our way of working agile: Scrum, or rather our interpretation of Scrum. We found some that are useful for us and could be useful for you. But Scrum isn’t the only way to be agile; there are many, many more ways.
So if they’re useful in your projects to be more effective in reaching your goals, use them.
But maybe what I’m telling here won’t be useful at all for you. There’s no single truth in this. Every team is different, and every implementation of agile is different. Even in my own organisation, when I go and explain to other teams how they could adopt our way of testing, I make some tweaks and fine-tuning, because some artifacts aren’t there, or some competences are more or less present in that team than was the case in ours.
So to begin, we searched for some principles that differed from the classical waterfall way.
Testing should no longer be the quality gatekeeper at the end. Every X weeks we have to deliver a stable, useful and tested product to the product owner, so testing only before going live would be contra-agile.
Here we use the tests to build something good from the beginning, something that is already good when it leaves the developer’s desk.
So we should know upfront what we will test, we should test something before it’s completely done, or even implement the tests in advance with methods like BDD, TDD, ATDD, …
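A minimal test-first sketch of that idea (hypothetical story and names, using integer cents to keep the arithmetic exact): the check encoding the acceptance criterion exists before the production code, and stays red until the code is written.

```java
// Hypothetical test-first (TDD-style) sketch, not code from the talk.
public class TddSketch {

    // Step 1: written first. It encodes the acceptance criterion
    // "100.00 EUR net becomes 121.00 EUR gross at 21% VAT"
    // and fails until step 2 exists.
    static boolean acceptanceCheck() {
        return addVatCents(10_000) == 12_100;
    }

    // Step 2: the smallest implementation that turns the check green.
    static long addVatCents(long netCents) {
        return netCents * 121 / 100; // integer cents avoid rounding surprises
    }

    public static void main(String[] args) {
        System.out.println(acceptanceCheck() ? "green" : "red");
    }
}
```

Whether you call it TDD, ATDD or BDD, the mechanics are the same: the expectation is pinned down before the implementation, so the test can never be quietly bent to fit the code.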
In agile we test continuously; there is no testing phase, we test from the first till the last day. It’s the only way we can ensure that, at any given moment in time, we can provide a quality version of our software.
So we should put something in place that can test continuously and integrated.
In waterfall projects you have designated testers who test everything; here we put the testers in the development team and everybody contributes according to his capabilities, whether it be the description of tests or the automation of the scripts.
Tests can be executed very quickly after the developer coded a specific part of the software, so when the developer gets feedback on his code, it’s still fresh in his mind.
Buggy software is harder to test, and the longer a bug exists, the more difficult it will be to fix, and other bugs will be built on top of the first ones.
So deal with bugs as soon as they are found; don’t pile them up, but let the developer deal with them immediately. A story should never be closed when there are known bugs, and a developer should work on his stories/cards until the quality is sufficient before passing on to the next one.
Testers in an agile context should be capable of finding a good balance between testing for implicit expectations and making up expectations.
Because there is less documentation, some things will be explained in less detail. Don’t exaggerate the assumptions you make! For example, be less demanding for the scoreboard of a bowling application than for the control board of a space shuttle.
Reduce time on overhead that doesn’t really add something to the quality of the product:
Calculating metrics
Manually making charts
…
Also reduce the time spent on the description of your test cases and scenarios. Keep the descriptions high level, or don’t write them at all and shift to test charters for exploratory testing. It will cost you less when writing them, and due to the fast-evolving nature of the features in agile, it will also pay off in maintenance.
Don’t believe in fairy tales: it’s not because there are tests that it is a quality product.
For us, a story/card/functionality/feature can only be marked as done when it is implemented and tested, and by tested I mean tested, debugged and retested. Define what you expect of the devs, and invest in static testing (reviewing) as well.
I’ve seen approaches where teams had testing sprints or bug-fixing sprints, or where testing happens one sprint after the coding. But that tends to lean towards waterfall, because who can guarantee you have a working product if you plan testing and bug fixing five sprints later?
We tried to turn the principles we gathered and fine-tuned into something practical and concrete we could use.
The first one was to put more effort into automation.
In a classical approach you could do all of your system tests and acceptance tests in a human-based (manual) way. As long as you have enough testers in the testing phase, and some people coordinating it, the tests will happen.
In an agile context, that’s another story, because every sprint the whole package should be regression tested. If we needed 10 testers for one month to complete all system and regression tests in our waterfall context, we pile up 200 man-days of testing per release. Our agile Scrum sprint contains 10 working days; let us say that we can system test in 7 of those 10 days, but regression test only in the last 2. So we have 2 days to perform 200 man-days of testing; thus we need 100 testers, whom we could give work for only two days per sprint. Where are we going to find them?
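That back-of-the-envelope arithmetic can be written down as a tiny sketch (the figures are the ones from the talk; the helper names are invented):

```java
// Regression math from the talk: a waterfall release needed 10 testers for
// roughly a month (20 working days), i.e. 200 man-days of testing per release.
public class RegressionMath {

    // Man-days of manual testing piled up per release.
    static int manDaysPerRelease(int testers, int workingDays) {
        return testers * workingDays;
    }

    // Testers needed to squeeze that effort into the days left in a sprint.
    static int testersNeeded(int manDays, int daysAvailable) {
        return (manDays + daysAvailable - 1) / daysAvailable; // round up
    }

    public static void main(String[] args) {
        int manDays = manDaysPerRelease(10, 20); // 200 man-days per release
        int testers = testersNeeded(manDays, 2); // only 2 regression days left
        System.out.println(manDays + " man-days -> " + testers
                + " testers for 2 days per sprint");
        // 100 testers who are idle the other 8 days: clearly not workable manually.
    }
}
```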
So we had two options:
skip the testing, or
automate it so it could be executed in a few hours.
And we chose the second. (To be concrete: we execute masses of unit tests, ±100 integration tests, ±1500 system tests and 40 E2E tests in less than 4 hours.)
The remaining manual tests, those that can’t be, or had better not be, automated, can be done in the time available in the sprints.
There’s a saying that a good tester is a lazy tester. I won’t claim here that I’m a good tester, but I am most certainly lazy when it comes to performing recurrent, redundant and boring tasks.
Also, don’t only automate the checking, but as much as possible of your testing process:
flushing DBs
loading datasets
automatic scheduled execution (continuous integration)
logging results
logging a screenshot when a test fails
generating test reports
…
The less time your testers have to spend on these redundant tasks, the more time they will have for intelligent testing.
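A minimal sketch of that idea, with invented names and an in-memory list standing in for the database (in practice this would be JDBC plus a tool such as Flyway or DBUnit): the run resets state, loads a known dataset, executes the checks, and reports, so none of those steps needs a human.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of automating the steps *around* the checks,
// not just the checks themselves. Names are invented for illustration.
public class TestRunSketch {

    // Stand-in for a real database.
    static List<String> db = new ArrayList<>();

    static void resetDb()     { db.clear(); }                      // known clean state
    static void loadDataset() { db.add("citizen-1"); db.add("citizen-2"); }
    static int  runChecks()   { return db.size() == 2 ? 0 : 1; }   // 0 = all green

    public static void main(String[] args) {
        resetDb();                  // every run starts from the same state
        loadDataset();              // deterministic test data, not leftovers
        int failures = runChecks(); // the checking is only one step of many
        // In CI, a report and a non-zero exit code would gate the build here.
        System.out.println(failures == 0 ? "report: GREEN"
                                         : "report: " + failures + " failure(s)");
    }
}
```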
In the human-based time slots, leave some room for exploratory testing. The more freedom the application gives the user, the more time should be invested in this: more for web apps than for batches or back ends.
Diversify in width and in depth:
With unit tests only, you won’t find all bugs on the functional level, but in many cases it would be impossible to automate all the possible system tests.
Don’t run only automated tests, and when you have time or budget, adapt your datasets from time to time.
Question your tests. Does the test implementation do what it needs to do, or has it been altered just to return a green flag on the report?
Is your coverage actually high, or are you just testing the same thing 100 times?
Don’t eat too much mock lasagne in your automated test suites. Add some end-to-end tests, because real-life interaction doesn’t necessarily end up with the same results as testing with stubs and drivers.
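A small plain-Java sketch of that stub trap (hypothetical interface and names): the unit test can only ever confirm what the stub was told to answer, which is why a handful of end-to-end tests against the real collaborator remain valuable.

```java
// Hypothetical example of why stubs alone can mislead ("mock lasagna").
interface TaxService {
    double rateFor(String region); // the real one might query a remote service
}

class InvoiceCalculator {
    private final TaxService taxes;
    InvoiceCalculator(TaxService taxes) { this.taxes = taxes; }
    double total(String region, double net) {
        return net * (1 + taxes.rateFor(region));
    }
}

public class MockLasagna {
    public static void main(String[] args) {
        // Unit test with a stub: green, but it only proves the calculator
        // multiplies by whatever the stub returns.
        TaxService stub = region -> 0.21;
        double stubbed = new InvoiceCalculator(stub).total("BE", 100.0);
        System.out.println("stubbed total: " + stubbed); // ~121.0

        // An end-to-end test would call the *real* TaxService, which might
        // return a different rate (or fail) for the same region; a few E2E
        // tests alongside the stubbed ones catch that mismatch.
    }
}
```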
You’ll probably need more people involved in testing in your agile projects than you’ll find.
So find those who are willing to do it, because they will be more effective at it.
Also encourage collaborative testing: give certain test tasks to other people. For example, you can let your developers automate, so you as a tester have more time for other things and don’t necessarily need coding skills yourself.
If you don’t find the right person or skills in your own team, you could search for a ‘test jumper’ within your organisation:
A test jumper basically asks, How are my projects handling the testing? How can I contribute to a project? How can I help someone test today?
Specifically a test jumper:
may spend weeks on one project, acting as an ordinary responsible tester.
may spend a few days on one project, organizing and leading testing events, coaching people, and helping to evaluate the results.
may spend as little as 90 minutes on one project, reviewing a test strategy and giving suggestions to a local tester or developer.
may attend a sprint planning meeting to assure that testing issues are discussed.
may design, write, or configure a tool to help perform a certain special kind of testing.
may coach another tester about how to create a test strategy, use a tool, or otherwise learn to be a better tester.
may make sense of test coverage.
may work with designers to foster better testability in the product.
may help improve relations between testers and developers, or if there are no other testers help the developers think productively about testing.
So to conclude, the process of our test project in agile looks similar to a waterfall project, but:
you probably can’t make a test plan for the whole project;
it will be iterative, for every sprint;
analysis and design will be more high level;
test implementation and execution will be more (but not exclusively) automated.
Most PLs like charts, metrics, points, graphs and other tangible objects that they can (a) show to the customer, or (b) use to put a label on the team members.
In agile we can no longer give them the number of bugs found, the number of bugs per functionality, and so on, because most bugs will be found and solved by the developers themselves. They never leave the dev’s desk, so we cannot count them anymore.