I consider whether we as testers can be too closed-minded in our attitudes; whether there are schools of thought or approaches that, even if we care deeply about context, we are very unlikely even to consider; and whether we sometimes favour our reputation over giving ourselves the chance to do the best job that we can.
From CEWT#2, http://cewtblog.blogspot.co.uk/2016/02/cewt-2-abstracts.html
1. BUG-FREE SOFTWARE?
GO FOR IT!
James Thomas
@qahiccupps, qahiccupps.blogspot.co.uk
CEWT #2. 28th Feb 2016
2. Cold fusion is tainted, and the taint is
contagious … So the subject is stuck in a
place that is largely inaccessible to reason –
a reputation trap, we might call it.
3. • Reputation
• Preconception
• Peer pressure
There is always enough room to
interpret data in more than one way …
We need to know motivations as much
as we need to know results if we are to
understand science.
4. #NoTesting
So, do we have to test, despite the customer
being unkeen to pay for it? Despite it adding little
or no value from the customer’s point of view?
Funny how so few testers question the
basic premise of their trade
I’ve been observing some new silly ideas about
testing – on how to do as little of it as possible or not
do it at all … reading those isn’t worth the time.
5. A team led by Princeton computer scientist Andrew
Appel aims to exterminate software "bugs"
You can’t get rid of bugs … and it’s stupid to even
think you might be able to!
6. The Analytical School way is to limit themselves to
laboratory contexts where the numbers apply …
I have a fondness for the Analytical School, but … I
must solve the problems that come to me, rather than
the ones I choose.
"One of the things that concerns Cem is the
polarization of the craft … I suppose he wants more
listening to people who have different views about
whether there are best practices or not. To me, that’s
unwise."
I think it’s a Bad Idea to alienate, ignore, or
marginalize people who do hard work on
interesting problems.
I’ve learned a lot from people who would
never associate themselves with context-
driven testing.
7. So When Can Testing Go Wrong?
• When we look inwards too much
• When we don’t apply critical thinking
• When we don’t consider human factors
• When we create reputation traps