There are cognitive biases lurking everywhere in the research process. Cognitive biases are psychological tendencies that cause the human brain to draw incorrect conclusions.
We all want our research to provide reliable input into our projects and most of us wouldn’t deliberately distort data. Yet, we’re human, and we’re all susceptible to many cognitive biases that can affect the outcomes at any stage of our projects. Bias is unavoidable, but being a good researcher is about understanding our inherent biases and how we can minimise their effects.
Distorted or misleading results can be very detrimental to a project. They can misinform the direction of a project, or provide false confidence about decisions.
This session will highlight six common cognitive biases in research, from recruitment, to the actual sessions, and the analysis and reporting of research findings. This will be illustrated with examples and stories, along with how we can minimise the bias.
The researcher’s blind spot: 6 cognitive biases we shouldn’t ignore in research
UX Australia 2016, Melbourne
Ruth Ellison, Principal User Researcher, PwC’s Experience Centre
@RuthEllison from PwC’s Digital Services
Multiple channels – a mix of recruitment companies, social media and trusted networks
Avoid professional respondents
Careful screeners
Behavioural based recruitment
Running research sessions
Photo available under a CC by 2.0 licence: https://www.flickr.com/photos/gdsteam/20649386153
Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13, 585-589.
McLeod, S. A. (2014). Loftus and Palmer. Retrieved from www.simplypsychology.org/loftus-palmer.html
About how fast were the cars going when they smashed into each other?
About how fast were the cars going when they collided with each other?
About how fast were the cars going when they bumped into each other?
About how fast were the cars going when they contacted each other?
Triangulate research
Use observational methods
Keep positive-neutral body language, and watch the tone of your voice
Avoid leading questions
Don’t just confirm your hypothesis, see if you can prove it wrong
Analysis of Competing Hypotheses (ACH)
Use open-ended questions
Some people think that soft drinks are bad for you. What do you think?
What’s your opinion about soft drinks?
List assumptions
Be skeptical, especially if everyone
agrees with you
Remain open
Consider all evidence equally
Multiple user researchers
Leave your ego by the door.
Triangulate with other research methods (e.g. observational)
Use a mixture of individual and group exercises
Avoid stating preferences and expectations at the start
Give someone the devil’s advocate role to question assumptions
Consider the order of questions and designs carefully
Use open-ended questions
Alternate the order in which participants are shown concept or design versions:
Show version A first: participants 1, 3, 5
Show version B first: participants 2, 4, 6
Sample sizes – it’s about the WHY
Consider evidence equally – not just the evidence that confirms your belief or assumption
Collaborative analysis sessions
Strive for objectivity
Listen with an open mind
Become more rational, but less rationalising
Continuous learning
Always assess your method, your analysis and yourself for bias
Further reading
List of cognitive biases
You are not so smart: a celebration
of self delusion
9 Biases In Usability Testing
Editor’s Notes
I was at the movies recently to watch Star Trek (don’t judge!). If you’ve ever been tempted to buy popcorn (like I have, many times), it’s sometimes tricky trying to work out which size popcorn to buy.
Sit down if you would buy the small popcorn for $3
Sit down if you would buy the medium popcorn for $6.50
Remain standing if you would buy the large popcorn for $7
Ok, you can now all sit down – thanks!
When popcorn is presented with just two options (small and large), most people will pick the cheaper option.
Look, they just spent a fortune buying movie tickets!
But when a medium option was added at close to the large price, more people purchased the large popcorn.
So why is this the case? This is an example of the decoy effect, where people change their preferences when a third option is presented. The third option serves as a decoy to increase preference for a dominating option.
The idea is simple: price one item in such a way that it makes the other price seem very reasonable.
Here’s an example of a decoy effect from Telstra’s website - they’re clearly using the Medium and Large pack as the decoy.
A cognitive bias is a mental shortcut. Cognitive biases are psychological tendencies that cause us to use a number of simplifying strategies and rules of thumb to ease the burden of mentally processing information to make judgements and decisions. These rules of thumb (or heuristics) are often really useful in helping us to deal with complexity and ambiguity. But in many instances, cognitive biases can lead to faulty judgements.
There are cognitive biases lurking everywhere in the research process.
But we are ALL HUMAN.
We all want our research to provide reliable input into our projects and most of us wouldn’t deliberately distort data. Yet, we’re human, and we’re all susceptible to many cognitive biases that can affect the outcomes at any stage of our projects. Bias is unavoidable, but being a good researcher is about understanding our inherent biases and how we can minimise their effects.
Failing to account for our cognitive biases in research activities can lead to participants being unintentionally and unknowingly influenced into producing biased responses. Distorted or misleading results can be very detrimental to a project. They can misinform the direction of a project, or provide false confidence about decisions.
What happens when research provides false confidence about a decision?
Here’s a story about how the Coca-Cola Company lost millions of dollars due to a research mistake.
In the mid-1980s, the Coca-Cola Company made a decision to introduce a new beverage product (Hartley, 1995, pp. 129–145). The company had evidence that taste was the single most important cause of Coke’s decline in market share in the late 1970s and early 1980s. A new product dubbed “New Coke” was developed that was sweeter than the original-formula Coke. Almost 200,000 blind product taste tests were conducted in the United States, and more than one-half of the participants favoured New Coke over both the original formula and Pepsi. But participants were never told that this was to replace the original Coke formula. The new product was introduced and the original formula was withdrawn from the market. This turned out to be a big mistake! Eventually, the company reintroduced the original formula as Coke Classic and tried to market the two products simultaneously. Ultimately, New Coke was withdrawn from the market.
We’re going to be looking at a number of biases that affect us at each major stage of design research.
Quite a number of years ago, I was helping to organise a research session for a famous institution in Canberra. We wanted to evaluate the effectiveness of the online catalog that was used to find items in the collections. The team had a budget constraint (sounds familiar?) and couldn’t use a recruiter to find research participants. They also had a short amount of time. We ended up running a survey on the website and asking for research participants from the survey.
We got quite a number of participants from the survey and we used this to organise research sessions in Canberra, where we learned a lot of interesting things about the website.
But recruiting in this way is an example of a cognitive bias known as selection bias.
There are a number of ways selection bias can happen. One is sampling bias, which is where a non-random sample of a population may be selected, resulting in a biased sample. In my previous slide, the sampling bias occurred because we were relying on one method of recruiting – an online survey asking for people who would be happy to take part in our research. This resulted in getting people who were already familiar with using the online channel, and also people who self-selected to take part in our research. While we still got interesting insights about what was and wasn’t working with our website, we lost an opportunity to find out how the website served the needs of users who may not use the web as a primary channel.
Another example of selection bias to consider is time – when we do a research study at a point in time can affect the results, or may support a desired or expected outcome.
I’m currently working on a project that is focussed on simplifying attendance processes for families, providers of child care services and government. We were running a discovery research piece earlier this year. At the start of the discovery phase, the initial families we talked to shared a big pain point for them: their child care rebate running out before the end of the financial year. This theme kept coming up in the first two weeks of the research, but we discovered that this was a time-related issue. It happens to certain families whose children are in care for a significant part of each week, which can cause their rebate to run out from around February to June. As our research was conducted in that time frame, this was at the forefront of people’s minds.
We had to be careful that results from people sampled during this period had to be considered over the course of a year – not just what was at the forefront of their minds.
Whilst these biases may not necessarily mean the results and analysis are wrong, it is important that they are recognised in analysis. Particular demographics may have different opinions that do not necessarily represent those of the general population.
Ways to deal with selection biases include:
Multiple channels - mix of recruitment companies, social media
Avoid professional respondents – I once ran a workshop where the participants walked in and knew each other by name! These tend to be professional respondents with a goal to earn a part-time salary from focus groups and survey incentives.
Careful screeners – use screeners to help weed out professional respondents and participants that are just not appropriate.
Behavioural based recruitment – when I started doing research years ago, I would recruit based on a set of well researched and well established demographics that various marketing and communications teams would provide. But years ago, an article featuring the lovely Dana Chisnell discussed why recruiting on behaviour and motivation matters more than recruiting on demographics.
It was a number of years ago, at the start of my research career. I was listening to a parent tell me about the trials and tribulations of parenting (she wasn’t selling parenting to me!). We were doing a one-on-one interview behind a one-way mirror to explore her interactions with a particular paper-based parenting claim form. This mum had never been in a research session before, so wasn’t sure what to expect.
Anyhow, she was telling me about the time she was trying to fill in the form, and I had the form in front of us. We knew that one particular part of the form was quite problematic (but another team wasn’t convinced that that area of the form was the problem). As she started moving her way down the form, to the bit that we were unsure about, I started to subconsciously lean in towards her. She started to slow down and started talking about this particular area of the form. I kept nodding my head and smiling. I occasionally interjected, ‘oh that’s interesting’.
The observer expectancy effect, also called the experimenter expectancy effect, is a phenomenon that can occur when a researcher’s beliefs or expectations cause him or her to unconsciously influence the research participants.
Think subconscious “uh huh”, head nods and smiling when the participant is going down the “right path”. Or scribbling down notes and having the participants say “Have I done something wrong?”
In another study (Loftus & Palmer, 1974), subjects saw films of automobile accidents and then answered questions about the accidents. The wording of a question was shown to affect a numerical estimate.
In particular, the question, “About how fast were the cars going when they smashed into each other?” consistently elicited a higher estimate of speed than when “smashed” was replaced by “collided,” “bumped,” “contacted,” or “hit.”
I’m going to give you a three number sequence. I have a rule in mind that these 3 numbers obey. I want you to try to figure out this rule.
You can find this out by suggesting your own three numbers and I’ll say yes it follows the rule, or no it doesn’t.
Keep suggesting your own three numbers until you’re sure of the rule and then tell me the rule. Are we ready?
This experiment is part of Wason’s rule discovery test. When most people try this experiment, we form a hypothesis and then try a number of sequences that confirm the hypothesis. Very few people try a number sequence that might DISPROVE their hypothesis.
Wason’s rule discovery test shows that most people do not try to test their hypotheses critically, but rather to confirm them.
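As a small sketch of the task in code (the hidden rule and the "add 2" misreading are the classic ones from Wason's experiment; the probe triples are my illustration):

```python
# Wason 2-4-6 sketch: the hidden rule is "any ascending sequence",
# far broader than the "add 2 each time" rule most people hypothesise
# after seeing the seed triple (2, 4, 6).

def follows_rule(triple):
    a, b, c = triple
    return a < b < c  # the experimenter's actual rule

# Confirmatory probes: triples chosen to FIT the guessed "add 2" rule.
confirming = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]

# A disconfirming probe: it violates "add 2" -- if it still gets a "yes",
# the guessed rule must be wrong.
disconfirming = (1, 2, 10)

print(all(follows_rule(t) for t in confirming))  # every confirmatory probe passes
print(follows_rule(disconfirming))               # ...and so does the violating one
```

Because every confirmatory probe gets a "yes", the wrong hypothesis survives; only the disconfirming probe reveals that the real rule is broader.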
This mental shortcut is known as confirmation bias.
Confirmation bias is considered one of the most dangerous biases. This is the tendency to search for or interpret information in a way that confirms your beliefs.
Common factors such as internal politics, personal goals or simply lack of knowledge can turn into a cherry picking exercise, where researchers or our stakeholders, may consider some results and ignore others.
Leaves you open to new evidence
It’s not about proving your hypothesis – it’s about using a null hypothesis. The null hypothesis (H0) is the hypothesis the researcher tries to disprove, reject or nullify.
#YayScience
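As a toy illustration of null-hypothesis thinking (the numbers here are invented, not from the talk): suppose 8 of 10 participants preferred design A. Under H0 of "no real preference", each choice is a fair coin flip, so we can ask how likely a result at least this extreme is by chance alone:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 8 of 10 preferring design A happens by chance about 5.5% of the time
# under H0, so on its own this is fairly weak evidence against
# "no preference" -- despite looking like a strong result.
p_value = p_at_least(8, 10)  # 56/1024, roughly 0.0547
```

The point is the direction of the reasoning: rather than collecting reasons the preference is real, we ask how easily "no preference" could have produced the same data.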
Analysis of Competing Hypotheses (ACH). Developed by Richards Heuer, it consists of a matrix with possible hypotheses (or scenarios) across the top, with each individual piece of evidence or information going down the side.
This can be done with 10 pieces of information or hundreds. The most important point is that each piece of evidence must be evaluated individually against each hypothesis, and marked as “consistent,” “neutral,” or “inconsistent.” By going “across” rather than “down” the matrix, it helps analysts to think critically about each piece of evidence, as opposed to thinking about each hypothesis. This method can generate a numeric score by tallying “inconsistent” ratings, but the analyst needs to be able to make a judgement call regarding the importance of each piece of information. Although it can be applied to quantitative and qualitative research, this method itself is NOT a quantitative method. Ultimately, each hypothesis is evaluated by how much “inconsistent” evidence is in its column, and this is compared to the analyst’s prior judgments. This method is best used to identify hypotheses which are problematic, not to diagnose a hypothesis which is most likely.
Source: https://en.wikipedia.org/wiki/Analysis_of_competing_hypotheses
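A minimal sketch of an ACH matrix in code (the hypotheses, evidence and ratings below are invented for illustration, not from an actual study):

```python
# ACH sketch: rate every piece of evidence against every hypothesis,
# then score each hypothesis by how much evidence is INCONSISTENT with it.
# All names and ratings are made up for illustration.

hypotheses = ["H1: the form is too long", "H2: users distrust the site"]

# evidence -> ratings, in the same order as `hypotheses`
evidence = {
    "Users abandon at the login step":       ["consistent", "inconsistent"],
    "Support calls mention password resets": ["neutral", "inconsistent"],
    "Drop-off grows with each extra field":  ["consistent", "neutral"],
}

def inconsistency_scores():
    """Tally 'inconsistent' ratings per hypothesis -- the ACH score."""
    scores = dict.fromkeys(hypotheses, 0)
    for ratings in evidence.values():
        for hypothesis, rating in zip(hypotheses, ratings):
            if rating == "inconsistent":
                scores[hypothesis] += 1
    return scores

scores = inconsistency_scores()
# The hypothesis with the MOST inconsistent evidence is the one to discard first.
weakest = max(scores, key=scores.get)
```

Note that, as the speaker notes say, the tally identifies which hypotheses are problematic; it does not crown the remaining one as correct.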
Accept that we all have assumptions
Groupthink and the bandwagon effect refer to the tendency to do (or believe) things because many other people do (or believe) the same. With groupthink, group members try to minimise conflict and reach a consensus decision without critical evaluation of alternative viewpoints, by actively suppressing dissenting viewpoints.
Source: https://en.wikipedia.org/wiki/Groupthink, https://en.wikipedia.org/wiki/Bandwagon_effect
https://youtu.be/MDD4IkVZWTM?t=52s
Focus groups feel like such a quick, easy way to engage with a large number of participants in a short amount of time (which is why it’s a favourite of so many consultancies), but as design researchers, if you’re going to use focus groups, make sure you use a mixture of research methods. My favourite is observational methods.
If you’re having to run a focus group or any kind of group exercise, use a mix of individual and group exercises. Particularly start with the individual exercises, then do a playback with the bigger group.
The anchoring bias refers to the tendency to rely too heavily, or "anchor", on one trait or piece of information when making decisions (usually the first piece of information acquired on that subject).
Leather jacket – price is $1000
Sales person says that it’s on sale for $400!
Anchoring particularly comes into effect in UX research when more than one version of a product or concept is shown to research participants, such as in A/B testing. Participants may often be more inclined to prefer the first version they are shown, which can skew the research results significantly if efforts are not made to prevent this bias.
To further complicate the issue, we are often inclined to select a ‘middle’ option, or have a central bias, if there are many options presented to us in any vertical or horizontal order.
Read more at: https://youarenotsosmart.com/2010/07/27/anchoring-effect/
Ask
general questions before specific questions
unaided before aided questions
positive questions before negative questions
behaviour questions before attitude questions
Ordering your topics, questions and activities needs some judgment. Ask yourself if the order sequence causes bias. Change the sequence. See what makes sense.
Typically, anchoring can be prevented by alternating the order in which participants are shown versions.
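The alternating assignment described above can be sketched as follows (the participant numbering is illustrative, matching the 1-6 grouping on the slide):

```python
# Counterbalancing sketch: alternate which design version each participant
# sees first, so that order effects (anchoring) average out across the sample.

def first_version(participant_number):
    """Odd-numbered participants see version A first; even-numbered see B first."""
    return "A" if participant_number % 2 == 1 else "B"

order = {p: first_version(p) for p in range(1, 7)}
# order == {1: 'A', 2: 'B', 3: 'A', 4: 'B', 5: 'A', 6: 'B'}
```

With equal numbers of participants starting on each version, any preference driven purely by "what I saw first" shows up on both sides and cancels out in the aggregate.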
As humans, we’re pretty good at identifying patterns. Finding patterns helps us make sense out of the world. This tendency is so automatic that sometimes, our brains are just too good and they find patterns that aren’t there.
In cognitive science, this tendency to perceive meaningful patterns in random data is called apophenia.
One form of apophenia is finding images or sounds in random stimuli. This is known as pareidolia.
Read more: http://bias123.com/clustering_illusion
Does anyone see a pattern in these pieces of toast?
On a side note, the ‘Virgin Mary’ toast on the right sold for $28,000 back in 2004!
This is known as the clustering illusion: the tendency to wrongly interpret clusters or streaks of data in small samples as statistically significant.
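A quick simulation makes the point (the coin-flip setup is my illustration, not from the talk): even a fair coin produces streaks that look like meaningful patterns in small samples.

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    if not flips:
        return 0
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(42)  # fixed seed so the sketch is reproducible
flips = [random.choice("HT") for _ in range(30)]

# A fair coin and a small sample -- yet runs of several identical flips
# are routine. Reading such a streak as a significant cluster, rather
# than chance, is the clustering illusion.
streak = longest_streak(flips)
```

The same caution applies to small-sample research data: a run of participants all hitting the same snag may be signal, but it may equally be the kind of streak randomness produces for free.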
Collaborative analysis sessions – include people who weren’t involved in the research: fresh eyes, but still with some understanding of the user.
So at this point, you’re probably thinking: ok, we’ve had a taster of some of the cognitive biases that exist. We should be fine, since we know what kinds of biases we have, so we don’t need to correct for them.
But we’re not any less prone to bias than anyone else.
The answer is fairly obvious – 50% of people are more biased than the average, and as you’d expect, 50% are less biased than the average.
In a study conducted by three social psychologists at Stanford University, they asked more than 600 residents of the US this question. More than 85% of the sample believed that they were less biased than the average American. Only one participant believed that they were more biased than an average American.
This is known as the blind spot bias.
Our ability to perceive bias in others is actually pretty good. Our ability to perceive bias in ourselves is generally pretty bad.
Society teaches us that bias is a bad thing. The word has negative connotations. Most of us would prefer not to see ourselves as the kind of people who do bad things.
This leads us to believe that we must be rational. That our actions and judgements are accurate and therefore without bias. However, it doesn’t stop us from seeing the flaws in others because of their biases.