Considerations When Planning & Conducting a Research Study.
1. Choosing the correct formative usability study setup
2. Recruiting effectively
3. Writing good test tasks
4. Remaining unbiased & facilitating ethically
5. Reporting with metrics
HOW SHOULD THE STUDY BE CONDUCTED?

Moderated
An in-person or remote test in which the facilitator and participant can communicate in real time. Scheduling and more resourcing are needed. However, a great moderator creates a comfortable environment where participants can speak freely without restraint, improving the quality of the session.

Unmoderated
An automated remote test with predefined tasks; no facilitator is present. Remote testing carries a higher chance of technical issues, and more time must be spent filtering for quality. The tradeoff is that no scheduling is needed and no staff hours are spent moderating.
WHAT ENVIRONMENT SHOULD I TEST IN?

User's Environment / Portable Setup
Positives
• Participants are usually more relaxed and authentic
• Able to access their assistive tech
Negatives
• Less control over the user's environment
• Travel time

Remote Environment
Positives
• A large pool of participants
• Fast results
Negatives
• Technical challenges
• Interruptions

Lab Setup
Positives
• The classic lab allows for full control
• Comfortable viewing for observers
Negatives
• No-shows
• Expensive
PARTICIPANT RECRUITMENT
You may want to recruit using an external company or self-recruit; either way, you need to consider a few things when it comes to participant recruitment.

What will your recruitment criteria be?
• List demographic facts you know about the user
• List behaviors your users exhibit (what needs does your product serve?)

How many participants would you like to recruit?
• Identify any different segments you want to recruit (segments are different types of user with different goals)
• 5-8 participants per segment for behavior-driven studies

Are you going to self-recruit?
• Utilize networks
• Use social media
• Prepare for no-shows (recruit floaters or over-recruit)

Screening Criteria Example
HOW DO I WRITE GOOD TEST TASKS?
Rules to follow when writing tasks:

Make sure the tasks are realistic
• Make your tasks goal-oriented
• Set the context by writing them as a scenario

Prioritize testing frequent & critical tasks first
• Identify tasks by gathering analytics around page visits
• Identify tasks by speaking to support

Make sure tasks have a clear end point (success validation)

Be careful not to write tasks in a way that leads participants
• Avoid being overly instructional
• Do not use navigational labels
• Do not use product-specific language
HOW CAN I MAKE SURE I AM FACILITATING ETHICALLY?

Set expectations
• Greet participants
• Tell them how the study will work
• Collect recording consent
• Teach them what thinking aloud means

Adapt to the situation
• If the participant is uncomfortable, self-doubting, or struggling excessively, it is your responsibility to handle the situation.
• As a facilitator, you have an ethical responsibility to both the participant and yourself.
HOW DO I REMAIN UNBIASED?

Get useful information
• Ask open questions to avoid bias

Watch body language
• Mirror your participants, nod along, and adjust your moderating style to accommodate them.
• Be aware of the non-verbal cues you may be giving off.

*Facilitation tips
Answer questions with questions, wait longer before saying anything, talk less, and be ready for all sorts of situations. Be careful that participants do not slip into speculating about what "people would" do; have them speak to their own experience. Always find out who the technical contact is before a session starts.
WHAT METRICS SHOULD I USE?
The metrics to be collected should make sense for the study, be established in your test plan, and be clearly communicated to observers before testing starts.

Completion rate
• Use a binomial confidence interval to account for variation in task completion

System Usability Scale (SUS)

Single Ease Question (SEQ)
• Average experience rating vs. expectation rating per task

Microsoft Desirability Toolkit

Track errors found & their frequency
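To make the first two metrics above concrete, here is a minimal Python sketch. It assumes the adjusted-Wald (Agresti-Coull) method for the completion-rate confidence interval, a variant commonly recommended for the small samples typical of usability studies, and the standard SUS scoring rule (odd items score x - 1, even items score 5 - x, sum multiplied by 2.5); the function names are illustrative, not from any library.

```python
import math

def adjusted_wald_interval(successes, trials, z=1.96):
    """Adjusted-Wald (Agresti-Coull) 95% confidence interval for a
    completion rate; behaves well at small usability-study sample sizes."""
    n_adj = trials + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

def sus_score(responses):
    """Convert one participant's ten 1-5 SUS responses to a 0-100 score:
    odd-numbered items contribute (x - 1), even-numbered items (5 - x),
    and the sum is scaled by 2.5."""
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example: 4 of 5 participants completed the task.
low, high = adjusted_wald_interval(4, 5)
print(f"Completion rate 80%, 95% CI: {low:.0%} - {high:.0%}")

# Example: an all-neutral SUS questionnaire (every answer 3) scores 50.
print(sus_score([3] * 10))
```

Note how wide the interval is at n = 5: reporting the raw 80% alone would overstate the certainty of the result, which is exactly why the completion-rate bullet calls for a confidence interval.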