How to do usability testing and eye tracking


  1. How to run usability testing with eye tracking
www.objectivedigital.com. Usability testing in Sydney; eye tracking eCommerce sites; optimising online marketing.
  2. Some definitions: usability
"Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." (ISO 9241-11)
- Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
- Efficiency: Once users have learned the design, how quickly can they perform tasks?
- Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency?
- Errors: How many errors do users make, how severe are these errors, and how easily can they recover from them?
- Satisfaction: How pleasant is it to use the design?
(Usability Professionals Association: http://www.upassoc.org/usability_resources/about_usability/)
  3. Usability testing: an overview
- One-on-one testing with representative users.
- Users complete a set of tasks on a website.
- A facilitator uses a testing script and observes the users' behaviour as they complete the tasks.
- The focus is on usability.
- Users are interviewed after each task or at the end of the test.
- A combination of retrospective think aloud (RTA) and think aloud (TA) can be used for the tasks.
- The output is a report of observations and usability issues, translated into actionable recommendations.
  4. Why conduct usability tests?
- For a website to be successful, it must be usable: target users must be able to accomplish tasks with little to no difficulty.
- You will discover how real users interact with your website and uncover problems through observation.
- You will almost always learn something about your website which the project team couldn't foresee.
- It's fast and relatively cheap.
- You can settle disputes.
- It saves time and money in redevelopment.
- The development team and the client are too close to the product!
- It will improve the bottom line!
  5. When to test?
- Test early and test often!
- The earlier you start testing, the easier and less expensive it is to change the system.
- The more you test, the more confident you can be that the end product will actually meet users' needs and that the experience is optimised.
- Any testing is better than no testing; you can test at any stage of the project.
- Iterative testing substantially improves the usability of websites. One study found iterative testing resulted in 30% more task completions, 25% less time to complete the tasks, and 67% greater user satisfaction.
(US Department of Health and Human Services, Research-Based Web Design & Usability Guidelines: http://www.usability.gov/pdfs/guidelines.html)
  6. Test iteratively
(Diagram: testing stages across the project.)
- Test labelling, visual design, first click
- Test some tasks
- Test labelling, visual design, some tasks
- Test all tasks
- Test all tasks again
  7. How many to test?
- For usability testing, 5-8 people per user group is enough to identify the key problems with the site.
- To define a user group, ask: do they have the knowledge, skills and abilities to complete the task? Could they use the same script?
- It depends on the test objectives and the complexity of the site. A more complex site, e.g. a finance site, may have more user groups, e.g. investors, advisors, journalists, or experts and novices.
- Our view: we have often uncovered all of the major issues by lunch time!
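The "5-8 users is enough" guidance can be made concrete with the well-known problem-discovery model from Nielsen and Landauer, in which each additional user is assumed to independently uncover a fixed fraction L of the site's usability problems. A minimal sketch, assuming the commonly cited average of L ≈ 0.31 (a project-dependent assumption, not a law):

```python
# Problem-discovery curve: expected share of usability problems found
# by n test users, if each user independently uncovers a fraction L.
# L = 0.31 is the often-quoted average; real projects vary widely.

def problems_found(n_users: int, l: float = 0.31) -> float:
    """Expected proportion of usability problems uncovered by n users."""
    return 1 - (1 - l) ** n_users

for n in (1, 3, 5, 8, 15):
    print(f"{n:2d} users -> {problems_found(n):.0%} of problems")
```

With L = 0.31 this predicts roughly 85% of problems found by five users, which is why small iterated rounds beat one large study.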
  8. Where to test?
- A two-room lab setup is highly recommended, so that clients and other stakeholders can view the usability testing.
- However, you do not need a formal usability lab to do testing.
- You can do effective usability testing in any of these settings:
  - a fixed laboratory with two or three connected rooms outfitted with audio-visual equipment
  - a conference room, or the user's home or work space, with portable recording equipment
  - a conference room, or the user's home or work space, with no recording equipment, as long as someone is observing the user and taking notes
  - remotely, with the user in a different location
  - remotely, with the observer in a different location
  9. Outcomes of usability testing
- Usability testing provides a deep understanding of usability issues and their root causes.
- Outcomes are reported as:
  - prioritised usability issues, including the number of users experiencing each problem
  - actionable recommendations for correcting the issues
  - issues illustrated with screenshots, video clips and user quotations
  - what users did on the website, where they went, and the paths they took
  - success rates for the tasks
  - users' self-reported feedback on their experience using the site and completing the tasks
  10. What usability testing is NOT
- Focus groups are NOT usability testing: focus groups are about opinions, wants and needs.
- A research study is NOT usability testing: a usability test's sample size is typically much smaller.
- A quality assurance test is NOT usability testing: although the test might uncover bugs on the website, that is not its primary purpose.
  11. Usability Principles
  12. Usability principles in practice
- A set of usability principles, often described as heuristics (rules of thumb), is used to evaluate digital products.
- These heuristics have been widely adopted and adapted by the software industry:
  - Visibility of system status
  - Match between system and the real world
  - User control and freedom
  - Consistency and standards
  - Error prevention
  - Recognition rather than recall
  - Flexibility and efficiency of use
  - Aesthetic and minimalist design
  - Help users recognize, diagnose, and recover from errors
  - Help and documentation
- References:
  - Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces. Proc. ACM CHI'90 Conf. (Seattle, WA, 1-5 April), 249-256.
  - Nielsen, J. http://www.useit.com/papers/heuristic/heuristic_list.html
  13. Test Plan + Process
  14. Test Plan
- The test plan is developed with the client, based on:
  - website goals
  - what the client wants to test
  - technology issues
  - key users and tasks (and their availability)
  15. Develop the test plan
  16. Scripts and Scenarios
  17. Typical test flow (diagram)
  18. Writing scenarios
- Relate each task to a test objective.
- Select scenarios that rate high on frequency and/or importance.
- Don't make them too complicated: only short instructions can be retained in short-term memory.
- Move from general to more specific; begin with an exploratory task.
- Avoid using the terms on the website, as this leads people: "You want to take out your super", NOT "You want to consolidate your super".
- Make the tasks concrete, and use verbs to describe them: "Book a hotel in Rome"; "Find out whether your employer paid your super contribution".
  19. Writing scenarios
- Try to make a logical flow from start to finish, e.g. follow a natural customer journey: search, create an account, purchase, check the order.
- For information tasks, instruct the participant to tell the facilitator when they have found the answer: "Find a cruise which interests you. Tell the facilitator when you have found it."
- Utilise other communications, e.g. "You saw a TV ad for XYZ company. Find out more about their product."
  20. Exploratory scenarios
- eCommerce: "Christmas is coming up. You want to find some Christmas presents for your children/nieces/nephews."
- Insurance website: "You have heard about XYZ on the radio. Browse around the website and find something that interests you. Tell the facilitator when you have found it."
  21. Specific scenarios
- Asthma not-for-profit: "Your (friend's) child has been recently diagnosed with asthma and you want to learn about it. Find information on what to do if the child is having an asthma attack."
- eCommerce: "You want to do some more Christmas shopping. Buy a Rose print dress for your sister and a Thomas and Friends Boys Tricycle for your nephew. Have them delivered to your sister's address."
  22. Conducting the test
  23. When facilitating a test…
- Practice the test!
- Ensure the participant is comfortable with the technology you are asking them to use.
- Ask the participant if they have any questions before they begin.
- Explain the flow of the interview to the participant.
- Get a feel for their experience and level of comfort with the internet.
- Ask the participant to read out the task, to ensure they are attending to the task details and understand what is being asked of them.
- Depending on whether it is an RTA or TA task, your involvement in the task (how much you talk, and when) will vary.
- Don't explain the interface the participant is working with, to avoid getting biased feedback.
- Keep the participant focused on the task.
  24. When facilitating a test…
- Leave the participant to make mistakes without helping them out, but investigate any issues/mistakes later in the session.
- Answer questions with questions to elicit more in-depth feedback.
- Probe expectations and use less direct questioning (if possible avoid "why"), to gather less defensive responses:
  - "Is this what normally happens when you go on an X website?"
  - "You seemed surprised/bored/puzzled by that feature…"
  - "Can you please explain what is happening now?"
  - "You seem to be moving the mouse to the right all the time…"
  - "I notice you don't scroll much… I notice you keep going up and down the page; what are you looking for?"
- It is ideal to have an observer take notes.
  25. When facilitating a test…
- Think Aloud (TA)
  - Think aloud is widely used in usability research: during a task, the user verbalises their actions and thoughts out loud whilst navigating the site.
  - The moderator's role is to probe for feedback during the test.
  - This technique gives immediate feedback during a task, so the participant is not relying on memory to report what they experienced.
- Retrospective Think Aloud (RTA)
  - With eye tracking hardware and software, participants are asked to complete a task silently and on their own. Once the task is complete, the moderator plays back the dynamic eye tracking video while asking the user about their underlying motivations.
  - This allows the participant to get on with the task without being distracted, and the feedback is often more considered when replaying the eye tracking video.
  26. Usability Testing Mistakes
- Strategic errors – time it right: don't test too early (you just uncover bugs); don't test too late (nothing can change).
- Inadequate planning – make sure you have a run-through: check your test materials against the thing you are testing, especially if it's a prototype.
- Poor task design: ensure your scenarios test core functionality and any areas you have identified as potentially problematic.
- Giving too many clues: don't give away the answer in the scenario, e.g. "Register for the site".
- Unprofessional demeanour: words like "Good" and "Well done" may give the impression that you are testing the user, not the site.
(Reference: Information and Design, Usability Test Mistakes: http://www.infodesign.com.au/usabilityresources/usabilitytestingmistakes)
  27. Usability Testing Mistakes
- Not knowing why you are testing
- Testing too much for too long
- Not bringing the team together
- Not recruiting the right participants
- Not designing the right tasks
- Not facilitating the test effectively
- Not planning how you will distribute the results
- Not iterating to test potential solutions
  28. Analysis and Reporting
  29. What to report?
- Outcomes are reported as:
  - a prioritised list of usability issues
  - recommendations for correcting the issues
  - each issue supported by data – what you observed
  - general feedback on the users' behaviour when interacting with the website
  30. Consider for reporting
- Which tasks were difficult to complete?
- Which tasks took a long time to complete?
- What mistakes were consistently made?
- How did users complete the tasks? Was it as expected?
- How easily did users find the key functions: Register, Login, Add to cart, Checkout?
- Where did users first click on the home page?
- What interested the users most?
- How did the behaviour of the user groups differ?
- What did they do which was unexpected?
- What did users notice? What did users miss?
  31. How is usability measured?
- The most common factors measured in usability testing include:
  - Effectiveness: a user's ability to successfully use a website to find information and accomplish tasks.
  - Efficiency: a user's ability to quickly accomplish tasks with ease and without frustration.
  - Satisfaction: how much a user enjoys using the website.
  - Error frequency and severity: how often users make errors while using the system, how serious these errors are, and how users recover from them.
  - Memorability: if a user has used the system before, can they remember enough to use it effectively the next time, or do they have to start over and relearn everything?
- Two types of usability metrics can be captured during a usability test:
  - performance data (what actually happened)
  - preference data (what participants thought)
  32. Usability metrics
- There are several metrics that you will want to collect, identified in the usability test plan:
  - successful task completion
  - critical and non-critical errors
  - error-free rate
  - time on task
  - subjective measures
  - likes, dislikes and recommendations
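The performance metrics above reduce to simple arithmetic over per-task records. A minimal sketch with hypothetical data structures (illustrative only, not any specific tool's format):

```python
# Compute task completion rate, error-free rate and mean time on task
# from per-participant task records (invented example data).
from dataclasses import dataclass

@dataclass
class TaskResult:
    completed: bool       # did the participant finish the task?
    errors: int           # non-critical errors observed
    critical_errors: int  # errors that blocked task completion
    seconds: float        # time on task

results = [
    TaskResult(True, 0, 0, 41.0),
    TaskResult(True, 2, 0, 73.5),
    TaskResult(False, 1, 1, 120.0),
    TaskResult(True, 0, 0, 38.2),
]

n = len(results)
completion_rate = sum(r.completed for r in results) / n
error_free_rate = sum(r.errors == 0 and r.critical_errors == 0 for r in results) / n
mean_time_on_task = sum(r.seconds for r in results) / n

print(f"Task completion: {completion_rate:.0%}")    # 75%
print(f"Error-free rate: {error_free_rate:.0%}")    # 50%
print(f"Mean time on task: {mean_time_on_task:.1f}s")
```

Subjective measures (satisfaction ratings, likes and dislikes) would be recorded alongside these as preference data rather than computed from observations.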
  33. Recommended reading
- Krug, Steve. Don't Make Me Think. New Riders, 2005 (2nd edition).
- Nielsen, J. Ten Usability Heuristics. http://www.useit.com/papers/heuristic/heuristic_list.html
- Wroblewski, Luke. Web Form Design: Filling in the Blanks. Rosenfeld Media, 2008.
  34. Eye Tracking Usability Testing
- Testing with users identifies any usability issues, and allows you to watch people as they interact with the site; this provides an understanding of why any issues arise.
- A representative sample of each user type completes a series of tasks on the website.
- The use of eye tracking allows users to undertake the test on their own, without distraction.
- Immediately following the test, the participant's eye-movement video from the session is replayed and they are asked to comment on their experience.
- How eye tracking works:
  - http://www.youtube.com/watch?v=xgRIjrlK1mA
  - http://www.youtube.com/user/TobiiEyeTracking#p/u/14/tpLUkKN3AWE
  35. Eye tracking started over 100 years ago
- Raymond Dodge's (1871-1942) photochronograph
- Delabarre's eye tracker (1898)
- Edmund Huey's eye tracker (1898)
  36. Eye tracking has come a long way… and has often been very difficult to use.
  37. From being VERY intrusive and complicated… (examples of other eye trackers, some used even today!)
  38. …to Tobii: VERY simple and easy!
  39. Underlying technology
Corneal reflection system – basic operating principles:
- Tobii eye trackers use near-infrared diodes to generate reflection patterns on the corneas of the user's eyes.
- These reflection patterns, together with other visual information about the person, are collected by image sensors.
- With sophisticated image analysis and mathematics, a gaze point on the screen can be calculated, i.e. where the user is looking.
(Diagram: pupil and corneal reflection.)
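Tobii's production approach models the eye in 3D, as described above. A far simpler way to illustrate the same corneal-reflection idea is a calibration-fitted polynomial mapping from the pupil-to-glint vector to screen coordinates; all calibration numbers below are invented for the sketch:

```python
# Toy gaze estimation: fit a quadratic mapping from pupil-to-corneal-
# reflection ("glint") vectors to screen pixels, using a hypothetical
# 9-point calibration grid. This is a classroom sketch, not Tobii's
# actual 3D-model algorithm.
import numpy as np

def features(v):
    """Quadratic feature map of (N, 2) pupil-glint vectors."""
    x, y = v[:, 0], v[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)

# Calibration: measured pupil-glint vectors paired with the known
# on-screen coordinates (pixels) of each calibration dot.
vecs = np.array([[vx, vy] for vy in (-0.1, 0.0, 0.1) for vx in (-0.2, 0.0, 0.2)])
screen = np.array([[sx, sy] for sy in (120, 450, 780) for sx in (160, 800, 1440)])

coef, *_ = np.linalg.lstsq(features(vecs), screen, rcond=None)

def gaze_point(vec):
    """Map one pupil-glint vector to an estimated screen coordinate."""
    return (features(np.asarray([vec], dtype=float)) @ coef)[0]

print(gaze_point([0.0, 0.0]))  # close to (800, 450), the screen centre
```

In a real tracker the 3D eye model also compensates for head movement, which a flat polynomial mapping cannot.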
  40. Fixations and saccades
A fixation describes a point where the eye is relatively still and concentrating directly on a subject. A saccade describes the rapid movement between fixations. A series of fixations and saccades is known as a scan path. In Tobii Studio, fixations appear as spots and saccades as the lines between them.
  41. Fixations and saccades
- Fixations:
  - Fixations vary from about 100-600 ms.
  - The majority of information about a scene is acquired during fixations.
  - The length of a fixation is usually an indication of cognitive processing.
  - When we fixate, only 1-2 degrees of our visual field is seen with 100% accuracy (roughly the size of your thumbnail at arm's length).
  - The dwell time, or length of a fixation, varies depending on the type of stimuli and our familiarity with its content.
  - Longer dwell times can indicate good, solid engagement.
  - Lots of short, sporadic fixations can indicate confusion, random searching, or a lack of content deemed interesting or useful.
- Saccades:
  - The average length of a saccade is 20-40 ms.
  - Vision is largely suppressed during saccades.
  - Regressive saccades can reveal confusion or problems with understanding.
  - During saccadic movement our eyes can be drawn to areas of interest, contrast, shape and colour, where we pause, engage and begin to fixate.
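Grouping raw gaze samples into fixations and saccades is the job of a fixation filter. One common textbook approach (not necessarily the exact filter Tobii Studio uses) is the dispersion-threshold algorithm, I-DT: samples that stay within a small spatial window for long enough form a fixation, and everything between fixations is treated as saccadic. A minimal sketch:

```python
# I-DT fixation detection over a list of (x, y) gaze samples in pixels.

def _dispersion(w):
    """Spread of a window of points: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in w]
    ys = [p[1] for p in w]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def idt_fixations(samples, max_dispersion=30, min_samples=6):
    """Return (start, end) sample-index pairs, one per detected fixation.
    min_samples=6 at a 60 Hz tracker corresponds to a 100 ms minimum."""
    fixations, i = [], 0
    while i + min_samples <= len(samples):
        j = i + min_samples
        if _dispersion(samples[i:j]) <= max_dispersion:
            # grow the window while the points stay spatially compact
            while j < len(samples) and _dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # the oldest sample belongs to a saccade; slide forward
    return fixations

# Two tight clusters separated by a jump (a saccade between them):
data = [(100, 100), (102, 99), (101, 101), (103, 100), (100, 102), (102, 101),
        (400, 300), (401, 302), (399, 301), (402, 300), (400, 299), (401, 301)]
print(idt_fixations(data))  # -> [(0, 5), (6, 11)]
```

The thresholds map directly onto the figures on the slide: a 100 ms minimum duration matches the lower bound of typical fixations, and the dispersion window approximates the 1-2 degree foveal region.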
  42. …Your memory
  43. Personas and testing environment
- Eye tracking is effective because it is non-invasive; however, this massive advantage is often negated by forcing people to be someone they aren't!
- The same applies to test environments and test set-ups.
  44. What does that gaze plot show?
- Traditional search strategy on Google, displaying the "golden triangle" and then still clicking on the first result – even after a brief foray below the fold, but with no sign of interest or detailed interaction.
- More "golden triangle" style behaviour, but this time the participant fails to go below the fold and chooses a video link as their answer.
- Very detailed search behaviour, with five times the number of fixations and each result inspected; the participant then moved on to page two of the results.
  45. Mouse markers
- A "mouse marker" (see example opposite) can be a useful tool when doing web usability testing, as it encourages the participant to park their mouse on, and click at, a certain point.
- The benefit is that you control the initial start point for any visual outputs, allowing you to clip data effectively.
- You also reduce the risk of polluting data (especially statistical data) with false "hits" as people search for the mouse. The examples below show a pair of gaze plots with and without a marker displayed prior to the webpage.
(Images: without a mouse marker / with a mouse marker.)
  46. Fixation crosses
- A fixation cross (a basic image, as shown opposite) between static stimuli is another way of normalising your data, and is particularly useful when doing print ad testing or when randomising stimuli.
- Naturally, when someone reads text copy, sees an advert or follows a user journey, there is a point where their gaze is carried over to the next stimulus as the image changes. The fixation cross "resets" each participant's gaze to the centre of the screen.
- When randomising images, this also allows you to remove any random legacy gaze points which could pollute the data sets.
- As everyone's gaze will be central, you can then clip the first 20-30 ms to remove the forced hot spot of data; this also ensures interaction elsewhere on the image is not biased by an unrealistic peak of interaction, as shown below.
(Images: data including legacy data from the fixation cross / data shown with 30 ms clipped from the initial exposure.)
  47. Heat maps – different metrics
- Heat maps are very commonly used outputs from eye tracking, but it is important to know that three very different metrics can be applied to the data, each suitable for different types of stimuli, methodology and testing:
  - Fixation count: the data is collated from the total number of fixations across all selected recordings, regardless of the duration of each fixation. For example, the red areas may indicate 25 fixations.
  - Absolute duration: the data represents the total amount of time spent fixating in an area, across all selected recordings. For example, the hot spots may indicate 9.234 seconds. This metric is useful when each person has seen the stimuli for the same exposure period.
  - Relative duration: ideal for web-based testing, or when each participant sees a stimulus for a variable amount of time. Each person's interaction is looked at individually and the data normalised across all recordings; the results are shown as a percentage of time spent on the stimulus.
- The heat map shown was created with the fixation-count metric: the hot spots indicate where the highest numbers of fixations were. This does not necessarily gauge the level of engagement on the page, just the level of attraction.
- The gaze opacity map option has the same functionality but reverses the way the data is displayed: the transparent areas are where the largest numbers of fixations were.
- Remember to take people's peripheral vision into account, as the output illustrates fixations only.
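The three heat-map metrics can be illustrated with a short sketch over hypothetical fixation records tagged with areas of interest (AOIs); the AOI names, durations and exposure times below are invented:

```python
# Compute fixation count, absolute duration and relative duration per
# AOI from invented per-recording fixation data.
from collections import defaultdict

# Each recording: (aoi, duration_ms) per fixation, plus total viewing time.
recordings = [
    {"fixations": [("header", 200), ("cta", 450), ("cta", 300)], "exposure_ms": 5000},
    {"fixations": [("cta", 250), ("body", 600)], "exposure_ms": 10000},
]

count = defaultdict(int)       # fixation count (exposure-independent)
absolute = defaultdict(float)  # absolute duration, summed in ms
relative = defaultdict(float)  # relative duration: mean % of viewing time

for rec in recordings:
    for aoi, dur in rec["fixations"]:
        count[aoi] += 1
        absolute[aoi] += dur
        # normalise by each person's own exposure, then average over people
        relative[aoi] += 100 * dur / rec["exposure_ms"] / len(recordings)

for aoi in sorted(count):
    print(f"{aoi}: {count[aoi]} fixations, {absolute[aoi]:.0f} ms, {relative[aoi]:.2f}%")
```

Note how the "cta" area dominates absolute duration, but its relative duration is tempered because the second participant viewed the page twice as long; that per-person normalisation is exactly why relative duration suits web tasks with variable exposure.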
  48. Testing eCommerce
  49. Retrospective Think Aloud vs Think Aloud
- Think aloud testing slows down the user's processes due to the additional cognitive workload. Here the user was tasked with searching for a house, and while doing this the moderator asked how he found the site and the search engine, and whether he had noticed the advertising. The participant felt obliged to go below the page fold, then interacted with the advertising after the moderator had mentioned it. You can clearly see the long fixations, and the extensive journey to complete a task that was literally right in front of them from the outset.
- Think aloud also allows users time to find the next step in a process or journey, potentially further distorting findings. We can fixate five times or more a second – potentially over 300 fixations in the one minute this user took to complete this page – far more than we can verbalise and rationalise.
- By allowing the user to complete their task and interviewing them retrospectively, this user (of similar PC literacy to the previous participant) got on with the task in hand. They completed the task in 32 seconds, just over half the duration of our first example. More tellingly, their interactions show that they didn't go below the fold, they didn't interact with the advertising, and after a very brief scan around the page they determined that their first choice of action was correct and searched for their potential new home.
- When speaking to them afterwards during an RTA interview, we initially start with the gaze and mouse data hidden, asking their first impressions and what they thought they did; we then reveal their actual actions and discuss them further. The result is far more relevant, realistic and valid data.
  50. Testing mobile devices
- With a T series or 1750/2150 series eye tracker, mobile device testing is possible using the scene camera function in Tobii Studio and placing a camera under a table, as seen in the image opposite. The user interacts physically with the phone under the table, views their actions on the screen, and is eye tracked as normal. For obvious reasons this is not the most natural way of testing mobile devices!
- With an X series eye tracker, inverted, you can test mobile devices in a much more relaxed and realistic way. Inverting the eye tracker adjusts the viewing angle of the unit from 0 to 35 degrees into 0 to -35 degrees; this compensates for the fact that you are looking underneath the tracker, not over it as you would in a screen-based test. The calibration is done on a level plane with the tracker, and some movement of the handset by the participant is allowed. A manual calibration is required, and a strict procedure to invert the calibration must be followed.
  51. Testing physical-world objects: scene camera for physical stimuli
- To further add to the flexibility of the X series tracker, it can also be used with an external "scene" camera to test physical objects such as mobile phones, printed material, toys, hand-held devices and so on.
- As there is no screen to base a calibration on, a grid is printed or drawn onto card (or similar) and mapped within the software to ensure the eye-tracked data is accurate.
- The camera is connected to the PC via a video capture board, and live viewing is possible.
  52. Online advertising effectiveness – iMotions
  53. Contact details
Objective Digital Pty Ltd, ABN 98 123 747 188
Tel 1300 85 80 15 · Web ObjectiveDigital.com · Blog UsableWorld.com.au
James Breeze, CEO · Email [email_address] · Mob 0410 410 494
Level 10, 220 George Street, Sydney NSW 2000
Contact James for a quote today!

Editor's Notes

  • Effectiveness: accuracy and completeness with which users achieve specified goals. Efficiency: resources expended in relation to the accuracy and completeness with which users achieve goals. Satisfaction: freedom from discomfort, and positive attitudes towards the use of the product.
  • Objectivity
  • Reiterate outcomes
  • Basic operating principles: during tracking, the Tobii eye tracker uses near-infrared diodes to generate reflection patterns on the corneas of the eyes of the user. These reflection patterns, together with other visual information about the person, are collected by image sensors. Sophisticated image processing algorithms in the software identify relevant features, including the eyes and the corneal reflection patterns. Complex mathematics is used to calculate the three-dimensional position in space of each eyeball, and finally the gaze point on the screen, i.e. where the user is looking.
  • Intro James