Visual span and other parameters for the generation of heatmaps
Pieter Blignaut
Department of Computer Science and Informatics, University of the Free State, South Africa

Abstract

Although heat maps are commonly provided by eye-tracking and visualization tools, they have some disadvantages and caution must be taken when using them to draw conclusions on eye-tracking results. It is motivated here that visual span is an essential component of visualizations of eye-tracking data, and an algorithm is proposed to allow the analyst to set the visual span as a parameter prior to generation of a heat map.

Although the ideas are not novel, the algorithm also indicates how transparency of the heat map can be achieved and how the color gradient can be generated to represent the probability for an object to be observed within the defined visual span. The optional addition of contour lines provides a way to visualize separate intervals in the continuous color map.

Keywords: Eye-tracking, Visualization, Heatmaps

CR Categories: H.5.2 [Information Interfaces and Presentation]: User Interfaces; I.6.9 [Simulation and Modeling]: Visualization: Information visualization

1. Introduction

A fixation may be thought of as the mean x and y position coordinates measured over a minimum period of time during which the eye does not move more than a certain maximum amount [Eyenal 2001]. Therefore, the point of regard (POR), i.e. the gaze coordinates at a specific moment in time, must continuously remain within a small area for some minimum time for it to be regarded as a fixation.

Several techniques exist with which eye-tracking data can be visualized. Bar graphs, for example, may be used to show the number of fixations or visitors, or the average time spent per area of interest (AOI). Techniques also exist to overlay the original stimulus with visualizations in order to guide the analyst towards conclusions. Scan paths, for example, may be used to indicate the position of fixations with dots that overlie an image of the original stimulus. The dots may be connected with lines to indicate the temporal relationship or saccades between fixations, while the radius of the dots can, optionally, represent fixation duration.

Heat maps are semi-transparent, multi-colored layers that cover areas of higher attention with warmer colors and areas of less attention with cooler colors. Instead of highlighting the areas of higher attention with red, they can be left uncolored while the areas of lesser attention are dimmed to a degree that corresponds to the amount of attention [Tobii Technology 2008; Spakov and Miniotas 2007]. Three-dimensional fixation maps can be used to make the heat map graphically more attractive, but they tend to be less informative since the further parts of the image are shown with less detail and are obscured by peaks at the near end [Tobii Technology 2008; Wooding 2002].

Despite their informative nature, heatmaps have disadvantages as well. Bojko [2009] lists several points of caution and provides a number of guidelines for the use of heatmaps. Bojko [2009] and Blignaut [2009] highlight the importance of the algorithm and parameters that are used to identify fixations. Three other aspects that can lead to erroneous interpretation of eye-tracking data must also be considered. Firstly, if the difference in time spent between areas with little attention and areas with much attention is large, the areas with little attention might not be colored clearly enough and can be mistaken as not being observed at all. Secondly, the visual span, or foveal field of view, of an individual determines the amount of information that can be observed with peripheral vision. Thirdly, the transitions from one color to the next are not sharp and it is difficult to interpret the colors in terms of a numeric value for the specific metric of attention that is used.

This paper focuses on heatmaps as a visualization technique for eye-tracking data. An algorithm to generate heatmaps is discussed. User-defined parameters, such as visual span, transparency, color range, and the probability for an object to be observed at a specific distance from the centre of a fixation, are included in this algorithm. The use of contour lines to visualize separate intervals in the continuous color map is proposed.

2. Experimental set-up

The stimuli used as examples in this paper were taken from a memory recall experiment during which chess players had to look at a configuration of chess pieces for 15 seconds, whereafter they had to reconstruct the configuration. The recall performance of the participants is beyond the scope of this paper; only the eye-tracking data that was captured during the fifteen seconds of exposure time was used in the visualizations.

Data was captured with a Tobii 1750 eye-tracker. The stimuli were displayed on a 17" screen with a resolution of 1024×768 at an eye-screen distance of 600 mm. The stimuli were sized so that 1° of visual angle was equivalent to about 33 pixels or 10.5 mm. The individual squares of the chess board spanned about 20 mm (2°) while each piece was displayed at about 7×8 mm (<1°).

3. Generation of heatmaps

3.1 Visual span

Visual span refers to the extremes of the visual field of a viewer, i.e. the area that can be cognitively observed with a single fixation. The visual span of a fixation is measured as the distance (in pixels) from the centre of a fixation to the furthest point where an observer might be able to perceive objects. This is not the same as the radius of a fixation, which is the distance from the centre of a fixation to the POR that is the furthest away.
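For illustration, the degree-to-pixel conversion implied by the set-up of Section 2 can be sketched in Python. The small-angle geometry is standard, but the 340 mm screen width is an assumption; the text above only gives the resulting ~33 pixels per degree.

    import math

    def pixels_per_degree(h_res_px: int, screen_width_mm: float,
                          eye_distance_mm: float) -> float:
        """Pixels subtended by 1 degree of visual angle at the screen centre."""
        mm_per_degree = 2 * eye_distance_mm * math.tan(math.radians(0.5))
        return mm_per_degree * h_res_px / screen_width_mm

    # Set-up of Section 2: 1024 px across an assumed 340 mm wide screen at 600 mm.
    ppd = pixels_per_degree(1024, 340, 600)  # ~31.5 px/degree, close to the reported ~33
    span_px = 5 * ppd                        # a 5 degree visual span, in pixels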

In Figure 2 circles are drawn around fixation centres to indicate the visual field of highest acuity (diameter = 2°). Fixations are shown as dots, with the size of the dots being representative of the duration of a fixation on a linear scale. The 2° visual fields of Figure 2 might lead an analyst to conclude that the participant did not see the pieces on a2, b8, g1 or h2. One could rightfully ask why a participant would bother to look at g2.

Bearing in mind, however, that a person might be able to observe objects at 2.5° from the centre of the foveal zone (5° visual span) with 50% acuity [Duchowski 2007], it might be possible that the viewer perceived the white king and white pawn on g1 and h2 respectively, although he did not look at them directly. Using the algorithm in Figure 1, a heat map was generated that illustrates this possibility (Figure 3). The same data set was used as in Figure 2 but the visual span (Line 6) was set to 5°.

Figure 2. Circles around fixations to indicate the visual field of highest acuity (diameter = 2°).
     1. for each pixel of original stimulus
     2.   Weight[pixel] := 0  //Init pixel weights
     3. end for
        //User opted to let the system assign the
        //highest pixel weight to weight for red
     4. WtRed := 0;
     5. for each fixation
     6.   for each pixel within the visual span of
             current fixation
     7.     D := Distance pixel to fixation centre
            //p and W determined as described above
     8.     p := Probability
     9.     W := FixationWeight
    10.     Weight[pixel] := Weight[pixel] + (W*p)
    11.   end for
    12.   if Weight[pixel] > WtRed then
    13.     WtRed := Weight[pixel]
    14.   end if
    15. end for

    16. for each pixel of original stimulus with
           Weight[pixel] > 0
          //Get respective colour components
    17.   r := GetRedValue(Weight[pixel], WtRed)
    18.   g := GetGreenValue(Weight[pixel], WtRed)
    19.   b := GetBlueValue(Weight[pixel], WtRed)
          //Add transparency
    20.   Pixel.Red := (T*Pixel.Red + (10-T)*r)/10
    21.   Pixel.Grn := (T*Pixel.Grn + (10-T)*g)/10
    22.   Pixel.Blu := (T*Pixel.Blu + (10-T)*b)/10
          //Draw contours if selected
    23.   if Draw contours then
    24.     c := Contour interval
    25.     if Weight[pixel] div c
               <> Weight[neighbour pixel] div c then
              //Make the colour of the pixel brown
    26.       Pixel.Red := 204
    27.       Pixel.Grn := 102
    28.       Pixel.Blu := 0
    29.     end if
    30.   end if
    31. end for

Figure 1. Algorithm for generation of heat maps

3.2 Assigning weights to fixations and pixels

Analysts should be allowed to select the metric of attention they wish to plot in a heatmap. In other words, they should be able to select whether they want to base a heat map on the number of fixations, the duration of fixations or the number of participants who observed a target area [Bojko 2009]. In the case of fixation duration, the fixation weight (W) is set to the total duration (in ms) of the fixation (Figure 1, Line 9). For the number of fixations or participant recordings, the fixation weight is set to a value that the user may select to ensure smooth coloring, typically W = 100.

Each fixation contributes to the total weight of all pixels within its visual field (Figure 1, Line 10). Since the visual fields of different fixations may overlap, it is possible that various fixations can contribute to the total weight of a specific pixel. For the duration and number of fixations, all fixations within the visual field of a pixel contribute to its weight. For the number of participant recordings, only the nearest fixation of a specific recording to a pixel contributes to the total weight of that pixel, provided that the pixel falls in the visual field of the fixation.

3.3 Probability

The probability that an observer will perceive an object during a fixation, p ∈ [0, 1], decreases as the distance of the object from the centre of a fixation increases. For each pixel within the visual span of a fixation, the fixation weight is multiplied by p before adding it to the total weight of the pixel (Figure 1, Line 10).

For the algorithm proposed in this paper, a user may select from three different models for scaling the weight over the visual field, V, i.e. Linear, Gaussian and No scaling. For no scaling, p = 1 for all pixels within the visual field of a fixation, i.e. the complete weight of the fixation contributes to the total weight of all pixels within its visual field (example in Figure 3a). For linear scaling, the probability, p, at a distance D from the fixation centre is

    p = 1 - D/V, where D ≤ V.
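To make Lines 1-15 of Figure 1 concrete, a minimal Python sketch is given below. It assumes fixations are given as (x, y, duration) tuples, a prob(D, V) callback supplies the scaling model of this section, and a fix_weight callback supplies the metric of Section 3.2; these names are illustrative and do not come from the original algorithm.

    import numpy as np

    def accumulate_weights(fixations, stim_w, stim_h, span_px, prob, fix_weight):
        # Lines 1-3: initialise all pixel weights to zero
        weight = np.zeros((stim_h, stim_w))
        ys, xs = np.mgrid[0:stim_h, 0:stim_w]
        for fx, fy, dur in fixations:                       # Line 5
            d = np.hypot(xs - fx, ys - fy)                  # Line 7
            inside = d <= span_px                           # Line 6: visual span test
            w = fix_weight(dur)                             # Line 9: e.g. duration in ms
            weight[inside] += w * prob(d[inside], span_px)  # Lines 8 and 10
        return weight, weight.max()                         # Lines 12-14: weight for red

    # Linear scaling (p = 1 - D/V) with fixation duration as the metric:
    # weight, wt_red = accumulate_weights(fixations, 1024, 768, 165,
    #                                     lambda d, v: 1 - d / v, lambda dur: dur)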



Figure 3a (top): Heat map of the same data set as in Figure 2. No scaling. Duration for red = 1264 ms.
Figure 3b (bottom): Heat map of the same data set as in Figure 2. Gaussian scaling (FWHM = 40% of 5° visual span). Duration for red = 1264 ms.
Figure 4: Graph of the probability to be observed against distance from fixation centre. The red curve is for linear scaling and the blue curve for Gaussian scaling (FWHM = 40% of 5° visual span).

For Gaussian scaling (example in Figure 3b), pixels near the centre of a fixation are assigned more weight than would have been the case with linear scaling, while those further off are assigned less weight (Figure 4). For Gaussian scaling,

    p = a·e^(-(D-b)²/(2c²)), with a = 1 and b = 0.

The constant c can be expressed in terms of the full width of the distribution at half maximum (FWHM), i.e.

    FWHM = 2.3548 × c [Wikipedia].

If FWHM is defined to represent 0.4 of the maximum visual span, it follows that

    c = 0.17 × (visual span).
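The three scaling models can be written directly from the formulas above; a short sketch (function names are illustrative):

    import numpy as np

    def prob_none(d, v):
        return np.ones_like(d, dtype=float)  # p = 1 everywhere inside the visual field

    def prob_linear(d, v):
        return 1.0 - d / v                   # p = 1 - D/V for D <= V

    def prob_gaussian(d, v, fwhm_fraction=0.4):
        c = fwhm_fraction * v / 2.3548       # FWHM = 2.3548c, so c ~ 0.17V for FWHM = 0.4V
        return np.exp(-d**2 / (2 * c**2))    # p = e^(-(D-b)^2/(2c^2)) with a = 1, b = 0

    # By definition of FWHM, the Gaussian gives p = 0.5 at D = 0.2V
    # (half of FWHM = 0.4V), as shown by the blue curve in Figure 4.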
3.4 Color model

The RGB color model is an additive model in which red, green, and blue light are added together to reproduce a broad spectrum of colors. When generating heat maps, each pixel of the stimulus is assigned an RGB triplet (R, G, B) where each of the components can be an integer in the range 0 through 255.

The algorithm of Figure 1 uses a set of functions, GetRedValue, GetGreenValue and GetBlueValue (Lines 17, 18 & 19), to return the intensities for red, green and blue respectively for a specific pixel, based on its weight, according to the composite linear model of Figure 5. Other color models, such as CMYK and CIE, can also be implemented.

Figure 5: A composite linear model for the relationships between RGB components and pixel weight. (Weight for red is set to 100.)
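Figure 5's exact breakpoints are not reproduced here; the sketch below uses an assumed piecewise-linear blue-cyan-green-yellow-red ramp as a stand-in for GetRedValue, GetGreenValue and GetBlueValue, with the weight for red mapping to pure red.

    def weight_to_rgb(weight, wt_red):
        # Stand-in for Lines 17-19 of Figure 1. The colour stops are assumed;
        # Figure 5 defines the actual composite linear model.
        t = min(weight / wt_red, 1.0)          # weights above WtRed are clamped to red
        stops = [(0.00, (0, 0, 255)),          # blue at zero weight
                 (0.25, (0, 255, 255)),        # cyan
                 (0.50, (0, 255, 0)),          # green
                 (0.75, (255, 255, 0)),        # yellow
                 (1.00, (255, 0, 0))]          # red at WtRed
        for (t0, c0), (t1, c1) in zip(stops, stops[1:]):
            if t <= t1:
                f = (t - t0) / (t1 - t0)
                return tuple(round(a + f * (b - a)) for a, b in zip(c0, c1))
        return stops[-1][1]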

3.5 Handling transparency

The analyst has to select a transparency index for the heat map, T ∈ [0, 10], where 0 indicates no transparency (the stimulus is totally obscured) and 10 indicates complete transparency (the heat map is invisible). Every pixel of the original stimulus that is covered by the heat map, i.e. pixel weight > 0, is edited by scaling its red component by the transparency factor, T/10 (Figure 1, Line 20). Thereafter, 1 - T/10 of the red component of the heat map at that pixel is added to the red component of the pixel of the original stimulus (Figure 1, Line 20). The green and blue components are edited likewise (Lines 21 & 22). For Figures 3 and 6 the transparency index was set to 5, while for Figure 7 it was set to 8.
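In code, the blend of Lines 20-22 might read as follows (a sketch; stimulus and heat-map pixels as (R, G, B) tuples):

    def blend(stim_rgb, heat_rgb, t):
        # Figure 1, Lines 20-22: keep T/10 of the stimulus and add (10-T)/10
        # of the heat map; T = 0 shows only the heat map, T = 10 only the stimulus.
        return tuple((t * s + (10 - t) * h) // 10
                     for s, h in zip(stim_rgb, heat_rgb))

    blend((180, 180, 180), (255, 0, 0), 5)   # T = 5, as used for Figures 3 and 6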
Figure 6: Heat map of the same data set as in Figure 3, but with the duration for red set to 600 ms instead of allowing the algorithm to allocate the highest aggregate duration to red.
Figure 7: Heat map with contour lines at intervals of 200 ms. Duration for red = 1200 ms; Transparency = 8.
3.6 Color range

Besides the parameters for visual span, the model for scaling the weight and the transparency index, the analyst may decide to set a weight to be used for red, or choose to let the algorithm assign the highest weight of all pixels (as was done in Figure 1, Lines 4 & 12-14). A fixed value is useful if the analyst wants to determine which areas received a certain minimum amount of attention [Bojko 2009]. Figure 6 shows an example of a heat map where the duration for red was set to 600 ms instead of the 1264 ms that was determined by the algorithm and used for Figure 3.
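In terms of the sketches above, the choice amounts to which value is passed as the weight for red (weight_to_rgb is the illustrative function from Section 3.4, not part of the original algorithm):

    # Let the algorithm assign red to the highest aggregate weight (Figure 3):
    rgb_auto = weight_to_rgb(900.0, 1264.0)
    # Fix the duration for red at 600 ms to expose a minimum level of attention (Figure 6):
    rgb_fixed = weight_to_rgb(900.0, 600.0)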
3.7 Adding contours

A heat map provides a qualitative overview of viewers' attention. Although a specific color can be mapped quantitatively in terms of the selected metric of attention, it is not easy to communicate the value. Contours can be added to separate intervals in the continuous color map.

Contour lines designate the borders between different intervals of pixel weight. If two adjacent pixels belong to different intervals, one of them should be colored differently to indicate a contour point (Figure 1, Lines 23-30). Figure 7 shows a heat map of the same data as in Figure 3 with contour lines at intervals of 200 ms.

It is believed that the contour lines assist substantially towards the interpretation of heatmaps. For example, it is now clear that the pawn on d4 received about twice as much attention (average 900 ms) as the pawn on e5 (average 450 ms). The contour lines also compensate for the loss of color information if the transparency is increased to improve visibility of the original stimulus.
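A sketch of the contour test of Lines 23-30, assuming the 4-connected neighbours of each pixel are compared (the original only speaks of adjacent pixels):

    def is_contour_point(weight, x, y, interval):
        # Figure 1, Lines 23-30: a pixel lies on a contour line if its weight
        # falls in a different interval (integer division) than a neighbour's.
        band = int(weight[y][x]) // interval
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(weight) and 0 <= nx < len(weight[0]):
                if int(weight[ny][nx]) // interval != band:
                    return True              # colour this pixel brown (204, 102, 0)
        return False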

4. Summary

Although heat maps are valuable to identify qualitative trends in eye-tracking data, it is important to have control over various settings to enable sensible comparisons. A simple algorithm was presented that allows analysts to indicate the amount of peripheral vision that should be accommodated. The algorithm also allows the analyst to select the metric of attention together with an appropriate weight. The drop-off in visual attention can be scaled linearly, according to a Gaussian function, or not at all. The threshold value for red as well as the transparency can be adjusted. The addition of contour lines provides a means to visualize areas of equal attention.

References

BLIGNAUT, P.J. 2009. Fixation identification: The optimum threshold for a dispersion algorithm. Attention, Perception and Psychophysics, 71(4), 881-895.

BOJKO, A. 2009. Informative or Misleading? Heatmaps Deconstructed. In J.A. Jacko (ed.) Human-Computer Interaction, Part 1, HCII 2009, LNCS 5610, 30-39, Springer-Verlag, Berlin.

DUCHOWSKI, A.T. 2007. Eye Tracking Methodology: Theory and Practice (2nd ed.). Springer, London.

EYENAL. 2001. Eyenal (Eye-Analysis) Software Manual. Applied Science Group. Retrieved 12 June 2008 from http://www.csbmb.princeton.edu/resources/DocsAndForms/site/forms/Eye_Tracker/Eyenal.pdf

SPAKOV, O. and MINIOTAS, D. 2007. Visualization of eye gaze data using heat maps. Electronics and Electrical Engineering, 2(74), 55-58.

TOBII TECHNOLOGY AB. 2008. Tobii Studio 1.2 User Manual, version 1.0. Tobii Technology.

WIKIPEDIA. Gaussian function. Retrieved 30 November 2009 from http://en.wikipedia.org/wiki/Gaussian_function.

WOODING, D.S. 2002. Fixation maps: Quantifying eye-movement traces. Proc. ETRA 2002, ACM, 31-36.

 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem Solving
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement Study
 
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
 
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
 
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
Nakayama Estimation Of Viewers Response For Contextual Understanding Of Tasks...
 
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
Nagamatsu User Calibration Free Gaze Tracking With Estimation Of The Horizont...
 
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
Nagamatsu Gaze Estimation Method Based On An Aspherical Model Of The Cornea S...
 
Mulligan Robust Optical Eye Detection During Head Movement
Mulligan Robust Optical Eye Detection During Head MovementMulligan Robust Optical Eye Detection During Head Movement
Mulligan Robust Optical Eye Detection During Head Movement
 

Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps

1. Introduction

A fixation may be thought of as the mean x and y position coordinates measured over a minimum period of time during which the eye does not move more than a certain maximum amount [Eyenal 2001]. Therefore, the point of regard (POR), i.e. the gaze coordinates at a specific moment in time, must continuously remain within a small area for some minimum time for it to be regarded as a fixation.
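To make this definition concrete, the sketch below shows a basic dispersion-threshold fixation detector in Python. It is only an illustration under assumed values (a 100 ms minimum duration and a dispersion threshold of about 1 degree, i.e. 33 pixels in the set-up of Section 2) and an assumed input of (x, y, timestamp) samples; it is not the detector studied in [Blignaut 2009].

    # Sketch of a dispersion-threshold fixation detector.
    # Input: gaze samples as (x, y, t) tuples with t in milliseconds.
    # Thresholds are illustrative assumptions, not the paper's values.

    def detect_fixations(samples, max_disp=33, min_dur=100):
        fixations = []
        i = 0
        while i < len(samples):
            j = i
            # Grow the window until it spans at least min_dur milliseconds.
            while j + 1 < len(samples) and samples[j][2] - samples[i][2] < min_dur:
                j += 1
            if samples[j][2] - samples[i][2] < min_dur:
                break                      # not enough data left for a full window
            xs = [s[0] for s in samples[i:j + 1]]
            ys = [s[1] for s in samples[i:j + 1]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_disp:
                # Extend while the dispersion stays within the threshold.
                while j + 1 < len(samples):
                    x, y, _ = samples[j + 1]
                    if (max(max(xs), x) - min(min(xs), x)) + \
                       (max(max(ys), y) - min(min(ys), y)) > max_disp:
                        break
                    xs.append(x)
                    ys.append(y)
                    j += 1
                # Fixation centre = mean x and y; duration = elapsed time.
                fixations.append((sum(xs) / len(xs), sum(ys) / len(ys),
                                  samples[j][2] - samples[i][2]))
                i = j + 1
            else:
                i += 1
        return fixations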
Several techniques exist with which eye-tracking data can be visualized. Bar graphs, for example, may be used to show the number of fixations or visitors, or the average time spent per area of interest (AOI). Techniques also exist to overlay the original stimulus with visualizations in order to guide the analyst towards conclusions. Scan paths, for example, may be used to indicate the position of fixations with dots that overlie an image of the original stimulus. The dots may be connected with lines to indicate the temporal relationship or saccades between fixations, while the radius of the dots can, optionally, represent fixation duration.

Heat maps are semi-transparent, multi-colored layers that cover areas of higher attention with warmer colors and areas of less attention with cooler colors. Instead of highlighting the areas of higher attention with red, they can be left uncolored while the areas of lesser attention are dimmed to a degree that corresponds to the amount of attention [Tobii Technology 2008; Spakov and Miniotas 2007].

This paper focuses on heatmaps as a visualization technique for eye-tracking data. An algorithm to generate heatmaps is discussed. User-defined parameters, such as visual span, transparency, color range, and the probability for an object to be observed at a specific distance from the centre of a fixation, are included in this algorithm. The use of contour lines to visualize separate intervals in the continuous color map is proposed.

2. Experimental set-up

The stimuli used as examples in this paper were taken from a memory recall experiment during which chess players had to look at a configuration of chess pieces for 15 seconds, whereafter they had to reconstruct the configuration. The recall performance of the participants is beyond the scope of this paper and only the eye-tracking data that was captured during the fifteen seconds of exposure time was used in the visualizations.

Data was captured with a Tobii 1750 eye-tracker. The stimuli were displayed on a 17" screen with a resolution of 1024×768 at an eye-screen distance of 600 mm. The stimuli were sized so that 1° of visual angle was equivalent to about 33 pixels or 10.5 mm. The individual squares of the chess board spanned about 20 mm (2°) while each piece was displayed at about 7×8 mm (<1°).
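For reference, these magnitudes follow directly from the viewing geometry. The small Python helper below reproduces the quoted values; the millimetres-per-pixel figure is taken from the paper's own 10.5 mm per 33 pixels and is therefore only as accurate as that approximation.

    import math

    # Convert a visual angle to screen distance for the set-up above:
    # eye-screen distance 600 mm; about 10.5/33 mm per pixel.

    def angle_to_screen(degrees, distance_mm=600, mm_per_px=10.5 / 33):
        mm = 2 * distance_mm * math.tan(math.radians(degrees) / 2)
        return mm, mm / mm_per_px

    print(angle_to_screen(1))   # ~ (10.5 mm, 33 px), as stated above
    print(angle_to_screen(5))   # a 5 degree visual span is ~ 165 px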
3. Generation of heatmaps

3.1 Visual span

Visual span refers to the extremes of the visual field of a viewer, i.e. the area that can be cognitively observed with a single fixation. The visual span of a fixation is measured as the distance (in pixels) from the centre of a fixation to the furthest point where an observer might be able to perceive objects. This is not the same as the radius of a fixation, which is the distance from the centre of a fixation to the POR that is the furthest away.

In Figure 2 circles are drawn around fixation centers to indicate the visual field of highest acuity (diameter = 2°). Fixations are shown as dots, with the size of the dots being representative of the duration of a fixation on a linear scale. The 2° visual fields of Figure 2 might lead an analyst to conclude that the participant did not see the pieces on a2, b8, g1 or h2. One could rightfully ask why a participant would bother to look at g2.

Bearing in mind, however, that a person might be able to observe objects at 2.5° from the centre of the foveal zone (5° visual span) with 50% acuity [Duchowski 2007], it might be possible that the viewer perceived the white king and white pawn on g1 and h2 respectively, although he did not look at them directly. Using the algorithm in Figure 1, a heat map was generated that illustrates this possibility (Figure 3). The same data set was used as in Figure 2 but the visual span (Line 6) was set to 5°.

Figure 2. Circles around fixations to indicate the visual field of highest acuity (diameter = 2°).

 1. for each pixel of original stimulus
 2.    Weight[pixel] := 0                        // init pixel weights
 3. end for

       // user opted to let the system assign the highest pixel weight to the weight for red
 4. WtRed := 0
 5. for each fixation
 6.    for each pixel within the visual span of the current fixation
 7.       D := distance from pixel to fixation centre
          // p and W determined as described in Sections 3.2 and 3.3
 8.       p := Probability
 9.       W := FixationWeight
10.       Weight[pixel] := Weight[pixel] + (W * p)
11.    end for
12.    if Weight[pixel] > WtRed then
13.       WtRed := Weight[pixel]
14.    end if
15. end for

16. for each pixel of original stimulus with Weight[pixel] > 0
       // get the respective colour components
17.    r := GetRedValue(Weight[pixel], WtRed)
18.    g := GetGreenValue(Weight[pixel], WtRed)
19.    b := GetBlueValue(Weight[pixel], WtRed)
       // add transparency
20.    Pixel.Red := (T * Pixel.Red + (10 - T) * r) / 10
21.    Pixel.Grn := (T * Pixel.Grn + (10 - T) * g) / 10
22.    Pixel.Blu := (T * Pixel.Blu + (10 - T) * b) / 10
       // draw contours if selected
23.    if Draw contours then
24.       c := Contour interval
25.       if Weight[pixel] div c <> Weight[neighbour pixel] div c then
             // make the colour of the pixel brown
26.          Pixel.Red := 204
27.          Pixel.Grn := 102
28.          Pixel.Blu := 0
29.       end if
30.    end if
31. end for

Figure 1. Algorithm for the generation of heat maps.
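For readers who prefer executable code, the following is a rough Python transcription of the accumulation phase of Figure 1 (Lines 1-15). The fixation format (x, y, duration) and the pluggable probability function prob are assumptions of this sketch, not part of the original algorithm.

    import math

    # Sketch of Figure 1, Lines 1-15: accumulate a weight for every pixel.
    # fixations: list of (x, y, duration_ms); span: visual span in pixels;
    # prob(D): one of the scaling models of Section 3.3, returning [0, 1].

    def build_weight_map(width, height, fixations, span, prob):
        weight = [[0.0] * width for _ in range(height)]     # Lines 1-3
        wt_red = 0.0                                        # Line 4
        for fx, fy, dur in fixations:                       # Line 5
            w = dur                                         # Line 9: duration metric
            for y in range(max(0, int(fy - span)), min(height, int(fy + span) + 1)):
                for x in range(max(0, int(fx - span)), min(width, int(fx + span) + 1)):
                    d = math.hypot(x - fx, y - fy)          # Line 7
                    if d <= span:                           # Line 6
                        weight[y][x] += w * prob(d)         # Lines 8-10
                        wt_red = max(wt_red, weight[y][x])  # Lines 12-14
        return weight, wt_red

As in the pseudocode, the duration metric is hard-wired here; for the other metrics of Section 3.2, w would be set differently.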
3.2 Assigning weights to fixations and pixels

Analysts should be allowed to select the metric of attention they wish to plot in a heatmap. In other words, they should be able to select whether they want to base a heat map on the number of fixations, the duration of fixations or the number of participants who observed a target area [Bojko 2009]. In the case of fixation duration, the fixation weight (W) is set to the total duration (in ms) of the fixation (Figure 1, Line 9). For the number of fixations or participant recordings, the fixation weight is set to a value that the user may select to ensure smooth coloring, typically W=100.

Each fixation contributes to the total weight of all pixels within its visual field (Figure 1, Line 10). Since the visual fields of different fixations may overlap, it is possible that various fixations can contribute to the total weight of a specific pixel. For the duration and number of fixations, all fixations within the visual field of a pixel contribute to its weight. For the number of participant recordings, only the nearest fixation of a specific recording to a pixel contributes to the total weight of that pixel, provided that the pixel falls in the visual field of the fixation.

3.3 Probability

The probability that an observer will perceive an object during a fixation, p ∈ [0,1], decreases as the distance of the object from the centre of a fixation increases. For each pixel within the visual span of a fixation, the fixation weight is multiplied with p before adding it to the total weight of the pixel (Figure 1, Line 10).

For the algorithm proposed in this paper, a user may select from three different models for scaling the weight over the visual field, V, i.e. Linear, Gaussian and No scaling. For no scaling, p = 1 for all pixels within the visual field of a fixation, i.e. the complete weight of the fixation contributes to the total weight of all pixels within its visual field (example in Figure 3a). For linear scaling, the probability p at a distance D from the fixation centre is p = 1 - D/V, where D ≤ V.

For Gaussian scaling (example in Figure 3b), pixels near the centre of a fixation are assigned more weight than would have been the case with linear scaling, while those further off are assigned less weight (Figure 4). For Gaussian scaling, p = a·exp(-(D-b)²/(2c²)), with a = 1 and b = 0. The constant c can be expressed in terms of the full width of the distribution at half maximum (FWHM), i.e. FWHM = 2.3548 × c [Wikipedia]. If FWHM is defined to represent 0.4 of the maximum visual span, it follows that c = 0.17 × (visual span).
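The three models translate directly into code. The sketch below assumes that D and the visual span V are expressed in the same unit (pixels) and fixes FWHM at 0.4 of the visual span, as in Figure 4.

    import math

    # Probability models of Section 3.3: p as a function of distance D.

    def p_none(D, V):
        return 1.0 if D <= V else 0.0            # no scaling

    def p_linear(D, V):
        return 1.0 - D / V if D <= V else 0.0    # p = 1 - D/V

    def p_gaussian(D, V, fwhm_fraction=0.4):
        c = fwhm_fraction * V / 2.3548           # FWHM = 2.3548 c, so c ~ 0.17 V
        return math.exp(-D * D / (2 * c * c))    # a = 1, b = 0

With a 5° visual span of about 165 pixels, p_linear(82.5, 165) = 0.5, which matches the 50% acuity at 2.5° quoted in Section 3.1, and p_gaussian(33, 165) ≈ 0.5, i.e. half the maximum at half the FWHM.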
Figure 4: Graph of the probability to be observed against distance from fixation centre. The red curve is for linear scaling and the blue curve for Gaussian scaling (FWHM = 40% of 5° visual span).

Figure 3a (top): Heat map of the same data set as Figure 2. No scaling. Duration for red = 1264 ms.
Figure 3b (bottom): Heat map of the same data set as Figure 2. Gaussian scaling (FWHM = 40% of 5° visual span). Duration for red = 1264 ms.

3.4 Color model

The RGB color model is an additive model in which red, green, and blue light are added together to reproduce a broad spectrum of colors. When generating heat maps, each pixel of the stimulus is assigned an RGB triplet (R, G, B) where each one of the components can be an integer in the range 0 through 255.

The algorithm of Figure 1 uses a set of functions, GetRedValue, GetGreenValue and GetBlueValue (Lines 17, 18 & 19), to return the intensities for red, green and blue respectively for a specific pixel, based on its weight, according to the composite linear model of Figure 5. Other color models, such as CMYK and CIE, can also be implemented.

Figure 5: A composite linear model for the relationships between RGB components and pixel weight. (Weight for red is set to 100.)

3.5 Handling transparency

The analyst has to select a transparency index for the heat map, T ∈ [0,10], where 0 indicates no transparency (the stimulus is totally obscured) and 10 indicates complete transparency (the heat map is invisible). Every pixel of the original stimulus that is covered by the heat map, i.e. pixel weight > 0, is edited by multiplying the red component by T/10 (Figure 1, Line 20). Thereafter, (1 - T/10) of the red component of the heat map at that pixel is added to the red component of the pixel of the original stimulus (Figure 1, Line 20). The green and blue components are edited likewise (Lines 21 & 22). For Figures 3 and 6 the transparency index was set to 5, while for Figure 7 it was set to 8.
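Lines 16-22 of Figure 1 can be sketched as follows. Because the composite linear model of Figure 5 is only given graphically, the blue-to-green-to-red ramp below is a generic placeholder for GetRedValue, GetGreenValue and GetBlueValue; the blending step, however, follows Lines 20-22 literally.

    # Sketch of Figure 1, Lines 16-22: colour a covered pixel and blend it
    # with the stimulus. heat_colour is a placeholder ramp, not Figure 5.

    def heat_colour(w, wt_red):
        f = min(w / wt_red, 1.0)                    # 0 = coolest, 1 = red
        if f < 0.5:                                 # blue to green
            return (0, int(510 * f), int(255 * (1 - 2 * f)))
        return (int(510 * (f - 0.5)),               # green to red
                int(255 * (2 - 2 * f)), 0)

    def blend(stimulus_rgb, w, wt_red, T):
        """T in [0, 10]: 0 obscures the stimulus, 10 hides the heat map."""
        if w <= 0:
            return stimulus_rgb                     # uncovered pixels unchanged
        heat = heat_colour(w, wt_red)
        return tuple((T * s + (10 - T) * h) // 10   # Lines 20-22
                     for s, h in zip(stimulus_rgb, heat))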
Figure 6: Heat map of the same data set as Figure 3, but with the duration for red set to 600 ms instead of allowing the algorithm to allocate the highest aggregate duration to red.

Figure 7: Heat map with contour lines at intervals of 200 ms. Duration for red = 1200 ms; Transparency = 8.

3.6 Color range

Besides the parameters for visual span, the model for scaling the weight and the transparency index, the analyst may decide to set a weight to be used for red, or choose to let the algorithm assign the highest weight of all pixels (as was done in Figure 1, Lines 4 & 12-14). A fixed value is useful if the analyst wants to determine which areas received a certain minimum amount of attention [Bojko 2009]. Figure 6 shows an example of a heat map where the duration for red was set to 600 ms instead of the 1264 ms that was determined by the algorithm and used for Figure 3.

3.7 Adding contours

A heat map provides a qualitative overview of viewers' attention. Although a specific color can be mapped quantitatively in terms of the selected metric of attention, it is not easy to communicate the value. Contours can be added to separate intervals in the continuous color map.

Contour lines designate the borders between different intervals of pixel weight. If two adjacent pixels belong to different contours, one of them should be colored differently to indicate a contour point (Figure 1, Lines 23-30). Figure 7 shows a heat map of the same data as in Figure 3 with contour lines at intervals of 200 ms.

It is believed that the contour lines assist substantially towards the interpretation of heatmaps. For example, it is now clear that the pawn on d4 received about twice as much attention (average 900 ms) as the pawn on e5 (average 450 ms). The contour lines also compensate for the loss of color information if the transparency is increased to improve visibility of the original stimulus.
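A sketch of the contour test of Figure 1, Lines 23-30, is given below: a pixel is painted brown (204, 102, 0) whenever its weight falls into a different interval than that of a neighbouring pixel, with integer division playing the role of the div operator. The choice of right and lower neighbours is an assumption of this sketch; the pseudocode does not fix which neighbours are compared.

    # Sketch of Figure 1, Lines 23-30: mark contour points between intervals.

    BROWN = (204, 102, 0)

    def add_contours(weight, image, interval=200):
        """weight: 2-D grid of pixel weights; image: mutable 2-D grid of RGB
        tuples; interval: contour interval, e.g. 200 ms as in Figure 7."""
        h, w = len(weight), len(weight[0])
        for y in range(h):
            for x in range(w):
                band = int(weight[y][x] // interval)        # Line 25: div
                for ny, nx in ((y, x + 1), (y + 1, x)):     # right and lower
                    if ny < h and nx < w and int(weight[ny][nx] // interval) != band:
                        image[y][x] = BROWN                 # Lines 26-28
                        break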
4. Summary

Although heat maps are valuable to identify qualitative trends in eye-tracking data, it is important to have control over various settings to enable sensible comparisons. A simple algorithm was presented that allows analysts to indicate the amount of peripheral vision that should be accommodated. The algorithm also allows the analyst to select the metric of attention together with an appropriate weight. The drop-off in visual attention can be scaled linearly, according to a Gaussian function, or not at all. The threshold value for red as well as the transparency can be adjusted. The addition of contour lines provides a means to visualize areas of equal attention.

References

BLIGNAUT, P.J. 2009. Fixation identification: The optimum threshold for a dispersion algorithm. Attention, Perception and Psychophysics, 71(4), 881-895.

BOJKO, A. 2009. Informative or Misleading? Heatmaps Deconstructed. In J.A. Jacko (ed.), Human-Computer Interaction, Part 1, HCII 2009, LNCS 5610, 30-39. Springer-Verlag, Berlin.

DUCHOWSKI, A.T. 2007. Eye Tracking Methodology: Theory and Practice (2nd ed.). Springer, London.

EYENAL. 2001. Eyenal (Eye-Analysis) Software Manual. Applied Science Group. Retrieved 12 June 2008 from http://www.csbmb.princeton.edu/resources/DocsAndForms/site/forms/Eye_Tracker/Eyenal.pdf

SPAKOV, O. and MINIOTAS, D. 2007. Visualization of eye gaze data using heat maps. Electronics and Electrical Engineering, 2(74), 55-58.

TOBII TECHNOLOGY AB. 2008. Tobii Studio 1.2 User Manual, version 1.0. Tobii Technology.

WIKIPEDIA. Gaussian function. Retrieved 30 November 2009 from http://en.wikipedia.org/wiki/Gaussian_function.

WOODING, D.S. 2002. Fixation maps: Quantifying eye-movement traces. In Proc. ETRA 2002, ACM, 31-36.