Although heat maps are commonly provided by eye-tracking and visualization tools, they have some disadvantages and caution must be taken when using them to draw conclusions from eye-tracking results. It is argued here that visual span is an essential component of visualizations of eye-tracking data, and an algorithm is proposed that allows the analyst to set the visual span as a parameter prior to generation of a heat map.

Although the ideas are not novel, the algorithm also indicates how transparency of the heat map can be achieved and how the color gradient can be generated to represent the probability for an object to be observed within the defined visual span. The optional addition of contour lines provides a way to visualize separate intervals in the continuous color map.
2. same as the radius of a fixation, which is the distance from the centre of a fixation to the POR that is the furthest away.

In Figure 2, circles are drawn around fixation centres to indicate the visual field of highest acuity (diameter = 2°). Fixations are shown as dots, with the size of a dot representing the duration of the fixation on a linear scale. The 2° visual fields of Figure 2 might lead an analyst to conclude that the participant did not see the pieces on a2, b8, g1 or h2. One could rightfully ask why a participant would bother to look at g2.

Bearing in mind, however, that a person might be able to observe objects at 2.5° from the centre of the foveal zone (5° visual span) with 50% acuity [Duchowski 2007], it might be possible that the viewer perceived the white king and white pawn on g1 and h2 respectively, although he did not look at them directly. Using the algorithm in Figure 1, a heat map was generated that illustrates this possibility (Figure 3). The same data set was used as in Figure 2 but the visual span (Figure 1, Line 6) was set to 5°.

Figure 2. Circles around fixations to indicate the visual field of highest acuity (diameter = 2°).
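The fixation radius mentioned above (the distance from a fixation's centre to the POR furthest away) can be computed directly from the raw POR samples. The following Python sketch is illustrative only: it assumes PORs are given as (x, y) pixel pairs and takes the fixation centre to be the mean of its PORs, which is one common convention rather than a method prescribed by the paper.

```python
import math

def fixation_radius(pors):
    """Radius of a fixation: distance from the fixation centre
    (taken here as the mean of its PORs) to the POR that is the
    furthest away. POR = point of regard, as (x, y) in pixels."""
    cx = sum(x for x, _ in pors) / len(pors)
    cy = sum(y for _, y in pors) / len(pors)
    return max(math.hypot(x - cx, y - cy) for x, y in pors)
```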
1.  for each pixel of original stimulus
2.     Weight[pixel] := 0                        //Init pixel weights
3.  end for
    //User opted to let the system assign the
    //highest pixel weight to weight for red
4.  WtRed := 0;
5.  for each fixation
6.     for each pixel within the visual span of current fixation
7.        D := Distance pixel to fixation centre
          //p and W determined as described above
8.        p := Probability
9.        W := FixationWeight
10.       Weight[pixel] := Weight[pixel] + (W*p)
11.    end for
12.    if Weight[pixel] > WtRed then
13.       WtRed := Weight[pixel]
14.    end if
15. end for
16. for each pixel of original stimulus with Weight[pixel] > 0
       //Get respective colour components
17.    r := GetRedValue(Weight[pixel], WtRed)
18.    g := GetGreenValue(Weight[pixel], WtRed)
19.    b := GetBlueValue(Weight[pixel], WtRed)
       //Add transparency
20.    Pixel.Red := (T*Pixel.Red + (10-T)*r)/10
21.    Pixel.Grn := (T*Pixel.Grn + (10-T)*g)/10
22.    Pixel.Blu := (T*Pixel.Blu + (10-T)*b)/10
       //Draw contours if selected
23.    if Draw contours then
24.       c := Contour interval
25.       if Weight[pixel] div c <> Weight[neighbour pixel] div c then
             //Make the colour of the pixel brown
26.          Pixel.Red := 204
27.          Pixel.Grn := 102
28.          Pixel.Blu := 0
29.       end if
30.    end if
31. end for

Figure 1. Algorithm for generation of heat maps.

3.2 Assigning weights to fixations and pixels

Analysts should be allowed to select the metric of attention they wish to plot in a heat map. In other words, they should be able to select whether they want to base a heat map on the number of fixations, the duration of fixations or the number of participants who observed a target area [Bojko 2009]. In the case of fixation duration, the fixation weight (W) is set to the total duration (in ms) of the fixation (Figure 1, Line 9). For the number of fixations or recordings, the fixation weight is set to a value that the user may select to ensure smooth coloring, typically W=100.

Each fixation contributes to the total weight of all pixels within its visual field (Figure 1, Line 10). Since the visual fields of different fixations may overlap, it is possible that various fixations can contribute to the total weight of a specific pixel. For the duration and number of fixations, all fixations within the visual field of a pixel contribute to its weight. For the number of participant recordings, only the nearest fixation of a specific recording to a pixel contributes to the total weight of that pixel, provided that the pixel falls in the visual field of the fixation.

3.3 Probability

The probability that an observer will perceive an object during a fixation, p ∈ [0,1], decreases as the distance of the object from the centre of a fixation increases. For each pixel within the visual span of a fixation, the fixation weight is multiplied with p before adding it to the total weight of the pixel (Figure 1, Line 10).

For the algorithm proposed in this paper, a user may select from three different models for scaling the weight over the visual field, V, i.e. Linear, Gaussian and No scaling. For no scaling, p = 1 for all pixels within the visual field of a fixation, i.e. the complete weight of the fixation contributes to the total weight of all pixels within its visual field (example in Figure 3a). For linear scaling, the probability, p, at a distance D from the fixation centre is p = 1 - D/V, where D ≤ V.
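As a concrete illustration of the accumulation step (Figure 1, Lines 5-15), the loop can be sketched in Python with linear scaling. The function and parameter names are illustrative, not taken from the paper's implementation; V is assumed here to be the radius of the visual field around a fixation, in pixels, and the weight W could be, for instance, a fixation duration in ms.

```python
import math

def accumulate_weights(width, height, fixations, V, W):
    """Per-pixel weight accumulation (cf. Figure 1, Lines 1-15),
    using linear scaling p = 1 - D/V.

    fixations: list of (cx, cy) fixation centres in pixels.
    V: radius of the visual field in pixels (assumption).
    W: fixation weight, e.g. fixation duration in ms.
    Returns the weight map and the highest aggregate weight,
    which the algorithm would map to red."""
    weight = [[0.0] * width for _ in range(height)]
    wt_red = 0.0
    for (cx, cy) in fixations:
        for y in range(height):
            for x in range(width):
                d = math.hypot(x - cx, y - cy)
                if d <= V:                      # pixel inside visual field
                    weight[y][x] += W * (1.0 - d / V)
                    wt_red = max(wt_red, weight[y][x])
    return weight, wt_red
```

For a real stimulus one would loop only over the bounding box of each visual field rather than the whole image; the exhaustive loop above keeps the sketch close to the pseudocode.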
Figure 3a (top): Heat map of the same data set of Figure 2. No scaling. Duration for red = 1264 ms.

Figure 3b (bottom): Heat map of the same data set of Figure 2. Gaussian scaling (FWHM = 40% of 5° visual span). Duration for red = 1264 ms.

For Gaussian scaling (example in Figure 3b), pixels near the centre of a fixation are assigned more weight than would have been the case with linear scaling, while those further off are assigned less weight (Figure 4). For Gaussian scaling,

p = a·e^(-(D-b)²/(2c²)), with a = 1 and b = 0.

The constant c can be expressed in terms of the full width of the distribution at half maximum (FWHM), i.e. FWHM = 2.3548 × c [Wikipedia]. If FWHM is defined to represent 0.4 of the maximum visual span, it follows that c = 0.17 × (visual span).

Figure 4: Graph of the probability to be observed against distance from fixation centre (degrees). The red curve is for linear scaling and the blue curve for Gaussian scaling (FWHM = 40% of 5° visual span).

3.4 Color model

The RGB color model is an additive model in which red, green, and blue light are added together to reproduce a broad spectrum of colors. When generating heat maps, each pixel of the stimulus is assigned an RGB triplet (R, G, B) where each of the components can be an integer in the range 0 through 255.

The algorithm of Figure 1 uses a set of functions, GetRedValue, GetGreenValue and GetBlueValue (Lines 17, 18 & 19), to return the intensities for red, green and blue respectively for a specific pixel, based on its weight, according to the composite linear model of Figure 5. Other color models, such as CMYK and CIE, can also be implemented.

3.5 Handling transparency

The analyst has to select a transparency index for the heat map, T ∈ [0,10], where 0 indicates no transparency (the stimulus is totally obscured) and 10 indicates complete transparency (heat map invisible). Every pixel of the original stimulus that is covered by the heat map, i.e. pixel weight > 0, is edited by decreasing the red component by the transparency factor, T/10 (Figure 1, Line 20). Thereafter, 1 - T/10 of the red component of the heat map at that pixel is added to the red component of the pixel of the original stimulus (Figure 1, Line 20). The green and blue components are edited likewise (Lines 21 & 22). For Figures 3 and 6 the transparency index was set to 5, while for Figure 7 it was set to 8.

Figure 5: A composite linear model for the relationships between RGB components and pixel weight. (Weight for red is set to 100.)
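The Gaussian drop-off and the per-component transparency blend can be checked numerically. In the following Python sketch, distances are assumed to be in the same units as the visual span (degrees), and the function names are illustrative rather than taken from the paper.

```python
import math

def gaussian_p(D, visual_span, fwhm_fraction=0.4):
    """Gaussian probability p at distance D from the fixation centre
    (the blue curve in Figure 4): p = exp(-D^2 / (2 c^2)), with c
    derived from the full width at half maximum, FWHM = 2.3548 * c.
    By default FWHM is 40% of the visual span, as in Figure 3b."""
    c = (fwhm_fraction * visual_span) / 2.3548
    return math.exp(-D**2 / (2 * c**2))

def blend(stimulus_component, heatmap_component, T):
    """Transparency blend for one colour component (Figure 1, Line 20);
    T in [0, 10]: 0 obscures the stimulus, 10 hides the heat map."""
    return (T * stimulus_component + (10 - T) * heatmap_component) / 10
```

For a 5° visual span, FWHM = 2°, so p should fall to about 0.5 at 1° from the fixation centre, matching the blue curve in Figure 4.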
Figure 6: Heat map of the same data set of Figure 3 but with the duration for red set to 600 ms instead of allowing the algorithm to allocate the highest aggregate duration to red.

Figure 7: Heat map with contour lines at intervals of 200 ms. Duration for red = 1200 ms; Transparency = 8.

3.6 Color range

Besides the parameters for visual span, the model for scaling the weight and the transparency index, the analyst may decide to set a weight to be used for red or choose to let the algorithm assign the highest weight of all pixels (as was done in Figure 1, Lines 4 & 12-14). A fixed value is useful if the analyst wants to determine which areas received a certain minimum amount of attention [Bojko 2009]. Figure 6 shows an example of a heat map where the duration for red was set to 600 ms instead of the 1264 ms that was determined by the algorithm and used for Figure 3.

3.7 Adding contours

A heat map provides a qualitative overview of viewers' attention. Although a specific color can be mapped quantitatively in terms of the selected metric of attention, it is not easy to communicate the value. Contours can be added to separate intervals in the continuous color map.

Contour lines designate the borders between different intervals of pixel weight. If two adjacent pixels belong to different intervals, one of them should be colored differently to indicate a contour point (Figure 1, Lines 23-30). Figure 7 shows a heat map of the same data as in Figure 3 with contour lines at intervals of 200 ms. It is believed that the contour lines assist substantially towards the interpretation of heat maps. For example, it is now clear that the pawn on d4 received about twice as much attention (average 900 ms) as the pawn on e5 (average 450 ms). The contour lines also compensate for the loss of color information if the transparency is increased to improve visibility of the original stimulus.

4. Summary

Although heat maps are valuable to identify qualitative trends in eye-tracking data, it is important to have control over various settings to enable sensible comparisons. A simple algorithm was presented that allows analysts to indicate the amount of peripheral vision that should be accommodated. The algorithm also allows the analyst to select the metric of attention together with an appropriate weight. The drop-off in visual attention can be scaled linearly, according to a Gaussian function, or not at all. The threshold value for red as well as the transparency can be adjusted. The addition of contour lines provides a means to visualize areas of equal attention.

References

BLIGNAUT, P.J. 2009. Fixation identification: The optimum threshold for a dispersion algorithm. Attention, Perception and Psychophysics, 71(4), 881-895.

BOJKO, A. 2009. Informative or Misleading? Heatmaps Deconstructed. In J.A. Jacko (ed.), Human-Computer Interaction, Part 1, HCII 2009, LNCS 5610, 30-39. Springer-Verlag, Berlin.

DUCHOWSKI, A.T. 2007. Eye Tracking Methodology: Theory and Practice (2nd ed.). Springer, London.

EYENAL. 2001. Eyenal (Eye-Analysis) Software Manual. Applied Science Group. Retrieved 12 June 2008 from http://www.csbmb.princeton.edu/resources/DocsAndForms/site/forms/Eye_Tracker/Eyenal.pdf

SPAKOV, O. and MINIOTAS, D. 2007. Visualization of eye gaze data using heat maps. Electronics and Electrical Engineering, 2(74), 55-58.

TOBII TECHNOLOGY AB. 2008. Tobii Studio 1.2 User Manual, version 1.0. Tobii Technology.

WIKIPEDIA. Gaussian function. Retrieved 30 November 2009 from http://en.wikipedia.org/wiki/Gaussian_function.

WOODING, D.S. 2002. Fixation maps: Quantifying eye-movement traces. In Proc. ETRA 2002, ACM, 31-36.