A depth compensation method for cross-ratio based eye tracking
Flavio L. Coutinho        Carlos H. Morimoto*
Computer Science Department
University of São Paulo
{flc,hitoshi}@ime.usp.br

*The authors would like to thank Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) and Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) for their financial support.


Abstract

Traditional cross-ratio (TCR) methods project a light pattern and use invariant properties of projective geometry to estimate the gaze position. Advantages of TCR methods include robustness to large head movements, and in general they require just a one-time per-user calibration. However, the accuracy of TCR methods decays significantly for head movements along the camera optical axis, mainly due to the angular difference between the optical and visual axes of the eye. In this paper we propose a depth compensation cross-ratio (DCR) method that improves the accuracy of TCR methods under large head depth variations. Our solution compensates the angular offset using a 2D onscreen vector computed from a simple calibration procedure. The length of the 2D vector, which varies with head distance, is adjusted by a scale factor estimated from relative size variations of the corneal reflection pattern. The proposed DCR solution was compared to a TCR method using synthetic data and real data from 2 users. An average improvement of 40% was observed with synthetic data, and 8% with the real data.

CR Categories: I.2.10 [Vision and Scene Understanding]: Modeling and recovery of physical attributes—Eye Gaze Tracking; I.3.6 [Methodology and Techniques]: Interaction techniques—Gaze Interaction

Keywords: single camera eye gaze tracking, free head motion, depth estimation, depth compensation

1  Introduction

The cross-ratio (CR) method for remote eye gaze tracking presented in [Yoo et al. 2002] is an elegant solution to the limitations of the pupil-corneal reflection (PCR) technique. In the PCR technique, gaze is estimated using a mapping from the pupil position (relative to corneal reflections) to screen coordinates, and this map is obtained by a position-dependent calibration procedure. The method does not tolerate larger head motions and requires frequent system recalibration. By projecting a rectangular light pattern on the cornea surface, the CR method estimates the point of regard (POR) using invariant properties of projective geometry. This solution avoids a position-dependent mapping, allowing larger head movements. An additional advantage of the CR method is that its hardware requirements are as simple as those of the PCR technique: just a single uncalibrated camera and at least four additional light sources attached to the corners of a screen of known dimensions. It therefore avoids the complex hardware setups used by techniques that compute the 3D line of sight [Beymer and Flickner 2003; Shih and Liu 2003].

The original method proposed in [Yoo et al. 2002] has low accuracy, though, since it is based on a simple geometrical model. Solutions considering more accurate models were later proposed [Yoo and Chung 2005; Coutinho and Morimoto 2006], but they still lack performance under larger movements, especially movements in depth. In this paper we propose a depth compensation cross-ratio (DCR) method, an extension of these traditional cross-ratio (TCR) methods that performs better in the presence of large head movements.

The following section presents the CR method in more detail and identifies the limitations that motivated this work. Section 3 introduces our proposed solution, and section 4 evaluates it using synthetic data. Experimental results are shown in section 5, and section 6 concludes the paper.

2  Overview of the CR method

The idea of the CR method [Yoo et al. 2002] is to use four light sources attached to the screen corners to generate four corneal reflections. These reflections are considered projections of the light sources, and the pupil center P is considered the projection of the POR, with the cornea center E as the center of projection. When the corneal reflections and the pupil are captured by the camera, another projection takes place. Since the composition of two projections is also a projection, the cross-ratio property can be applied to reverse-project the pupil in the image plane into the POR on the screen plane.

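This composed projection can be computed without recovering any 3D quantities. The sketch below is our own minimal illustration of that idea, not the implementation of [Yoo et al. 2002]: it uses the equivalent homography formulation, putting the four corneal reflections in correspondence with the four screen corners and mapping the pupil center through the resulting projective transform. All names and coordinate values are illustrative.

    import numpy as np

    def homography_from_points(src, dst):
        # Solve the 8-DOF projective mapping (direct linear transform)
        # that takes the four source points onto the four destination points.
        A, b = [], []
        for (x, y), (u, v) in zip(src, dst):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
        h = np.linalg.solve(np.array(A, float), np.array(b, float))
        return np.append(h, 1.0).reshape(3, 3)

    def estimate_por(reflections, pupil, screen_corners):
        # Map the pupil center through the transform defined by the four
        # corneal reflection <-> screen corner correspondences.
        H = homography_from_points(reflections, screen_corners)
        p = H @ np.array([pupil[0], pupil[1], 1.0])
        return p[:2] / p[2]

    # Illustrative image coordinates (pixels) of the reflections, ordered
    # to match the screen corners (given here in cm).
    reflections = [(310.0, 240.0), (330.0, 241.0), (329.0, 256.0), (311.0, 255.0)]
    screen = [(0.0, 0.0), (34.0, 0.0), (34.0, 27.0), (0.0, 27.0)]
    print(estimate_por(reflections, (322.0, 247.0), screen))
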
2.1  Limitations

Despite its great potential, large estimation errors are observed under direct application of the basic CR method described above. The work in [Guestrin et al. 2008] presents a deep investigation of the reasons for POR estimation error, formally identifying two simplifying assumptions used by the basic CR method which are not true in practice. The first assumption is that the pupil center and the corneal reflections lie on the same plane, whereas the pupil location is defined by its distance to the cornea center and also by eye rotation. The second assumption is that the vector EP (from the cornea center to the pupil center) is the line of sight; in fact, it corresponds to the optical axis of the eye, which is deviated from the visual axis (the actual line of sight).

Attempts to improve accuracy were made in [Yoo and Chung 2005; Coutinho and Morimoto 2006]. Both works try to compensate for these assumptions by adjusting image points prior to POR calculation. In [Yoo and Chung 2005] this adjustment is done by scaling the corneal reflections. Each corneal reflection is scaled by an individual scale parameter obtained via calibration. The purpose of this scaling is to bring the pupil center and the corneal reflections to a common plane. No compensation for the angular deviation of the visual axis from the optical axis is performed, however. The work in [Coutinho and Morimoto 2006] extends this solution by compensating this angular deviation with the addition of a 2D offset to the estimated POR. Scaling of the corneal reflections is also performed, but a single parameter is used to scale all corneal reflections instead. A different calibration procedure was proposed to estimate both the scaling factor and the 2D offset.

This latter solution achieves an accuracy of about 1° of visual angle and tolerates head movements better than the PCR method. It also has the advantage of requiring just a one-time per-user calibration. However, it was also shown that this solution is not able to compensate for larger user motions, especially depth motions, when the user gets closer to or farther from the screen.

3  Depth compensation method

First, it is important to understand why changes in depth increase the estimation error. Figure 1 illustrates the origin of the POR estimation error when no axis offset compensation is applied. Equations (1) and (2) describe the lengths Δx and Δy of the offset vector connecting the uncorrected estimated POR to the actual POR; α and β are the pan and tilt angles of the optical axis relative to the visual axis of the eye, γ and δ are the pan and tilt angles of the visual axis relative to the z axis, and d is the distance of the eye to the screen along the z axis.

    Δx = [tan(α + γ) − tan(γ)] d                                  (1)

    Δy = [tan(β + δ)/cos(α + γ) − tan(δ)/cos(γ)] d                (2)

Figure 1: source of POR estimation error due to the angular offset between the visual and optical axes of the eye.

We can see from the equations that when the eye translates in a plane parallel to the screen the offset size remains the same, but when the user translates in the direction perpendicular to the screen the offset size changes linearly. The offset size also changes when the eye rotates, but a significant contribution comes from the depth translation.

By measuring the translation on the z axis, the 2D offset vector obtained in calibration (performed at distance d_ref) can be scaled to its adequate size at the current distance d. This solution is not ideal, since the orientation and length of the offset vector are a function of both eye distance and rotation, but it compensates the portion of the introduced error associated with the translation in the z axis. The corrected offset vector can thus be computed by:

    offset = (d / d_ref) offset_ref                               (3)

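The linear dependence on d is easy to verify numerically. The sketch below evaluates (1) and (2) at the calibration distance and at a farther position, and checks that the scaling in (3) reproduces the directly computed offset. The angle values are arbitrary illustrations, not measurements from the paper.

    import math

    def offset(alpha, beta, gamma, delta, d):
        # Offset components from equations (1) and (2); angles in radians,
        # distance d in cm.
        dx = (math.tan(alpha + gamma) - math.tan(gamma)) * d
        dy = (math.tan(beta + delta) / math.cos(alpha + gamma)
              - math.tan(delta) / math.cos(gamma)) * d
        return dx, dy

    a, b, g, dl = map(math.radians, (5.0, 1.5, 10.0, 8.0))  # illustrative angles
    ref = offset(a, b, g, dl, 60.0)               # offset at d_ref = 60 cm
    direct = offset(a, b, g, dl, 90.0)            # offset at d = 90 cm
    scaled = tuple(v * 90.0 / 60.0 for v in ref)  # equation (3)
    # Matches up to rounding: for fixed angles the offset is linear in d.
    print(direct, scaled)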

3.1  Estimating depth change

To be able to scale the offset as described by (3), both the reference distance d_ref and the current distance d are needed. Alternatively, by observing scale changes of the light pattern reflected on the cornea surface, it is possible to measure the ratio d/d_ref without knowing the absolute values of d and d_ref.

Figure 2: corneal reflection formation for a single light source.

Figure 2 illustrates the image formation of the corneal reflection from a single light source. Let L1 be the position of the light source, O the camera's center of projection, E the cornea center, and d the distance from O to E. Consider a camera with focal length f_cam, and the cornea as a spherical mirror with focal length f_cor. Consider also that the corneal reflection G1 is formed at the focus plane and is projected to point g1 in the image plane. The heights h and h′ (from G1 and g1, respectively, to the camera axis, where H denotes the corresponding height of L1) are given by:

    h = f_cor H / d,    h′ = f_cam h / (d − f_cor)                (4)

Substituting the expression for h into the expression for h′, we have:

    h′ = f_cam f_cor H / [(d − f_cor) d]  ≈  f_cam f_cor H / d²   (5)

Using the approximation for h′ (valid since f_cor is much smaller than d) and considering two different distances d_ref and d with respective heights h′_ref and h′, the ratio d/d_ref can be computed as follows, since h′ varies with the inverse square of d:

    d / d_ref = √(h′_ref / h′)                                    (6)

This way, by measuring size variations of the pattern formed by all four corneal reflections in the image plane, it is possible to estimate relative displacements of the cornea from an initial reference position.

For the DCR method, the same calibration procedure of [Coutinho and Morimoto 2006] is used, but in addition to the scale factor and the offset vector parameters, a reference pattern size size_ref is also computed (relative to the calibration distance d_ref, which does not need to be known). The pattern size is measured as the sum of the diagonal lengths of the quadrilateral formed by the four corneal reflections. Since the calibration procedure involves the user looking at different points on the screen, the average pattern size over all calibration points is taken as size_ref. During eye tracking, the ratio d/d_ref is computed using (6) from the currently observed pattern size and size_ref. Using (3), this ratio is then used to calculate the corrected 2D offset vector that is applied to the estimated POR.

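A minimal sketch of this tracking-time correction, under our own naming: it assumes the four reflections are ordered around the quadrilateral so that the pairs (0, 2) and (1, 3) form its diagonals, and takes the square root that follows from the inverse-square relation in (5) and (6).

    import math

    def pattern_size(g):
        # Sum of the diagonal lengths of the quadrilateral formed by the
        # four corneal reflections g[0..3], ordered around its contour.
        return math.dist(g[0], g[2]) + math.dist(g[1], g[3])

    def corrected_por(por, offset_ref, size_ref, g):
        # Scale the calibrated 2D offset by d/d_ref, estimated from the
        # current reflection pattern via equation (6), and add it to the
        # estimated POR, as in equation (3).
        ratio = math.sqrt(size_ref / pattern_size(g))  # d/d_ref
        return (por[0] + ratio * offset_ref[0],
                por[1] + ratio * offset_ref[1])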


4  Evaluation using synthetic data

To evaluate the performance of the proposed DCR method, simulations using synthetic data were carried out. Images of the Le Grand eye model [Wyszecki and Stiles 1982] generated by ray tracing were used to simulate the cornea surface, the pupil, and the corneal reflections. In this model the cornea is a sphere of 0.8 cm radius, and the pupil is a disc of 0.2 cm radius located 0.45 cm from the cornea center. A unified index of refraction of 1.336 is used for the cornea and aqueous humor. The visual axis, with an angular offset of 5.5° from the optical axis, is used to direct the eye model to the observed points. The screen is a 17 inch monitor, which corresponds to a rectangle of size 34 cm × 27 cm. The camera was positioned 2 cm below the middle point of the bottom edge of the screen, and the eye was placed along the axis perpendicular to the screen passing through its center.

The eye was positioned at distances from 30 cm to 90 cm from the screen, at intervals of 10 cm. At each position, the visual axis of the eye was directed to the center of each cell in a 7 × 7 grid covering the screen. The distance of 60 cm was used as the reference to calibrate the system, using points arranged in a 3 × 3 grid. The 9 calibration points consist of the screen center plus the corners and the edge midpoints of the external rectangle of the 7 × 7 grid, a layout reproduced in the sketch below.

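The following sketch generates the 49 test targets and extracts the 9 calibration points; it is our own illustration of the layout just described, using the screen dimensions given above.

    import numpy as np

    W, H, N = 34.0, 27.0, 7            # screen size in cm, grid dimension
    xs = (np.arange(N) + 0.5) * W / N  # x coordinates of the cell centers
    ys = (np.arange(N) + 0.5) * H / N  # y coordinates of the cell centers
    targets = [(x, y) for y in ys for x in xs]  # the 49 test points

    # The 9 calibration points are the center, corners and edge midpoints
    # of the external rectangle of the grid, i.e. cell indices 0, 3 and 6.
    idx = (0, 3, 6)
    calibration = [(xs[i], ys[j]) for j in idx for i in idx]
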
The gaze error is defined as the angular difference between the estimated POR and the actual POR. In addition to the DCR method, we also simulated the static offset CR method [Coutinho and Morimoto 2006] for comparison. Average gaze errors are presented in table 1. Observe that at the 60 cm calibration distance the results were the same for both methods, and that as the distance from the calibration position increases the difference in gaze error becomes more noticeable. At 30 cm the average angular error was reduced by 53% when using the DCR method, while at 90 cm a reduction of 63% was observed. The average reduction over all distances was 43.24%, showing that our depth estimation method can significantly improve POR estimation accuracy.

Depth estimation errors using the method proposed in section 3 are shown in Figure 3. Since d_ref = 60 cm, the distance d was estimated using (6) for each eye position. Errors were computed by subtracting the estimated distance from the actual value. For all eye distances, the depth estimation error observed was below 8%.

Figure 3: depth estimation error (in cm) for the eye at distances from 30 cm to 90 cm relative to the computer screen.

distance   static offset   DCR - dynamic offset
30 cm      3.82 ± 1.09     1.78 ± 0.65
40 cm      2.12 ± 0.65     1.00 ± 0.41
50 cm      0.95 ± 0.45     0.66 ± 0.25
60 cm      0.51 ± 0.16     0.51 ± 0.16
70 cm      0.85 ± 0.22     0.48 ± 0.20
80 cm      1.33 ± 0.20     0.54 ± 0.23
90 cm      1.73 ± 0.18     0.64 ± 0.22

Table 1: simulation results. Average errors and standard deviations in degrees.

4.1  Inaccuracies in feature position estimation

To observe how errors in feature position estimation affect the DCR method, a condition that can occur in a real gaze tracker setup, the same simulations described earlier were repeated adding perturbations to the positions of the four corneal reflections created by the screen light sources. Perturbations were uniformly distributed in the range of [−1, 1] pixel and were added independently to the x and y coordinates of each corneal reflection in the image. No perturbation was added for the calibration points; this way, the calibrated parameters were the same as those used in the previous simulation.

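A sketch of this perturbation model, assuming NumPy's random number generator (the seed is arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility

    def perturb(reflections):
        # Add noise drawn uniformly from [-1, 1] pixels, independently for
        # the x and y coordinates of each of the four corneal reflections.
        noise = rng.uniform(-1.0, 1.0, size=(4, 2))
        return np.asarray(reflections, dtype=float) + noise
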
Average errors are presented in table 2. At the 30 cm distance, the use of the DCR method reduced the average error by 52%, similar to what was observed in the first simulation. This is explained by the fact that when the eye is closer to the screen, the corneal reflection pattern in the images appears with maximum size, so the perturbation is not as significant as at farther distances. At 90 cm a 29% reduction was observed. The average reduction over all distances was 27.5%, a smaller value than in the first simulation, but even under perturbation the DCR method was able to improve the accuracy of POR estimation. Note that as the eye moves away from the calibration position, the average error of the dynamic offset method grows at a slower rate than that of the static method.

distance   static offset   DCR - dynamic offset
30 cm      3.79 ± 1.23     1.81 ± 0.77
40 cm      2.29 ± 0.78     1.19 ± 0.55
50 cm      1.21 ± 0.64     1.00 ± 0.50
60 cm      1.08 ± 0.52     1.08 ± 0.53
70 cm      1.30 ± 0.53     1.11 ± 0.52
80 cm      1.60 ± 0.73     1.10 ± 0.58
90 cm      2.15 ± 0.95     1.52 ± 0.60

Table 2: results obtained when corneal reflections were perturbed. Average errors and standard deviations in degrees.

Table 3 shows depth estimation results based on the reference distance of 60 cm. It is possible to see that the depth estimation is quite robust to noise. It should be noted, however, that as the distance increases the pattern formed by the corneal reflections gets smaller in the images, making the perturbation in the [−1, 1] pixel range more significant. This is observed in the increase of the standard deviation.

distance   average estimated distance
30 cm      32.6356 ± 0.0702
40 cm      41.4809 ± 0.1545
50 cm      50.7194 ± 0.2817
60 cm      59.9514 ± 0.4202
70 cm      69.4644 ± 0.6870
80 cm      79.0274 ± 1.1479
90 cm      88.7901 ± 1.4968

Table 3: average depth estimation results and standard deviations (in cm) for inaccurate corneal reflection detection.


5  Experimental results

To observe the real performance of the proposed method, an experiment was conducted with two subjects. Both were already familiar with gaze tracking devices, and the experiment was similar to the simulations. An important difference from the simulation setup is that a fixed focus camera was used. This way, the user-camera distance remained constant (about 50 cm) when testing different user-screen distances. In fact, it was the screen that was moved during the experiments while the user and the camera remained at a fixed position. A chin rest was used to keep the users at a fixed position, and the screen was placed at distances of 50, 60, 70 and 80 cm from the chin rest. Markers over a table were used to place the screen and the chin rest at the specified positions. Note that the actual user-screen distance is not the same as the chin rest-screen distance, as the head usually rests about 2-3 cm closer to the screen. However, as the head position is fixed, there is always a 10 cm translation between consecutive screen positions. The dimension of the screen used is 34 cm × 27 cm.

At each screen position the users looked at 49 test points (the center of each cell of a 7 × 7 grid covering the screen), and an image (640 × 480 pixels) was captured for each of them. Calibration was performed for the distance of 60 cm, and the 9 calibration points were extracted from the 49 test points in the same way as in the simulations. Because a fixed focus camera was used, keeping the user-camera distance constant, only one of the two 1/d factors behind (5) and (6) varies with the screen distance, and the ratio d/d_ref had to be computed differently:

    d / d_ref = size_ref / size                                   (7)

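The two ratio computations can be wrapped in one helper. This is our own illustration, with the choice of exponent tied to whether the camera-to-eye distance varies together with the screen distance (as in the simulations) or stays fixed (as in this experiment):

    import math

    def depth_ratio(size_ref, size, camera_distance_varies):
        # Estimate d/d_ref from reflection pattern sizes. If the camera-to-
        # eye distance varies with d, size ~ 1/d^2 and equation (6) applies;
        # with a fixed user-camera distance, size ~ 1/d as in equation (7).
        r = size_ref / size
        return math.sqrt(r) if camera_distance_varies else r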

Average angular estimation errors are shown in table 4. When the DCR method was used, an average reduction of 7.29% in the average error was observed for the first subject, while for the second subject the reduction was 8.69%. Depth compensation improves the accuracy of POR estimation compared to when no depth compensation is performed. However, the observed improvement in accuracy is not as expressive as in the simulations, even in the simulation where inaccurate feature detection was considered.

distance   static offset   DCR - dynamic offset
first subject
50 cm      0.95 ± 0.54     0.97 ± 0.47
60 cm      0.96 ± 0.43     0.96 ± 0.41
70 cm      0.80 ± 0.31     0.62 ± 0.30
80 cm      1.14 ± 0.43     1.04 ± 0.56
second subject
50 cm      1.33 ± 0.55     1.16 ± 0.53
60 cm      0.68 ± 0.44     0.67 ± 0.43
70 cm      1.19 ± 1.07     1.12 ± 1.06
80 cm      1.30 ± 1.10     1.11 ± 0.70

Table 4: average gaze estimation error and standard deviation (in degrees) for both subjects.
                                                                                  In Computer Graphics and Image Processing, 2006. SIBGRAPI
Some reasons can be pointed out for the reduced performance observed with the real setup. Inaccuracy in feature detection is one of them, as verified in section 4.1, and it may be larger than the [−1, 1] interval considered in the simulations. Another reason is that the cornea surface of the subjects may not fit the spherical model used in the simulations, especially in the region closer to the limbus. There is also the possibility of subject-dependent variations of the cornea surface. All these factors end up affecting the pattern of corneal reflections in the images, from which the scale measure used to estimate the depth variation is extracted. The method for depth variation calculation is also subject to some error, as it is an approximation that assumes paraxial rays, a condition not met in practice. The orientation of the screen relative to the camera and the eye may also affect this approximation. Finally, we noticed that the actual orientation of the 2D onscreen offset vector can also change for different eye rotations and positions; despite the fact that the offset size is adjusted dynamically, its orientation remains fixed.
                                                                                 Comput. Vis. Image Underst. 98, 1, 25–51.
6  Conclusion

This paper has presented a solution to a problem shared by current cross-ratio (CR) remote eye tracking methods: accuracy decay with changes in eye position along the optical axis of the camera. [Coutinho and Morimoto 2006] presented a solution that compensates the angular offset between the optical and visual axes of the eye with a fixed 2D offset vector applied directly to the onscreen gaze estimates. As the length of the 2D offset vector varies with head motion, particularly with depth, the use of a fixed vector does not suffice to accurately estimate the POR in the presence of large depth changes. Our proposed method dynamically adjusts the vector by a scale factor derived from a relative depth change estimate. The main contribution of our method over current CR techniques is an 8% accuracy improvement using real user data and a 40% improvement using synthetic data. Our method is very simple to compute and does not require any extra hardware. A relevant discovery from our experimental data is that eye roll (rotation about the z axis), which was not considered in our simulation model, may be a significant source of error. Future work includes further investigation of a solution that dynamically adjusts not only the length of the 2D offset vector but also its orientation.

References

BEYMER, D., AND FLICKNER, M. 2003. Eye gaze tracking using an active stereo head. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, 451–458.

COUTINHO, F., AND MORIMOTO, C. 2006. Free head motion eye gaze tracking using a single camera and multiple light sources. In SIBGRAPI '06: Proceedings of the 19th Brazilian Symposium on Computer Graphics and Image Processing, 171–178.

GUESTRIN, E. D., EIZENMAN, M., KANG, J. J., AND EIZENMAN, E. 2008. Analysis of subject-dependent point-of-gaze estimation bias in the cross-ratios method. In ETRA '08: Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, ACM, New York, NY, USA, 237–244.

SHIH, S., AND LIU, J. 2003. A novel approach to 3D gaze tracking using stereo cameras. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 1–12.

WYSZECKI, G., AND STILES, W. S. 1982. Color Science: Concepts and Methods, Quantitative Data and Formulae. John Wiley & Sons.

YOO, D. H., AND CHUNG, M. J. 2005. A novel non-intrusive eye gaze estimation using cross-ratio under large head motion. Computer Vision and Image Understanding 98, 1, 25–51.

YOO, D. H., LEE, B. R., AND CHUNG, M. J. 2002. Non-contact eye gaze tracking system by mapping of corneal reflections. In FGR '02: Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, IEEE Computer Society, Washington, DC, USA, 101.



 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Kalle
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingKalle
 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeKalle
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Kalle
 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneKalle
 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerKalle
 
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksRyan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksKalle
 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingKalle
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchKalle
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyKalle
 
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Kalle
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Kalle
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Kalle
 
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Kalle
 

Mais de Kalle (20)

Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of HeatmapsBlignaut Visual Span And Other Parameters For The Generation Of Heatmaps
Blignaut Visual Span And Other Parameters For The Generation Of Heatmaps
 
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
Zhang Eye Movement As An Interaction Mechanism For Relevance Feedback In A Co...
 
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
Yamamoto Development Of Eye Tracking Pen Display Based On Stereo Bright Pupil...
 
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
Wastlund What You See Is Where You Go Testing A Gaze Driven Power Wheelchair ...
 
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
Vinnikov Contingency Evaluation Of Gaze Contingent Displays For Real Time Vis...
 
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze ControlUrbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
Urbina Pies With Ey Es The Limits Of Hierarchical Pie Menus In Gaze Control
 
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
Urbina Alternatives To Single Character Entry And Dwell Time Selection On Eye...
 
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic TrainingTien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
Tien Measuring Situation Awareness Of Surgeons In Laparoscopic Training
 
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser OphthalmoscopeStevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
Stevenson Eye Tracking With The Adaptive Optics Scanning Laser Ophthalmoscope
 
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
Stellmach Advanced Gaze Visualizations For Three Dimensional Virtual Environm...
 
Skovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze AloneSkovsgaard Small Target Selection With Gaze Alone
Skovsgaard Small Target Selection With Gaze Alone
 
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze TrackerSan Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
San Agustin Evaluation Of A Low Cost Open Source Gaze Tracker
 
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural TasksRyan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
Ryan Match Moving For Area Based Analysis Of Eye Movements In Natural Tasks
 
Rosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem SolvingRosengrant Gaze Scribing In Physics Problem Solving
Rosengrant Gaze Scribing In Physics Problem Solving
 
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual SearchQvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
Qvarfordt Understanding The Benefits Of Gaze Enhanced Visual Search
 
Prats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement StudyPrats Interpretation Of Geometric Shapes An Eye Movement Study
Prats Interpretation Of Geometric Shapes An Eye Movement Study
 
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
Porta Ce Cursor A Contextual Eye Cursor For General Pointing In Windows Envir...
 
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
Pontillo Semanti Code Using Content Similarity And Database Driven Matching T...
 
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
Park Quantification Of Aesthetic Viewing Using Eye Tracking Technology The In...
 
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
Palinko Estimating Cognitive Load Using Remote Eye Tracking In A Driving Simu...
 

Coutinho A Depth Compensation Method For Cross Ratio Based Eye Tracking

[Methodology and Techniques]: Interaction techniques—Gaze Interaction

Keywords: single camera eye gaze tracking, free head motion, depth estimation, depth compensation

1 Introduction

The cross-ratio (CR) method for remote eye gaze tracking presented in [Yoo et al. 2002] is an elegant solution to the limitations of the pupil-corneal reflection (PCR) technique. In the PCR technique, gaze is estimated using a mapping from the pupil position (relative to the corneal reflections) to screen coordinates, and this mapping is obtained by a position dependent calibration procedure. The method does not tolerate large head motions and requires frequent system recalibration. By projecting a rectangular light pattern onto the cornea surface, the CR method estimates the point of regard (POR) using invariant properties of projective geometry. This solution avoids a position dependent mapping, allowing larger head movements. An additional advantage of the CR method is that its hardware requirements are as simple as those of the PCR technique: a single uncalibrated camera and at least four additional light sources of known geometry attached to the screen corners. It therefore avoids the complex hardware setups used by techniques that compute the 3D line of sight [Beymer and Flickner 2003; Shih and Liu 2003].

2 Overview of the CR method

The idea of the CR method [Yoo et al. 2002] is to use four light sources attached to the screen corners to generate four corneal reflections. These reflections are considered projections of the light sources, and the pupil center P is considered the projection of the POR, with the cornea center E as the center of projection. When the corneal reflections and the pupil are captured by the camera, another projection takes place. Since the composition of two projections is also a projection, the cross-ratio property can be applied to reverse-project the pupil in the image plane onto the POR in the screen plane.
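Because the composition of the two projections maps the screen plane to the image plane projectively, the reverse projection can be realized as a plane-to-plane homography from the corneal reflection quadrilateral to the screen rectangle. The sketch below is a minimal illustration of that idea, not the authors' implementation; the glint and pupil coordinates and the screen dimensions are hypothetical.

    import numpy as np

    def homography(src, dst):
        """Estimate the 3x3 homography H with dst ~ H @ src from 4 point pairs (DLT)."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        # h is the null vector of the constraint matrix (smallest singular vector).
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 3)

    # Hypothetical glint quadrilateral (image pixels) and a 34 cm x 27 cm screen.
    glints = [(310.2, 242.8), (330.9, 243.1), (331.4, 258.7), (309.8, 258.2)]
    screen = [(0.0, 0.0), (34.0, 0.0), (34.0, 27.0), (0.0, 27.0)]
    H = homography(glints, screen)

    pupil = np.array([320.5, 250.9, 1.0])  # pupil center, homogeneous image coordinates
    por = H @ pupil
    print(por[:2] / por[2])                # uncorrected on-screen POR estimate (cm)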
2.1 Limitations

Despite this great potential, large estimation errors are observed when the basic CR method described above is applied directly. The work in [Guestrin et al. 2008] presents a thorough investigation of the sources of POR estimation error, formally identifying two simplifying assumptions made by the basic CR method that do not hold in practice. The first assumption is that the pupil center and the corneal reflections lie on the same plane, whereas the pupil location is actually determined by its distance to the cornea center and by the rotation of the eye. The second assumption is that the vector $\overrightarrow{EP}$ is the line of sight; in fact, it corresponds to the optical axis of the eye, which deviates from the visual axis (the actual line of sight).

Attempts to improve accuracy were made in [Yoo and Chung 2005; Coutinho and Morimoto 2006]. Both works try to compensate for these assumptions by adjusting image points prior to POR calculation. In [Yoo and Chung 2005] the adjustment is done by scaling the corneal reflections, each one by an individual scale parameter obtained via calibration. The purpose of this scaling is to bring the pupil center and the corneal reflections to a common plane; no compensation for the angular deviation of the visual axis from the optical axis is performed, however. The work in [Coutinho and Morimoto 2006] extends this solution by compensating the angular deviation with the addition of a 2D offset to the estimated POR. Scaling of the corneal reflections is also performed, but a single parameter is used to scale all corneal reflections instead, and a different calibration procedure was proposed to estimate both the scaling factor and the 2D offset.

This latter solution achieves an accuracy of about 1° of visual angle and tolerates head movements better than the PCR method. It also has the advantage of requiring only a one-time per-user calibration. However, it was also shown that this solution is not able to compensate for larger user motions, especially depth motions, when the user moves closer to or farther from the screen.
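A minimal sketch of this TCR-style correction, under stated assumptions: a single calibrated scale factor, a fixed on-screen offset vector, and glints scaled about their centroid (one plausible anchor choice, which may differ from the original implementation). The estimate_por argument stands for the reverse projection of the earlier sketch.

    import numpy as np

    def tcr_correct(glints, pupil, alpha, offset_ref, estimate_por):
        """Single-parameter glint scaling followed by a fixed on-screen 2D offset.

        glints: (4, 2) corneal reflection image coordinates
        pupil: (2,) pupil center image coordinates
        alpha: calibrated scale factor (brings glint plane toward the pupil plane)
        offset_ref: (2,) on-screen offset vector calibrated at distance d_ref
        estimate_por: callable (glints, pupil) -> (2,) on-screen POR estimate
        """
        glints = np.asarray(glints, dtype=float)
        center = glints.mean(axis=0)                 # assumed anchor point
        scaled = center + alpha * (glints - center)  # scale the reflection pattern
        return estimate_por(scaled, pupil) + offset_ref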
3 Depth compensation method

First, it is important to understand why changes in depth increase the estimation error. Figure 1 illustrates the origin of the POR estimation error when no axis offset compensation is applied. Equations (1) and (2) give the components $\Delta x$ and $\Delta y$ of the offset vector connecting the uncorrected estimated POR to the actual POR, where $\alpha$ and $\beta$ are the pan and tilt angles of the optical axis relative to the visual axis of the eye, $\gamma$ and $\delta$ are the pan and tilt angles of the visual axis relative to the z axis, and $d$ is the distance of the eye to the screen along the z axis:

    \Delta x = \left[ \tan(\alpha + \gamma) - \tan\gamma \right] d    (1)

    \Delta y = \left[ \frac{\tan(\beta + \delta)}{\cos(\alpha + \gamma)} - \frac{\tan\delta}{\cos\gamma} \right] d    (2)

[Figure 1: source of POR estimation error due to the angular offset between the visual and optical axes of the eye.]

The equations show that when the eye translates in a plane parallel to the screen the offset size remains the same, but when the user translates along the direction perpendicular to the screen the offset size changes linearly with d. When the eye rotates the offset size also changes, but a significant contribution comes from the depth translation.

By measuring the translation along the z axis, the 2D offset vector obtained in calibration (performed at distance d_ref) can be scaled to its adequate size at the current distance d. This solution is not ideal, since the orientation and length of the offset vector are a function of both eye distance and rotation, but it compensates the portion of the introduced error associated with translation along the z axis. The corrected offset vector can thus be computed by:

    \overrightarrow{offset} = \frac{d}{d_{ref}} \, \overrightarrow{offset}_{ref}    (3)
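A direct transcription of equations (1)–(3), with a small numerical check of the linear growth with d; the angle values are assumed for illustration only.

    import numpy as np

    def axis_offset(alpha, beta, gamma, delta, d):
        """On-screen offset between uncorrected and actual POR (eqs. 1-2); radians, cm."""
        dx = (np.tan(alpha + gamma) - np.tan(gamma)) * d
        dy = (np.tan(beta + delta) / np.cos(alpha + gamma)
              - np.tan(delta) / np.cos(gamma)) * d
        return np.array([dx, dy])

    def corrected_offset(offset_ref, d, d_ref):
        """Depth-corrected 2D offset vector (eq. 3)."""
        return (d / d_ref) * offset_ref

    # A 5.5 deg horizontal axis deviation while looking 10 deg off-center:
    a, g = np.radians(5.5), np.radians(10.0)
    print(axis_offset(a, 0.0, g, 0.0, d=60.0))  # ~[6.06, 0.0] cm
    print(axis_offset(a, 0.0, g, 0.0, d=90.0))  # 1.5x larger: grows linearly with d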
3.1 Estimating depth change

To scale the offset as described by (3), both the reference distance d_ref and the current distance d are needed. Alternatively, by observing scale changes of the light pattern reflected on the cornea surface, it is possible to measure the ratio d/d_ref without knowing the absolute values of d and d_ref.

Figure 2 illustrates the image formation of a corneal reflection from a single light source. Let L1 be the position of the light source, O the camera's center of projection, E the cornea center, and d the distance from O to E. Consider a camera with focal length f_cam, and the cornea as a spherical mirror with focal length f_cor. Consider also that the corneal reflection G1 is formed at the focal plane and is projected to point g1 in the image plane. The heights h and h' (from G1 and g1, respectively, to the camera axis) are given by:

    h = \frac{f_{cor} H}{d} , \qquad h' = \frac{f_{cam} \, h}{d - f_{cor}}    (4)

[Figure 2: corneal reflection formation for a single light source.]

Substituting the expression for h into the expression for h', we have:

    h' = \frac{f_{cam} f_{cor} H}{(d - f_{cor}) \, d} \;\Rightarrow\; h' \approx \frac{f_{cam} f_{cor} H}{d^2}    (5)

Using the approximation for h' (valid since f_cor is much smaller than d) and considering two different distances d_ref and d, with respective image heights h'_ref and h', the ratio d/d_ref can be computed as:

    \frac{d}{d_{ref}} = \sqrt{ \frac{h'_{ref}}{h'} }    (6)

where the square root follows from the 1/d² dependence of h' in (5). This way, by measuring size variations of the pattern formed by the four corneal reflections in the image plane, it is possible to estimate relative displacements of the cornea from an initial reference position.

For the DCR method the same calibration procedure of [Coutinho and Morimoto 2006] is used but, in addition to the scale factor and the offset vector parameters, a reference pattern size size_ref is also computed (relative to the calibration distance d_ref, which does not need to be known). The pattern size measure is taken as the sum of the diagonal lengths of the quadrilateral formed by the four corneal reflections. Since the calibration procedure involves the user looking at different points on the screen, the average pattern size over all calibration points is taken as size_ref. During eye tracking, the ratio d/d_ref is computed using (6) from the currently observed pattern size and size_ref. Using (3), this ratio is then used to compute the corrected 2D offset vector that is applied to the estimated POR.
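A sketch of the pattern size measure and the resulting depth ratio; the glint coordinates and reference size are illustrative.

    import numpy as np

    def pattern_size(glints):
        """Sum of the diagonal lengths of the quadrilateral formed by the four glints.

        glints: (4, 2) image coordinates ordered around the quadrilateral
        (e.g. top-left, top-right, bottom-right, bottom-left).
        """
        g = np.asarray(glints, dtype=float)
        return np.linalg.norm(g[0] - g[2]) + np.linalg.norm(g[1] - g[3])

    def depth_ratio(size_now, size_ref):
        """d / d_ref from relative pattern size (eq. 6); size shrinks as 1/d^2 (eq. 5)."""
        return np.sqrt(size_ref / size_now)

    size_ref = 52.8  # average pattern size over the calibration points (pixels)
    glints = [(312.0, 244.0), (329.0, 244.0), (329.0, 257.0), (312.0, 257.0)]
    print(depth_ratio(pattern_size(glints), size_ref))  # > 1: eye farther than d_ref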
4 Evaluation using synthetic data

To evaluate the performance of the proposed DCR method, simulations using synthetic data were carried out. Images of the LeGrand eye model [Wyszecki and Stiles 1982], generated by ray tracing, were used to simulate the cornea surface, the pupil, and the corneal reflections. In this model the cornea is a sphere of 0.8 cm radius and the pupil is a disc of 0.2 cm radius located 0.45 cm from the cornea center. A unified index of refraction of 1.336 is used for the cornea and the aqueous humor. The visual axis, with an angular offset of 5.5° from the optical axis, is used to direct the eye model toward the observed points. The screen is a 17 inch monitor, corresponding to a rectangle of 34 cm × 27 cm. The camera was positioned 2 cm below the middle point of the bottom edge of the screen, and the eye was placed along the axis perpendicular to the screen passing through its center.

The eye was positioned at distances from 30 cm to 90 cm from the screen, at intervals of 10 cm. At each position the visual axis of the eye was directed to the center of each cell in a 7 × 7 grid covering the screen. The distance of 60 cm was used as the reference to calibrate the system using points arranged in a 3 × 3 grid. The 9 calibration points consist of the screen center plus the corners and the edge midpoints of the external rectangle of the 7 × 7 grid.

The gaze error is defined as the angular difference between the estimated POR and the actual POR. In addition to the DCR method, we also simulated the static offset CR method [Coutinho and Morimoto 2006] for comparison. Average gaze errors are presented in Table 1. Observe that at the 60 cm calibration distance the results are the same for both methods, and as the distance from the calibration position increases the difference in gaze error becomes more noticeable. At 30 cm, the average angular error was reduced by 53% when using the DCR method, while at 90 cm a reduction of 63% was observed. The average reduction over all distances was 43.24%, showing that our depth estimation method can significantly improve POR estimation accuracy.

    distance | static offset | DCR - dynamic offset
    30 cm    | 3.82 ± 1.09   | 1.78 ± 0.65
    40 cm    | 2.12 ± 0.65   | 1.00 ± 0.41
    50 cm    | 0.95 ± 0.45   | 0.66 ± 0.25
    60 cm    | 0.51 ± 0.16   | 0.51 ± 0.16
    70 cm    | 0.85 ± 0.22   | 0.48 ± 0.20
    80 cm    | 1.33 ± 0.20   | 0.54 ± 0.23
    90 cm    | 1.73 ± 0.18   | 0.64 ± 0.22

Table 1: simulation results. Average errors and standard deviation in degrees.

Depth estimation errors using the method proposed in section 3 are shown in Figure 3. Since d_ref = 60 cm, the distance d was estimated using (6) for each eye position. Errors were computed by subtracting the estimated distance from the actual value. For all eye distances, the observed depth estimation error was below 8%.

[Figure 3: depth estimation error (in cm) for the eye at distances from 30 cm to 90 cm relative to the computer screen.]
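The angular gaze error used above is the angle, at the eye, between the directions to the estimated and actual PORs. A short sketch, with screen coordinates in cm and an assumed eye position:

    import numpy as np

    def angular_error_deg(eye, por_est, por_true):
        """Angle (degrees) at the eye between the estimated and actual POR directions."""
        u = np.asarray(por_est, float) - eye
        v = np.asarray(por_true, float) - eye
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    eye = np.array([17.0, 13.5, 60.0])  # centered on a 34 x 27 cm screen, 60 cm away
    print(angular_error_deg(eye, [18.0, 13.5, 0.0], [17.0, 13.5, 0.0]))  # ~0.95 deg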
4.1 Inaccuracies in feature position estimation

To observe how errors in feature position estimation affect the DCR method, a condition that can occur with a real gaze tracker setup, the same simulations described earlier were repeated adding perturbations to the positions of the four corneal reflections created by the screen light sources. Perturbations were uniformly distributed in the range of [−1, 1] pixel and were added independently to the x and y coordinates of each corneal reflection in the image. No perturbation was added for the calibration points, so the calibrated parameters were the same as those used in the previous simulation.

Average errors are presented in Table 2. At the 30 cm distance the DCR method reduced the average error by 52%, similar to what was observed in the first simulation. This is explained by the fact that when the eye is closer to the screen the corneal reflection pattern appears at maximum size in the images, so the perturbation is less significant than at farther distances. At 90 cm, a 29% reduction was observed. The average reduction over all distances was 27.5%, a smaller value than in the first simulation, but even under perturbation the DCR method was able to improve the accuracy of POR estimation. Note that as the eye moves away from the calibration position, the average error for the dynamic offset method grows at a slower rate than for the static method.

    distance | static offset | DCR - dynamic offset
    30 cm    | 3.79 ± 1.23   | 1.81 ± 0.77
    40 cm    | 2.29 ± 0.78   | 1.19 ± 0.55
    50 cm    | 1.21 ± 0.64   | 1.00 ± 0.50
    60 cm    | 1.08 ± 0.52   | 1.08 ± 0.53
    70 cm    | 1.30 ± 0.53   | 1.11 ± 0.52
    80 cm    | 1.60 ± 0.73   | 1.10 ± 0.58
    90 cm    | 2.15 ± 0.95   | 1.52 ± 0.60

Table 2: results obtained when the corneal reflections were perturbed. Average errors and standard deviation in degrees.

Table 3 shows depth estimation results based on the reference distance of 60 cm. Depth estimation is quite robust to noise. It should be noted, however, that as the distance increases the pattern formed by the corneal reflections gets smaller in the images, making a perturbation in the [−1, 1] pixel range more significant. This is reflected in the increase of the standard deviation.

    distance | average estimated distance
    30 cm    | 32.6356 ± 0.0702
    40 cm    | 41.4809 ± 0.1545
    50 cm    | 50.7194 ± 0.2817
    60 cm    | 59.9514 ± 0.4202
    70 cm    | 69.4644 ± 0.6870
    80 cm    | 79.0274 ± 1.1479
    90 cm    | 88.7901 ± 1.4968

Table 3: average depth estimation results and standard deviation (in cm) under inaccurate corneal reflection detection.
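The perturbation model is straightforward to reproduce; a sketch with a hypothetical glint array:

    import numpy as np

    rng = np.random.default_rng(0)

    def perturb_glints(glints, max_px=1.0):
        """Add independent uniform noise in [-max_px, max_px] to each glint coordinate."""
        g = np.asarray(glints, dtype=float)
        return g + rng.uniform(-max_px, max_px, size=g.shape)

    glints = np.array([[310.0, 243.0], [331.0, 243.0], [331.0, 259.0], [310.0, 259.0]])
    print(perturb_glints(glints))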
5 Experimental results

To observe the real performance of the proposed method, an experiment was conducted with two subjects. Both were already familiar with gaze tracking devices, and the experiment was similar to the simulations. An important difference from the simulation setup is that a fixed focus camera was used, so the user-camera distance remained constant (about 50 cm) while different user-screen distances were tested. In fact, it was the screen that was moved during the experiments, while the user and camera remained at a fixed position. A chin rest was used to keep users at a fixed position, and the screen was placed at distances of 50, 60, 70 and 80 cm from the chin rest. Markers over a table were used to place the screen and the chin rest at the specified positions. Note that the actual user-screen distance is not the same as the chin rest-screen distance, as the head usually rests about 2-3 cm closer to the screen; however, as the head position is fixed, there is always a 10 cm translation between consecutive screen positions. The dimensions of the screen used are 34 cm × 27 cm.

At each screen position the users looked at 49 test points (the center of each cell of a 7 × 7 grid covering the screen), and an image (640 × 480 pixels) was captured for each of them. Calibration was performed for the distance of 60 cm, and the 9 calibration points were extracted from the 49 test points in the same way as in the simulations. Because the fixed focus camera keeps the user-camera distance constant, the ratio d/d_ref had to be computed differently:

    \frac{d}{d_{ref}} = \frac{size_{ref}}{size}    (7)

Average angular estimation errors are shown in Table 4. When the DCR method was used, an average reduction of 7.29% in the average error was observed for the first subject, while for the second subject the reduction was 8.69%. Depth compensation thus improves the accuracy of POR estimation compared to performing no depth compensation. However, the observed improvement is not as expressive as in the simulations, even in the simulation where inaccurate feature detection was considered.

    distance | static offset | DCR - dynamic offset
    first subject
    50 cm    | 0.95 ± 0.54   | 0.97 ± 0.47
    60 cm    | 0.96 ± 0.43   | 0.96 ± 0.41
    70 cm    | 0.80 ± 0.31   | 0.62 ± 0.30
    80 cm    | 1.14 ± 0.43   | 1.04 ± 0.56
    second subject
    50 cm    | 1.33 ± 0.55   | 1.16 ± 0.53
    60 cm    | 0.68 ± 0.44   | 0.67 ± 0.43
    70 cm    | 1.19 ± 1.07   | 1.12 ± 1.06
    80 cm    | 1.30 ± 1.10   | 1.11 ± 0.70

Table 4: average gaze estimation error and standard deviation (in degrees) for both subjects.

Some reasons can be pointed out for the reduced performance observed with the real setup. Inaccuracy in feature detection is one of them, as verified in section 4.1, and it may exceed the [−1, 1] pixel interval considered in the simulations. Another reason is that the cornea surface of the subjects may not fit the spherical model used in the simulations, especially in the region closer to the limbus. There is also the possibility of subject dependent variations of the cornea surface. All of these factors affect the pattern of corneal reflections in the images, from which the scale measure used to estimate depth variation is extracted. The depth variation calculation itself is also subject to some error, as it is an approximation that assumes paraxial rays, a condition not met in practice. The orientation of the screen relative to the camera and the eye may also affect this approximation. Finally, we noticed that the actual orientation of the 2D onscreen offset vector can also change with eye rotation and position but, although the offset length is adjusted dynamically, its orientation remains fixed.
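In this fixed-camera condition the camera magnification no longer varies with d, so only the mirror term of (4) changes and the ratio becomes linear in the pattern size, instead of following the square-root form of (6). A one-line sketch with illustrative values:

    def depth_ratio_fixed_camera(size_now, size_ref):
        """d / d_ref when only the screen (light sources) moves and the
        user-camera distance stays constant (eq. 7)."""
        return size_ref / size_now

    print(depth_ratio_fixed_camera(44.0, 52.8))  # 1.2: screen 20% farther than d_ref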
However, observed improvement in accuracy is not as expressive as was observed in simulations, even in the simulation C OUTINHO , F., AND M ORIMOTO , C. 2006. Free head motion eye where inaccurate feature detection was considered. gaze tracking using a single camera and multiple light sources. In Computer Graphics and Image Processing, 2006. SIBGRAPI Some reasons can be pointed for the reduced performance that was ’06. 19th Brazilian Symposium on, 171–178. observed using a real setup. Inaccuracy in feature detection is one of them as was verified in section 4.1 and may be larger than the G UESTRIN , E. D., E IZENMAN , M., K ANG , J. J., AND E IZEN - [−1, 1] interval considered for simulations. Another reason is that MAN , E. 2008. Analysis of subject-dependent point-of-gaze cornea surface of subjects may not fit the spherical model used in estimation bias in the cross-ratios method. In ETRA ’08: Pro- simulations, specially at the region closer to the limbus. There is ceedings of the 2008 symposium on Eye tracking research & also the possibility of subject dependent variations of the cornea applications, ACM, New York, NY, USA, 237–244. surface. All these reasons end up affecting the pattern of corneal re- S HIH , S., AND L IU , J. 2003. A novel approach to 3d gaze tracking flections in the images from which are extracted the scale measure using stereo cameras. IEEE Transactions on systems, man, and used to estimate depth variation. The method for depth variation cybernetics - PART B (Jan), 1–12. calculation is also subject to some error as it is an approximation that considers a condition of paraxial rays that are not meet in prac- W YSZECKI , G., AND S TILES , W. S. 1982. Color Science: Con- tice. The orientation of the screen relative camera and the eye may cepts and Methods, Quantitative Data and Formulae. John Wi- also affect this approximation. Finally, we noticed that actual orien- ley & Sons. tation of the 2D onscreen offset vector can also change for different eye rotations and positions but, despite the fact that the offset size YOO , D. H., AND C HUNG , M. J. 2005. A novel non-intrusive is adjusted dynamically, its orientation remains fixed. eye gaze estimation using cross-ratio under large head motion. Comput. Vis. Image Underst. 98, 1, 25–51. 6 Conclusion YOO , D. H., L EE , B. R., AND C HUNG , M. J. 2002. Non-contact eye gaze tracking system by mapping of corneal reflections. In This paper has presented a solution to the problem shared by cur- FGR ’02: Proceedings of the Fifth IEEE International Confer- rent cross-ratio (CR) remote eye tracking methods regarding accu- ence on Automatic Face and Gesture Recognition, IEEE Com- racy decay with changes in eye position along the optical axis of puter Society, Washington, DC, USA, 101. the camera. [Coutinho and Morimoto 2006] have presented a so- lution that compensates the angular offset between the optical and 140