Eyeglasses-free Display: Towards Correcting Visual Aberrations
with Computational Light Field Displays

Fu-Chung Huang¹,⁴   Gordon Wetzstein²   Brian A. Barsky¹,³   Ramesh Raskar²

¹Computer Science Division, UC Berkeley   ²MIT Media Lab   ³School of Optometry, UC Berkeley   ⁴Microsoft Corporation
[Figure 1 panels, left to right: conventional display (no processing); multilayer display (prefiltering) [Huang et al. 2012], high-res, low contrast; light field display, direct (no processing) [Pamplona et al. 2012], low-res, high contrast; proposed display (prefiltering), high-res, high contrast, single display panel. (Source image courtesy of flickr user dfbphotos)]
Figure 1: Vision correction with computational displays. On a conventional screen, people with optical aberrations see a blurred image (cen-
ter left). Current approaches to aberration-correcting display use multilayer prefiltering (center) or light field displays (center right). While
the former technology enhances perceived image sharpness, contrast is severely reduced. Existing light field-based solutions offer high con-
trast but require a very high angular sampling density, which significantly reduces image resolution. In this paper, we explore the convergence
of light field display optics and computational prefiltering (right), which achieves high image resolution and contrast simultaneously.
Abstract
Millions of people worldwide need glasses or contact lenses to see
or read properly. We introduce a computational display technol-
ogy that predistorts the presented content for an observer, so that
the target image is perceived without the need for eyewear. By
designing optics in concert with prefiltering algorithms, the pro-
posed display architecture achieves significantly higher resolution
and contrast than prior approaches to vision-correcting image dis-
play. We demonstrate that inexpensive light field displays driven by
efficient implementations of 4D prefiltering algorithms can produce
the desired vision-corrected imagery, even for higher-order aberra-
tions that are difficult to correct with glasses. The proposed
computational display architecture is evaluated in simulation and
with a low-cost prototype device.
CR Categories: B.4.2 [Hardware]: Input/Output and Data
Communications—Image display; H.1.2 [Information Systems]:
User/Machine Systems—Human factors; I.3.3 [Computer Graph-
ics]: Picture/Image Generation—Display algorithms;
Keywords: computational ophthalmology, displays, light fields
1 Introduction
Today, an estimated 41.6% of the US population [Vitale et al. 2009]
and more than half of the population in some Asian countries [Wong
et al. 2000] suffer from myopia. Eyeglasses have been the pri-
mary tool to correct such aberrations since the 13th century. Recent
decades have seen contact lenses and refractive surgery supplement
available options to correct for refractive errors. Unfortunately, all
of these approaches are intrusive in that the observer either has to
use eyewear or undergo surgery, which can be uncomfortable or
even dangerous.
Within the last year, two vision-correcting computational display
architectures have been introduced as non-intrusive alternatives.
Pamplona et al. [2012] proposed to use light field displays to cor-
rect the observer’s visual aberrations. This correction relies on a
2D image being shown within the observer’s focal range, outside
the physical display enclosure. Light field
displays offering such capabilities require extremely high angular
sampling rates, which significantly reduce spatial image resolution.
As a high-resolution alternative, Huang et al. [2012] proposed a
multilayer device that relies on prefiltered image content. Unfortu-
nately, the required prefiltering techniques for these particular op-
tical configurations drastically reduce image contrast. In this pa-
per, we explore combinations of viewer-adaptive prefiltering with
off-the-shelf lenslets or parallax barriers and demonstrate that the
resulting vision-correcting computational display system facilitates
significantly higher contrast and resolution as compared to previous
solutions (see Fig. 1).
While light field displays have conventionally been used for
glasses-free 3D image presentation, correcting for visual aberra-
tions of observers is a promising new direction with direct ben-
efits for millions of people. We believe that our approach is the
first to make such displays practical by providing both high reso-
lution and contrast—the two design criteria that have been driving
the display industry for the last decade. We envision future displays
as integrated systems comprising flexible optical con-
[Figure 2 row labels, top to bottom: conventional display (no processing): perceived blurred image, blurring kernel, focal range; multilayer display (prefiltered): high-res, low contrast, shared pixels with mixed colors; tailored display (ray-traced): low-res, high contrast, shared view with mixed colors, effective pixel size; proposed display (prefiltered): high-res, high contrast, effective pixel size]
Figure 2: Illustration of vision-correcting displays. Observing a
conventional 2D display outside the focal range of the eye results
in a blurred image (top). A multilayer display with prefiltered im-
age generation (second row) allows for improved image sharpness
at the cost of reduced contrast. Image contrast can be preserved
using a light field approach via lenslet arrays on the screen (third
row); this approach severely reduces image resolution. Combin-
ing light field display and computational prefiltering, as proposed
in this paper (bottom), allows for vision-correcting image display
with significantly improved image resolution and contrast.
figurations combined with sophisticated computing that allow for
different modes, such as 2D, glasses-free 3D, or vision-correcting
image display.
We explore computational displays with applications in correcting
visual aberrations of human observers. In particular, we make the
following contributions:
• We introduce a novel vision-correcting computational display
system that leverages readily available hardware components
in concert with light field prefiltering algorithms.
• We analyze vision-correcting displays in the frequency do-
main and show that light field displays provide fundamentally
more degrees of freedom than other approaches.
• We demonstrate that light field prefiltering offers benefits over
alternative vision-correcting displays: image resolution and
contrast are significantly enhanced; implementations with par-
allax barriers are brighter and lenslet-based devices have thin-
ner form factors.
• We evaluate the proposed display system using a wide range
of simulations and build a low-cost prototype device that
demonstrates correction of myopia and hyperopia in practice.
1.1 Overview of Limitations
The proposed system requires modifications to conventional dis-
play hardware and increased computational resources. Although
our displays provide significant benefits over previous work, small
tradeoffs in both resolution and contrast have to be made compared
to conventional 2D displays. We evaluate the prototype using pho-
tographs taken with aperture settings corresponding to those of the
human eye and with simulations using computational models of hu-
man perception. However, we do not run a full-fledged user study.
A commercial implementation of the proposed technology may re-
quire eye tracking, which is outside the scope of this paper. Our
academic display prototype exhibits color artifacts that are due to
moiré between the parallax barrier and the display subpixels. These
artifacts could be removed with diffusing films tailored for the sub-
pixel structure of the screen. Finally, the employed parallax barriers
reduce image brightness.
2 Related Work
Light Fields and Computational Ophthalmology Since their
introduction to computer graphics, light fields [Levoy and Hanra-
han 1996; Gortler et al. 1996] have become one of the most fun-
damental tools in computational photography. Frequency analy-
ses [Durand et al. 2005], for instance, help better understand the
theoretical foundations of ray-based light transport whereas appli-
cations range from novel camera designs, e.g. [Levin et al. 2009],
and aberration correction in light field cameras [Ng and Hanrahan
2006], to low-cost devices that allow for diagnosis of refractive
errors [Pamplona et al. 2010] or cataracts [Pamplona et al. 2011]
in the human eye. These applications are examples of computa-
tional ophthalmology, where interactive techniques are combined
with computational photography and display for medical applica-
tions.
Light Field Displays Glasses-free 3D or light field displays were
invented in the beginning of the 20th century. The two dominating
technologies are lenslet arrays [Lippmann 1908] and parallax barri-
ers [Ives 1903]. Today, a much wider range of 3D display
technologies is available, including volumetric displays [Cossairt
et al. 2007; Jones et al. 2007], multifocal displays [Akeley et al.
2004; Love et al. 2009], and super-multi-view displays [Takaki
2006]. Volumetric displays create the illusion of a virtual 3D ob-
ject floating inside the physical device enclosure; an observer can
accommodate within this volume. Multifocal displays allow for the
display of imagery on different focal planes but require either multi-
ple devices in a large form factor [Akeley et al. 2004] or varifocal
glasses to be worn [Love et al. 2009]. Super-multi-view displays
emit light fields with an extremely high angular resolution, which
is achieved by employing many spatial light modulators. Most re-
cently, near-eye light field displays [Lanman and Luebke 2013] and
compressive light field displays [Lanman et al. 2010; Wetzstein
et al. 2012; Maimone et al. 2013; Hirsch et al. 2014] have been
introduced. With the exception of [Maimone et al. 2013], none of
these technologies is demonstrated to support accommodation. A
recent survey of computational displays can be found in Masia et
al. [2013].
Building light field displays that support all depth cues, including
binocular disparity, motion parallax, and accommodation, in a thin
form factor is one of the most challenging problems in display design
today. The support for accommodation allows an observer to fo-
cus on virtual images that float at a distance to the physical device.
This capability would allow for the correction of low-order visual
aberrations, such as myopia and hyperopia. Maimone et al. [2013]
demonstrate the first single-device solution for this problem that
does not require glasses; their device form-factor is—unlike ours—
not suitable for mobile displays. We propose a different strategy:
rather than aiming for the support of all depth cues with a single
device, we employ simple parallax barriers or lenslet arrays with
a very narrow field of view to only support accommodation, but
not binocular disparity or motion parallax. That means glasses-free
3D display may not be possible with the proposed devices. How-
ever, our approach allows us to use inexpensive add-ons to exist-
ing phones or tablets, facilitating eyeglasses-free 2D image display
for observers with visual aberrations, including myopia, hyperopia,
astigmatism, and higher-order aberrations.
Vision-correcting Displays Devices tailored to correct visual
aberrations of human observers have recently been introduced.
Early approaches attempt to pre-sharpen a 2D image presented on a
conventional screen with the inverse point spread function (PSF) of
the observer’s eye [Alonso Jr. and Barreto 2003; Yellott and Yellott
2007; Archand et al. 2011]. Although these methods slightly im-
prove image sharpness, the problem itself is ill-posed. Fundamen-
tally, the PSF of an eye with refractive errors is usually a low-pass
filter—high image frequencies are irreversibly canceled out in the
optical path from display to the retina. To overcome this limitation,
Pamplona et al. [2012] proposed the use of 4D light field displays
with lenslet arrays or parallax barriers to correct visual aberrations.
For this application, the emitted light fields must provide a suffi-
ciently high angular resolution so that multiple light rays emitted
by a single lenslet enter the same pupil (see Fig. 2). This approach
can be interpreted as lifting the problem into a higher-dimensional
(light field) space, where the inverse problem becomes well-posed.
Unfortunately, conventional light field displays as used by Pam-
plona et al. [2012] are subject to a spatio-angular resolution trade-
off: an increased angular resolution decreases the spatial resolu-
tion. Hence, the viewer sees a sharp image but at a significantly
lower resolution than that of the screen. To mitigate this effect,
Huang et al. [2011; 2012] recently proposed to use multilayer dis-
play designs together with prefiltering. While this is a promising,
high-resolution approach, combining prefiltering and these particu-
lar optical setups significantly reduces the resulting image contrast.
Pamplona et al. [2012] explore the resolution limits of available
hardware to build vision-correcting displays; Huang et al. [2011;
2012] show that computation can be used to overcome the reso-
lution limits, but at the cost of decreased contrast. The approach
proposed in this paper combines both methods by employing 4D
light field prefiltering with hardware designs that have previously
only been used in a “direct” way, i.e. each screen pixel corresponds
to one emitted light ray. We demonstrate that this design allows for
significantly higher resolutions as compared to the “direct” method
because angular resolution demands are decreased. At the same
time, image contrast is significantly increased, compared to previ-
ous prefiltering approaches, because of the hardware we use.
3 Light Field Transport and Inversion
In this section, we derive the optical image formation of a light
field on the observer’s retina as well as image inversion methods.
For this purpose, we employ a two-plane parameterization [Levoy
and Hanrahan 1996; Chai et al. 2000] of the light fields emitted by
the device and inside the eye. The forward and inverse models in
this section are derived for two-dimensional “flatland” light fields
with straightforward extensions to the full four-dimensional formu-
lations.
3.1 Retinal Light Field Projection
We define the lateral position on the retina to be $x$ and that on the
pupil to be $u$ (see Fig. 3). The light field $l(x, u)$ describes the radiance
distribution inside the eye. Photoreceptors in the retina average over
radiance incident from all angles; therefore, the perceived intensity
$i(x)$ is modeled as the projection of $l$ along its angular dimension:

$$ i(x) = \int_{\Omega_u} l(x, u)\, du, \qquad (1) $$

where $\Omega_u$ is the integration domain, which is limited by the finite
pupil size. Vignetting and other angle-dependent effects are
absorbed in the light field. Assuming that the display is capable of
emitting a light field that contains spatial variation over the screen
plane $x^d$ and angular variation over the pupil plane $u^d$ allows us
to model the radiance distribution entering the eye as a light field
$l^d(x^d, u^d)$. Note that the coordinates on the pupil plane for the
light fields inside the eye and on the display are equivalent ($u \equiv u^d$).
Refractions and aberrations in the eye are modeled as a mapping
function $\phi: \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ from the spatio-angular coordinates of
$l$ to a location on the screen, such that $x^d = \phi(x, u)$. Equation 1
therefore becomes

$$ i(x) = \int_{-\infty}^{\infty} l^d(\phi(x, u), u)\, A(u)\, du. \qquad (2) $$

Here, the effect of the finite pupil diameter $r$ is a multiplication
of the light field with the pupil function $A(u) = \mathrm{rect}(u/r)$. In the
full 4D case, the rect function is replaced by a circular function
modeling the shape of the pupil.
Following standard ray transfer matrix notation [Hecht 2001], the
mapping between rays incident on the retina and those emitted by
the screen can be modeled as the combined effect of transport be-
tween retina and pupil by distance $D_e$, refraction of the lens with
focal length $f$, and transport between pupil and screen by distance
$D_o$. In matrix notation, this transformation is expressed as

$$ \begin{pmatrix} \phi(x, u) \\ u^d \end{pmatrix} = \begin{pmatrix} -\frac{D_o}{D_e} & D_o \Delta \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ u \end{pmatrix} = T \begin{pmatrix} x \\ u \end{pmatrix}, \qquad (3) $$

where $T$ is the concatenation of the individual propagation opera-
tors and $\Delta = \frac{1}{D_e} - \frac{1}{f} + \frac{1}{D_o}$. We derive Equation 3 in Supplemental
Section A. As a first-order approximation, Equation 3 only models
the defocus of the eye by considering its focal length $f$, which may
be constrained due to the observer’s limited accommodation range.
However, astigmatism and higher-order aberrations can be included
in this formulation (see Sec. 6.2).
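As a concrete check of this mapping, the transfer matrix and defocus term can be computed numerically. The sketch below is illustrative only; the distances and focal length are assumed example values, not parameters of our prototype.

```python
import numpy as np

def eye_transfer(D_e, D_o, f):
    """Build the ray-transfer matrix T of Eq. 3, mapping retinal
    coordinates (x, u) to screen/pupil coordinates (phi(x, u), u^d)."""
    delta = 1.0 / D_e - 1.0 / f + 1.0 / D_o   # defocus term (zero when in focus)
    T = np.array([[-D_o / D_e, D_o * delta],
                  [0.0, 1.0]])
    return T, delta

# Assumed values: retina-to-pupil distance 17 mm, screen at 350 mm.
D_e, D_o = 0.017, 0.35
# An eye focused exactly on the screen satisfies 1/f = 1/D_e + 1/D_o,
# so delta vanishes and phi(x, u) no longer depends on the pupil position u.
T, delta = eye_transfer(D_e, D_o, f=1.0 / (1.0 / D_e + 1.0 / D_o))
```

For the in-focus case the off-diagonal entry $D_o \Delta$ vanishes, so every ray from retinal position $x$ lands on the same screen point regardless of where it crosses the pupil; this is exactly why a focused eye sees a sharp image.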
Discretizing Equations 2 and 3 results in a linear forward model:

$$ i = P l^d, \qquad (4) $$

where the matrix $P \in \mathbb{R}^{N \times N}$ encodes the projection of the dis-
crete, vectorized 4D light field $l^d \in \mathbb{R}^N$ emitted by the display onto
the retina $i \in \mathbb{R}^N$. For the remainder of the paper, we assume that
the number of emitted light rays $N$ equals the number of discretized
locations on the retina, which makes $P$ square.
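For intuition, the flatland forward model can be discretized directly: trace each retinal sample through a set of pupil positions to the screen via Eq. 3 and accumulate the hits into $P$. The sketch below models the simplest case of a conventional 2D screen (each pixel constant in angle) with nearest-neighbor sampling; all parameter values and the sampling scheme are illustrative assumptions, not the construction used for our results.

```python
import numpy as np

def projection_matrix(n, n_u, D_e, D_o, f, pupil_r, pitch):
    """Discrete flatland forward model i = P l^d (Eq. 4) for a 2D screen:
    retinal pixel j averages the screen pixels hit by rays through
    n_u pupil samples. Nearest-neighbor sampling, no vignetting."""
    delta = 1.0 / D_e - 1.0 / f + 1.0 / D_o
    c = (n - 1) / 2.0
    # Retinal samples chosen so the in-focus image maps one-to-one to pixels.
    xs = (np.arange(n) - c) * pitch * D_e / D_o
    us = np.linspace(-pupil_r / 2, pupil_r / 2, n_u)
    P = np.zeros((n, n))
    for j, x in enumerate(xs):
        for u in us:
            xd = -(D_o / D_e) * x + D_o * delta * u   # Eq. 3 mapping
            idx = int(round(xd / pitch + c))          # nearest screen pixel
            if 0 <= idx < n:
                P[j, idx] += 1.0 / n_u                # angular average (Eq. 1)
    return P

# An in-focus eye yields a (mirrored) permutation matrix; defocus spreads
# each row into a blur kernel and degrades the conditioning of P.
P_focus = projection_matrix(16, 9, 0.017, 0.35,
                            1 / (1 / 0.017 + 1 / 0.35), 6e-3, 45e-6)
```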
[Figure 3 panels: (a) optical setup: light field display, retinal projection; (b) conventional display (in-focus); (c) conventional display (out-of-focus); (d) multilayer display (out-of-focus); (e) light field display (out-of-focus); panels (b-e) shown in the spatial and frequency domains]
Figure 3: Light field analysis for different displays. The light field emitted by a display is parameterized by its coordinates on the screen
$x^d$, on the pupil $u$, and on the retina $x$ (a). This light field propagates through the pupil and is projected into a 2D image on the retina. For
an in-focus display, the light field incident on the retina is a horizontal line in the frequency domain (b). For a displayed image outside the
accommodation range of the observer, the corresponding light field is slanted and energy is lost at some spatial frequencies (c). Multilayer
displays utilize an additional display layer to preserve all spatial frequencies (d). With light field displays, frequency loss is also avoided; the
perceived image frequencies are a combination of all spatio-angular frequencies of the incident light field (e). The ray paths in (a) show two
effects for a hyperopic eye observing a light field display. First, each photoreceptor on the retina averages over multiple neighboring pixels
on the screen (green shaded regions). Second, each pixel on the screen (e.g., $x^d_0$) emits different intensities toward different regions on the
pupil ($u_0$, $u_1$), allowing the same pixel to appear differently when observed from different locations ($x_0$, $x_1$) on the retina (red arrows).
3.2 Inverse Light Field Projection
The objective of an aberration-correcting display is to present a 4D
light field to the observer, such that a desired 2D retinal projection
is perceived. Assuming that viewing distance, pupil size, and other
parameters are known, the emitted light field can be found by opti-
mizing the following objective function:

$$ \underset{l^d}{\text{minimize}} \;\; \left\| i - P l^d \right\|^2 \quad \text{subject to} \;\; 0 \le l^d_i \le 1, \;\; i = 1 \ldots N \qquad (5) $$

Here, $i$ is the target image (given in normalized power per unit area)
and the constraints of the objective account for physically feasible
pixel states of the screen. Equation 5 can be solved using standard
bound-constrained solvers; we employ L-BFGS-B [Byrd et al.
1995]. As shown in the following frequency interpretation and in
Section 4, Equation 5 is an ill-posed problem for conventional 2D
displays. The problem becomes invertible through the use of 4D
light field displays.
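Equation 5 is a bound-constrained least-squares problem and can be sketched in a few lines. The use of L-BFGS-B follows the paper; everything else (problem size, initialization, stopping behavior) is an illustrative choice rather than our actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def light_field_prefilter(P, i_target):
    """Solve Eq. 5 with L-BFGS-B: minimize ||i - P l||^2 s.t. 0 <= l <= 1."""
    n = P.shape[1]
    residual = lambda l: P @ l - i_target
    fun = lambda l: 0.5 * np.dot(residual(l), residual(l))
    jac = lambda l: P.T @ residual(l)            # analytic gradient
    res = minimize(fun, np.full(n, 0.5), jac=jac,
                   method="L-BFGS-B", bounds=[(0.0, 1.0)] * n)
    return res.x

# Sanity check on a trivially well-posed system (P = identity): the
# prefiltered "light field" reproduces the target within the bounds.
target = np.array([0.2, 0.8, 0.5, 1.0])
l_opt = light_field_prefilter(np.eye(4), target)
```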
3.3 Frequency Domain Analysis
While Equation 5 allows for optimal display pixel states to be de-
termined, a natural question that remains is ‘Which display type is
best suited for aberration-correction?’. We attempt to answer this
question in two different ways: with a frequency analysis derived in
this section and with an analysis of the conditioning of projection
matrix P in Section 4.
Frequency analyses have become standard tools to generate an intu-
itive understanding of performance bounds of computational cam-
eras and displays (e.g., [Durand et al. 2005; Levin et al. 2009; Wet-
zstein et al. 2011]); we follow this approach. First, we note that
the coordinate transformation $T$ between display and retina can be
used to model the corresponding transformation in the frequency
domain via the Fourier linear transformation theorem:
$$ \begin{pmatrix} \omega^d_x \\ \omega^d_u \end{pmatrix} = \begin{pmatrix} -\frac{D_e}{D_o} & 0 \\ D_e \Delta & 1 \end{pmatrix} \begin{pmatrix} \omega_x \\ \omega_u \end{pmatrix} = \hat{T} \begin{pmatrix} \omega_x \\ \omega_u \end{pmatrix}, \qquad (6) $$

where $\omega_x, \omega_u$ are the spatial and angular frequencies of the light
field inside the eye, $\omega^d_x, \omega^d_u$ the corresponding frequencies on the
display, and $\hat{T} = T^{-T}$ [Ramamoorthi et al. 2007].
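The relation $\hat{T} = T^{-T}$ can be verified numerically against the matrix stated in Equation 6; the distances and focal length below are assumed example values for a slightly defocused eye.

```python
import numpy as np

D_e, D_o, f = 0.017, 0.35, 0.019       # illustrative, slightly defocused eye
delta = 1 / D_e - 1 / f + 1 / D_o
# Ray-space transform T (Eq. 3) and its frequency-domain counterpart.
T = np.array([[-D_o / D_e, D_o * delta],
              [0.0, 1.0]])
T_hat = np.linalg.inv(T).T             # Fourier linear transformation theorem
# Eq. 6 predicts this closed form for T_hat:
expected = np.array([[-D_e / D_o, 0.0],
                     [D_e * delta, 1.0]])
```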
One of the most interesting results of the frequency analysis is the
effect of the pupil outlined in Equation 2. The multiplication with
the pupil function in the spatial domain becomes a convolution in
the frequency domain whereas the projection along the angular di-
mension becomes a slicing [Ng 2005] along $\omega_u = 0$:

$$ \hat{i}(\omega_x) = (\hat{l} * \hat{A})(\omega_x, 0) = \int_{\Omega_{\omega_u}} \hat{l}(\omega_x, \omega_u)\, \hat{A}(\omega_u)\, d\omega_u = \int_{\Omega_{\omega_u}} \hat{l}^d\!\left(-\tfrac{D_e}{D_o}\omega_x,\; D_e\Delta\,\omega_x + \omega_u\right) \hat{A}(\omega_u)\, d\omega_u. \qquad (7) $$

Here, $\hat{\cdot}$ denotes the Fourier transform of a variable and $\hat{A}(\omega_u) =
\mathrm{sinc}(r\,\omega_u)$. Note that the convolution with the sinc function
accumulates higher angular frequencies along $\omega_u = 0$ before the
slicing occurs, so those frequencies are generally preserved but are
all mixed together (see Figs. 3 b-e).
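The projection-slice step underlying Equation 7 is easy to verify on a discrete light field: summing over the angular dimension and taking a 1D Fourier transform equals the $\omega_u = 0$ slice of the 2D spectrum. A minimal numerical check (random data, ignoring the pupil function):

```python
import numpy as np

rng = np.random.default_rng(0)
l = rng.random((16, 16))                 # discrete flatland light field l(x, u)
i = l.sum(axis=1)                        # angular projection, as in Eq. 1
spectrum_slice = np.fft.fft2(l)[:, 0]    # slice of the 2D spectrum at w_u = 0
projection_ft = np.fft.fft(i)            # Fourier transform of the projection
# projection_ft equals spectrum_slice (Fourier slice theorem)
```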
Conventional 2D Displays Equation 7 is the most general for-
mulation for the perceived spectrum of an emitted light field. The
light field that can actually be emitted by certain types of dis-
plays, however, may be very restricted. In a conventional 2D dis-
play, for instance, each pixel emits light isotropically in all direc-
tions, which makes the emitted light field constant in angle. Its
Fourier transform is therefore a Dirac in the angular frequencies
(i.e., $\hat{l}^d(\omega^d_x, \omega^d_u) = 0 \;\; \forall\, \omega^d_u \neq 0$).

Taking a closer look at Equation 7 with this restriction in mind
allows us to disregard all non-zero angular frequencies of the dis-
played light field and focus on $\omega^d_u = D_e\Delta\,\omega_x + \omega_u = 0$. As
illustrated in Figures 3 (b-c, bottom), the light field incident on the
retina is therefore a line $\omega_u = -D_e\Delta\,\omega_x$, which we can parame-
terize by its slope $s = -D_e\Delta$. Equation 7 simplifies to

$$ \hat{i}_{2D}(\omega_x) = \hat{l}^d\!\left(-\tfrac{D_e}{D_o}\omega_x,\; 0\right) \mathrm{sinc}(r s\, \omega_x). \qquad (8) $$

Unfortunately, sinc functions contain many zero-valued positions,
making the correction of visual aberrations with 2D displays an
ill-posed problem.
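This ill-posedness can be made concrete: with the convention $\mathrm{sinc}(t) = \sin(\pi t)/(\pi t)$, the prefilter of Equation 8 vanishes at $\omega_x = k/(rs)$ for every nonzero integer $k$, so those spatial frequencies are destroyed in the optical path and no 2D prefilter can restore them. The pupil diameter and slope below are arbitrary example values.

```python
import numpy as np

r, s = 6e-3, 0.5                  # assumed pupil diameter (m) and defocus slope
k = np.array([1, 2, 3])
omega_zero = k / (r * s)          # spatial frequencies killed by defocus
attenuation = np.sinc(r * s * omega_zero)   # np.sinc(t) = sin(pi t)/(pi t)
# attenuation is exactly zero at these frequencies: division by the
# prefilter would require dividing by zero, hence the ill-posed inverse.
```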
Correction with Multilayer Prefiltering Huang et al. [2012] pro-
posed to remedy this ill-posedness by adding an additional layer,
such as a liquid crystal display, to the device. Although stacks of
liquid crystal panels usually result in a multiplicative image forma-
tion (Wetzstein et al. [2011; 2012]), Huang et al. propose to mul-
tiplex the displayed patterns in time, which results in an additive
image formation because of perceptual averaging via persistence of
vision. As illustrated in Figure 3 (d), this changes the frequency
domain representation to the sum of two lines with different slopes.
Generalizing Equation 8 to multiple display layers results in the
following frequency representation of the retinal projection:
$$ \hat{i}_{ml}(\omega_x) = \sum_k \hat{l}^{(d,k)}\!\left(-\tfrac{D_e}{D^{(o,k)}}\omega_x,\; 0\right) \mathrm{sinc}\!\left(r s^{(k)} \omega_x\right), \qquad (9) $$

where $s^{(k)}$ is the slope of display layer $k$ and $\hat{l}^{(d,k)}$ is the light field
emitted by that layer. The offsets between display layers are chosen
so that the envelope of the differently sheared sinc functions con-
tains no zeros. While this is conceptually effective, physical con-
straints of the display, such as nonnegative pixel states and limited
dynamic range, result in a severe loss of contrast in practice.
Correction with Light Field Displays As opposed to 2D dis-
plays or multilayer displays, light field displays have the capability
to generate a continuous range of spatio-angular frequencies. Basi-
cally, this allows for multiple virtual 2D layers to be emitted simul-
taneously, each having a different slope $\tilde{s}$ (see Fig. 3 e). Following
the intuition used in Equations 8 and 9, we can write Equation 7 as

$$ \hat{i}_{lf}(\omega_x) = \int_{\Omega_{\tilde{s}}} \hat{l}(\omega_x, \tilde{s}\omega_x)\, \hat{A}(\tilde{s}\omega_x)\, d\tilde{s} = \int_{\Omega_{\tilde{s}}} \hat{l}^d\!\left(-\tfrac{D_e}{D_o}\omega_x,\; D_e\Delta\,\omega_x + \tilde{s}\omega_x\right) \mathrm{sinc}(r\tilde{s}\,\omega_x)\, d\tilde{s}. \qquad (10) $$
Although Equation 10 demonstrates that light field displays support
a wide range of frequencies, many different solutions for actually
computing them for a target image exist. Pamplona et al. [2012]
chose a naive ray-traced solution. Light field displays, however, of-
fer significantly more degrees of freedom, but these are only un-
locked by solving the full inverse light field projection problem
(Eq. 5), which we call “light field prefiltering”. We demonstrate
that this approach provides significant improvements in image res-
olution and contrast in the following sections.
4 Analysis
Whereas the previous section introduces forward and inverse image
formation and also provides an interpretation in the frequency do-
main, we analyze results and capabilities of the proposed method
in this section. First, we give an intuitive explanation for when the
problem of correcting visual aberrations is actually invertible, and
[Figure 4 panels: (a) 3×3 prefiltered light field (1st, 2nd, 3rd views shown); (b) no correction; (c) with correction; (d) 3 views and (f) 5 views of the display light field in display coordinates; (e) 3 views and (g) 5 views defocus-sheared in eye coordinates, with the pupil aperture marked; regions of the retinal light field are labeled as ill-posed or well-posed for prefiltering]
Figure 4: Light field prefiltering. The proposed prefiltering ap-
proach computes a light field (here with 3 × 3 views) that results in
a desired 2D projection on the retina of an observer. The prefiltered
light field for an example scene is shown in (a), its simulated projec-
tion on the retina in (c), and an image observed on a conventional
screen in (b). Spatio-angular frequencies of the light field are am-
plified, resulting in the desired sharpening when integrated on the
retina. Two sample “flatland” light fields with different angular
sampling rates are shown in display (d,f) and in eye (e,g) coordi-
nates. Here, the yellow boxes illustrate why 4D light field prefilter-
ing is more powerful than 2D image prefiltering: a single region on
the retina receives contributions from multiple different light field
views (e,g). Wherever that is the case, the inverse problem of light
field prefiltering is well-posed but in other regions the problem is
the same as the ill-posed problem faced with conventional 2D dis-
plays (e). (Source image courtesy of Kar Han Tan)
we follow with a formal analysis of this intuition by evaluating the
conditioning of the discrete forward model (Eq. 4). We also eval-
uate the contrast of generated imagery and analyze extensions of
lateral and axial viewing ranges for an observer.
Intuition Figure 4 (a) shows an example of a prefiltered light field
with 3 × 3 views for a sample scene. In this example, the different
views contain overlapping parts of the target image (yellow box),
allowing for increased degrees of freedom for aberration compen-
sation. Precisely these degrees of freedom are what makes the prob-
lem of correcting visual aberrations well-posed. The 4D prefiltering
does not act on a 2D image, as is the case for conventional displays,
but lifts the problem into a higher-dimensional space in which it
becomes invertible. Although the prefiltered light field (Fig. 4, a)
appears to contain amplified high frequencies in each view of the
[Figure 5 plot: condition number (×10⁴) as a function of kernel diameter (in screen pixels) and angular sampling rate (rays entering the pupil)]
Figure 5: Conditioning analysis. The light field projection matrix
corresponding to a defocused eye is ill-conditioned. With more an-
gular resolution available in the emitted light field, more degrees
of freedom are added to the system, resulting in lower condition
numbers (lower is better). The condition number of the projection
matrix is plotted for a varying defocus distance (kernel size) and
angular resolution (number of light field views). We observe that
even as few as 1.5 angular light field samples entering the pupil of
an observer decrease the condition number.
light field, the prefilter actually acts on all four dimensions simul-
taneously. When optically projected onto the retina of an observer,
all light field views are averaged, resulting in a perceived image that
has significantly improved sharpness (c) as compared to an image
observed on a conventional 2D display (b).
We illustrate this principle using an intuitive 2D light field in Fig-
ures 4 (d-g). The device emits a light field with three (d,e) and
five (f,g) views, respectively. Individual views are shown in differ-
ent colors. These are sheared in display space (d,f), because the
eye is not actually focused on the display due to the constrained
accommodation range of the observer. The finite pupil size of the
eye limits the light field entering the eye, as illustrated by the semi-
transparent white regions. Whereas we show the light fields in both
display coordinates (d,f) and eye (e,g) coordinates, the latter is more
intuitive for understanding when vision correction is possible. For
locations on the retina that receive contributions from multiple dif-
ferent views of the light field (indicated by yellow boxes in e,g),
the inverse problem is well-posed. Regions on the retina that only
receive contributions from a single light field view, however, are
optically equivalent to the conventional 2D display case, which is
ill-posed for vision correction.
Conditioning Analysis To formally verify the discussed intu-
ition, we analyze the condition number of the light field projection
matrix P (see Eqs. 4, 5). Figure 5 shows the matrix conditioning
for varying amounts of defocus and angular light field resolution
(lower condition number is better). Increasing the angular resolu-
tion of the light field passing through the observer’s pupil signifi-
cantly decreases the condition number of the projection matrix for
all amounts of defocus. This results in an interesting observation:
increasing the amount of defocus increases the condition number
but increasing the angular sampling rate does the opposite. Note
that the amount of defocus is quantified by the size of a blur kernel
on the screen (see Fig. 5).
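The defocus axis of this trend is easy to reproduce in a toy model: treat the defocused 2D-display case as a circulant box blur and evaluate its condition number as the kernel grows. This sketch is our own illustration of the trend, not the full 4D projection matrix used to generate Figure 5.

```python
import numpy as np

def box_blur_matrix(n, k):
    """Circulant box blur of odd width k acting on n screen pixels: a toy
    flatland forward model of a defocused eye viewing a 2D display."""
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(-(k // 2), k // 2 + 1):
            P[i, (i + j) % n] += 1.0 / k
    return P

# Condition number grows with the blur-kernel diameter, mirroring the
# defocus axis of Figure 5 (angular sampling, the other axis, lowers it).
conds = [np.linalg.cond(box_blur_matrix(32, k)) for k in (1, 3, 5, 7)]
```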
[Figure 6 plot and panels: PSNR as a function of angular sampling rate (rays entering the pupil) and contrast (10%-100%); panels: (a) target image; (b) conventional display, no processing and with prefiltering; (c) proposed display with prefiltering; reported qualities include 25 dB / 89% contrast, 35 dB / 17% contrast, 40 dB / 74% contrast, and 35 dB / 100% contrast]
Figure 6: Tradeoff between angular light field resolution and im-
age contrast. Top: we reconstruct a test image with different com-
binations of angular resolution and image contrast and plot the
achieved PSNR. Bottom: using prefiltering with a conventional 2D
display (b), we obtain either a low-quality but high-contrast image
or a high-quality but low-contrast image. For a light field display
with 1.5 or more prefiltered views entering the pupil (c), a similar
trend is observed but overall reconstruction quality is significantly
increased. (Snellen chart courtesy of Wikipedia user Jeff Dahl)
The condition number drops significantly after it passes the 1.3
mark, where the angular sampling enables more than one light field
view to enter the pupil. This effectively allows for angular light
field variation to be exploited in the prefiltering. As more than two
light field views pass through the pupil, the condition number keeps
decreasing but at a much slower rate. At the extreme of around 7 to
9 views, the system becomes the setup of Pamplona et al. [2012]: each
ray hits exactly one retinal pixel, but the spatio-angular trade-off
reduces the image resolution. Our light field prefiltering method lies
between these two extremes, which offer either high resolution or
high contrast, but never both simultaneously. Usually, fewer
than two views are required to maintain a sufficiently low condition
number. The experiments in Figure 5 are computed with a viewing
distance of 350mm, a pupil diameter of 6mm, and a pixel pitch of
45µm. The angular sampling rate refers to the number of light field
views entering the pupil.
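The relationship between defocus and conditioning can be illustrated with a toy one-dimensional model: for a circulant blur matrix, the singular values are the DFT magnitudes of the blur kernel, so a wider (more defocused) kernel yields a larger condition number. The following sketch is an illustrative stand-in, not the full 4D light field projection matrix used in our experiments:

```python
import numpy as np

def circulant_blur_cond(sigma, n=64):
    """Condition number of a circulant 1D Gaussian blur matrix.

    Toy stand-in for the light field projection matrix: the singular
    values of a circulant matrix are the DFT magnitudes of its kernel.
    """
    k = np.arange(n)
    d = np.minimum(k, n - k)                  # wrap-around distance
    kernel = np.exp(-0.5 * (d / sigma) ** 2)  # Gaussian defocus kernel
    kernel /= kernel.sum()
    mag = np.abs(np.fft.fft(kernel))
    return mag.max() / mag.min()
```

A wider kernel produces a markedly larger condition number, mirroring the trend observed for the projection matrix.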
Image Contrast Optimization At the defocus level shown in
Figure 6 (a, bottom), naively applying the nonnegative constraint
[Figure 7 plot: image quality in PSNR (25–35 dB) versus off-axis movement (0–70 mm), comparing prefiltering without optimization against the off-axis-optimized variant; inset views report 35.57 dB (no movement, without optimization), 35.24 dB (no movement, off-axis optimized), 31.22 dB (off-axis, without optimization), and 34.64 dB (off-axis, optimized).]
Figure 7: Compensating for a range of lateral viewpoints.
Aberration-free image display is possible when the relative posi-
tion of the eye with respect to the display is known. The green plot
evaluates image degradation for viewpoints that deviate laterally
from the sweetspot. Only slight ringing is visible and, due to peri-
odic viewing zones of the employed parallax barrier display, image
quality varies in a periodic manner (top, zoom-in). We can account
for a range of perspectives in the compensation, ensuring high im-
age quality for a wider viewing range (blue plot). The columns on
the bottom right show on-axis and off-axis views with and without
accounting for a range of lateral viewpoints in the optimization.
(Source image courtesy of Wikipedia user Lexaxis7)
in Equation 5 results in additional artifacts, as shown in (b, top). Al-
ternatively, we can shift and scale the target image before solving
the system, effectively moving it into the range space of the projection
matrix. Although this is a user-defined process, observed image
quality can be enhanced. In particular, Equation 5 can be modified as

    minimize_{l_d}   ‖ (i + b)/(1 + b) − P l_d ‖²
    subject to       0 ≤ l_d^i ≤ 1,   for i = 1 … N          (11)

where b is a user-specified bias term that reduces the image contrast
to 1/(1 + b).
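Equation 11 can be sketched with a simple bound-constrained solver. The Python sketch below substitutes projected gradient descent for the LBFGSB solver used in our implementation, and P is a hypothetical toy matrix standing in for the precomputed light field projection matrix:

```python
import numpy as np

def prefilter(P, target, b=0.2, iters=500):
    """Solve Eq. 11: min ||(i + b)/(1 + b) - P l_d||^2, 0 <= l_d <= 1.

    Projected gradient descent as a stand-in for a bound-constrained
    solver such as LBFGSB; b reduces the target contrast to 1/(1 + b).
    """
    biased = (target + b) / (1.0 + b)         # scale into the range space
    lr = 1.0 / np.linalg.norm(P, 2) ** 2      # step size from Lipschitz bound
    x = np.full(P.shape[1], 0.5)              # start at mid-gray
    for _ in range(iters):
        grad = P.T @ (P @ x - biased)
        x = np.clip(x - lr * grad, 0.0, 1.0)  # project onto [0, 1]^N
    return x
```

With the step size set to the reciprocal of the squared spectral norm, each iteration is guaranteed not to increase the objective.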
We plot achieved image quality measured in PSNR for all contrast
levels at various angular sampling rates in Figure 6 (top). With
a conventional display, prefiltering results in ringing artifacts (b)
because the inverse problem is ill-conditioned. Artificially reducing
the image contrast mitigates the artifacts but makes the text illegible
(b, bottom). A light field display makes the inverse problem well-
posed, allowing for high quality prefiltering (c). The pixel pitch of
the experiment shown in Figure 6 is 96µm; other parameters are the
same as in Figure 5. Please note that the contrast bias term b may
require manual tuning for each experiment.
Extending Lateral and Axial Viewing Range We envision most
future display systems that incorporate vision-correcting technolo-
gies to use eye tracking. In such devices, the projection matrix (see
Eq. 4) is dynamically updated for the perspective of the observer.
For applications in emerging near-eye displays [Lanman and Lue-
bke 2013], on the other hand, the proposed technology would not
require eye-tracking because the relative position between eye and
display is fixed. Within the context of this paper, we assume that
eye tracking is either available or the relative position between dis-
play and eye is fixed.
[Figure 8 panels: rows "without optimization" and "optimized for extended range"; columns with defocus error of −20 mm, 0 mm, and +20 mm; PSNR 20 / 41 / 21 dB without optimization versus 29 / 30 / 27 dB with the extended-range optimization.]
Figure 8: Accounting for a range of viewing distances. Top row:
when considering a fixed viewing distance, defocus errors are com-
pensated at that exact distance (top center) but image quality de-
grades when the observer moves forward or back (top left and
right). The proposed method can account for a range of view-
ing distances (bottom row), which slightly degrades quality at the
sweetspot but significantly improves all other distances. (Source
image courtesy of Kar Han Tan)
Nevertheless, we evaluate image degradation for viewpoints that
are at a lateral distance from the target viewpoint in Figure 7. Such
shifts could be caused by imprecise tracking or quickly moving ob-
servers. We observe slight image degradation in the form of ring-
ing. However, even the degraded image quality is above 30 dB in
this experiment and varies in a periodic manner (Fig. 7, top: zoom-
in). This effect can be explained by the periodic viewing zones
that are created by the employed parallax barrier display; a similar
effect would occur for lenslet-based light field displays. We can
account for a range of lateral viewpoints by changing the matrix
in Equation 11 to P = [P_1^T … P_M^T]^T, where each P_i is the
projection matrix of one of the M perspectives. Although this approach
slightly degrades image quality for the central sweetspot, a high im-
age quality (approx. 35 dB) is achieved for a much wider range of
viewpoints. The lateral range tested in Figure 7 is large enough to
demonstrate successful aberration-correction for binocular vision,
assuming that the inter-ocular distance is approx. 65 mm. Please
also refer to additional experiments in the supplemental video.
We also show results for a viewer moving along the optical axis
in Figure 8. Just like for lateral motion, we can account for vari-
able distances by stacking multiple light field projection matrices
into Equation 11 with incremental defocus distances. The resulting
equation system becomes over-constrained, so the solution attempts
to satisfy all viewing distances equally well. This results in slight
image degradations for the sweetspot, but significantly improves
image quality for all other viewing distances.
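The stacking strategy for both lateral and axial robustness can be sketched as follows; the per-viewpoint matrices P_1..P_M are hypothetical stand-ins for matrices computed at different eye positions or distances:

```python
import numpy as np

def prefilter_multiview(P_list, target, b=0.2, iters=500):
    """Jointly prefilter for several viewpoints or viewing distances.

    P_list holds hypothetical projection matrices P_1..P_M, one per
    perspective; stacking them as P = [P_1^T ... P_M^T]^T asks a single
    display pattern to satisfy all viewpoints in a least-squares sense.
    """
    P = np.vstack(P_list)                     # over-constrained system
    biased = (target + b) / (1.0 + b)
    i_all = np.tile(biased, len(P_list))      # same target at every view
    lr = 1.0 / np.linalg.norm(P, 2) ** 2
    x = np.full(P.shape[1], 0.5)
    for _ in range(iters):
        x = np.clip(x - lr * (P.T @ (P @ x - i_all)), 0.0, 1.0)
    return x
```

Because the stacked system is over-constrained, the solution balances all viewpoints rather than perfecting any single one, matching the behavior shown in Figures 7 and 8.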
5 Implementation and Results
The proposed aberration-correcting display can be implemented us-
ing most light field display technologies. For the purpose of this
paper, we demonstrate the feasibility of our techniques with a par-
allax barrier display [Ives 1903], because the required hardware is
readily available and inexpensive. Please note that the proposed
displays are not limited to this particular architecture, although the
image formation (Eq. 4) has to be adjusted for any particular setup.
Hardware The prototype is shown in Figure 9. A pinhole-based
parallax barrier mask is printed with 5080 DPI on a transparency
with a Heidelberg Herkules imagesetter (www.pageworks.com). To
optimize light throughput and avoid diffraction, the pinholes have
a size of 75 microns each and are spaced 390 microns apart. This
mask is mounted at an offset of 5.4 mm in front of a conventional
Figure 9: Prototype display. We construct an aberration-
correcting display using parallax barriers. The barrier mask con-
tains a pinhole array (left) that is mounted at a slight offset in front
of an Apple iPod touch 4 screen (lower right). The display emits
a light field with a high-enough angular resolution so that at least
two views enter the pupil of a human observer. This effect is il-
lustrated on the top right: multiple Arabic numerals are emitted in
different viewing directions; the finite pupil size then creates an av-
erage of multiple different views on the retina (here simulated with
a camera).
2D screen using a clear acrylic spacer. The screen is an Apple iPod
touch 4th generation display with a pixel pitch of 78 microns (326
PPI) and a total resolution of 960 × 640 pixels.
The dimensions of our prototype allow 1.66 light field views to
enter a human pupil with a diameter of 6 mm at a distance of
25 cm. Higher-resolution panels are commercially available and
would directly improve spatial and angular resolution and also fa-
cilitate larger viewing distances.
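The number of views entering the pupil follows from simple similar-triangle geometry: a display pixel behind a pinhole is magnified onto the eye plane by the ratio of viewing distance to barrier offset. With the prototype parameters above, this reproduces the 1.66 views quoted in the text:

```python
def views_in_pupil(pixel_pitch_mm, offset_mm, view_dist_mm, pupil_mm):
    """Light field views entering the pupil of a pinhole parallax barrier.

    One display pixel behind a pinhole projects to a view of width
    pixel_pitch * (view_dist / offset) at the eye plane.
    """
    view_spacing_mm = pixel_pitch_mm * view_dist_mm / offset_mm
    return pupil_mm / view_spacing_mm

# Prototype: 78 um pixels, 5.4 mm offset, 25 cm viewing distance, 6 mm pupil
print(round(views_in_pupil(0.078, 5.4, 250.0, 6.0), 2))  # → 1.66
```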
Software The light field prefiltering algorithm is implemented in
Matlab on a PC with a 2.7GHz 2-core CPU and 8GB of RAM. The
projection matrix is precomputed in about 3 minutes with radiances
sampling the pupil at 20 rays/mm, resulting in approx. 11,300 ef-
fective rays per retinal pixel. We use the non-negative least squares
solver package LBFGSB [Byrd et al. 1995] to solve Equation 11
in about 20 seconds for each image shown on the prototype. The
projection matrix only needs to be computed once for each viewing
distance and we believe that an optimized GPU implementation of
the solver could achieve real-time framerates.
Photographs of Prototype We show a variety of results cap-
tured from our prototype display in Figure 10 (center right column).
These photographs are captured with a Canon T3i DSLR camera
equipped with a 50 mm lens at f/8. The display is placed at a dis-
tance of 25 cm to the camera. The camera is focused at 38 cm,
placing the screen 13 cm away from the focal plane. This camera
closely resembles a -6D hyperopic human eye.
Figure 10 (right column) shows the simulated results corrected with
our techniques. The results captured from the prototype (Fig. 10,
third column) closely resemble these simulations but contain minor
artifacts that are due to moiré between the barrier mask and
the display pixels. Compared to conventional 2D images shown
on the screen (Fig. 10, first column), image sharpness is signifi-
cantly improved without requiring the observer to wear glasses. We
also compare our approach to the method proposed by Pamplona et
al. [2012] for the same display resolution and spatio-angular trade-
off (Fig. 10, second column). In essence, their approach uses the
same display setup as ours but applies a direct solution rather than
the proposed prefilter. Our approach outperforms their method and allows
for significantly increased resolution.
proposed method
simulation
conventional display
photograph
[Pamplona et al.2012]
photograph
proposed method
photograph
Figure 10: Photographs of prototype display. The hyperopic cam-
era simulates a human pupil with a diameter of 6 mm at a distance
of 25 cm to the screen. Focused at 38 cm, images shown on a con-
ventional screen are blurred (first column). While previous meth-
ods theoretically facilitate increased image sharpness (second col-
umn), achievable resolution is fundamentally limited by the spatio-
angular resolution tradeoff of the required light field display. Light
field prefiltering, as proposed in this paper, allows for significantly
increased resolutions (third column). The prototype closely resem-
bles simulations (right column). (From top, source images courtesy
of dfbphotos (flickr), Vincent van Gogh, Houang Stephane (flickr),
JFXie (flickr), Jameziecakes (flickr), Paul Cezanne, Henri Matisse)
6 Evaluation
6.1 Visual Performance
We evaluate achieved quality in Figure 11. For this experiment,
we simulate a 10 inch tablet with a 300 PPI panel and the pinhole
parallax barrier with 6.5 mm offset. The tablet is held at a dis-
tance of 30 cm and viewed with a -6.75D hyperopic eye; images
are shown on the center of the display in a 10.8 cm × 10.8 cm area.
For each example, we compare our approach with the direct light
[Figure 11 image grid: columns show the Target Image, Conventional Display (out of focus), Multilayer Display [Huang et al. 2012], Tailored Display [Pamplona et al. 2012], and Proposed Display; each panel is annotated with its contrast and QMOS score, and the bottom row color-codes detection probability from 0% to 100%.]
Figure 11: Evaluation and comparison to previous work. We compare simulations of conventional and vision-correcting image display
qualitatively and quantitatively using contrast and quality mean opinion score (QMOS) error metrics. A conventional out-of-focus display
always appears blurred (second column). Multilayer displays with prefiltering improve image sharpness but at a much lower contrast (third
column). Light field displays without prefiltering require high angular resolutions, hence provide a low spatial resolution (fourth column).
The proposed method combines prefiltering and light field display to optimize image contrast and sharpness (right column). The QMOS error
metric is a perceptually linear metric, predicting perceived quality for a human observer. We also plot maps that illustrate the probability of
an observer detecting the difference of a displayed image to the target image (bottom row). Our method performs best in most cases. (Source
images courtesy of Jameziecakes (flickr), Kar Han Tan, Mostaque Chowdhury (flickr), and Thomas Quine (flickr))
field approach and multilayer prefiltering. The target contrast for
prefiltering methods is manually adjusted to achieve the best PSNR
for each example.
Contrast Metric Prefiltering modulates the image content by
enhancing weaker frequencies. Without exploiting the full degrees
of freedom of the light field, results obtained with multilayer
prefiltering suffer from extreme contrast loss, here measured as
Michelson contrast. This is defined as (I_max − I_min)/(I_max + I_min),
where I_max and I_min are the maximum and minimum intensities in the
image, respectively. Light field predistortion relies on resampling
the light field rather than modifying its content, so contrast is not
sacrificed. By efficiently using all views, the proposed light field
prefiltering approach achieves contrast 3 to 5× higher than multilayer
prefiltering. We note that the contrast achieved with light field
prefiltering is not quite as high as that of the direct raytracing
algorithm, which always delivers full contrast. On close inspection,
however, the raytracing solution always produces blurred images
due to insufficient spatial resolution.
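The Michelson contrast used throughout this evaluation is straightforward to compute:

```python
import numpy as np

def michelson_contrast(img):
    """Michelson contrast (I_max - I_min) / (I_max + I_min)."""
    imax, imin = float(np.max(img)), float(np.min(img))
    return (imax - imin) / (imax + imin)
```

For example, an image spanning intensities 0.1 to 0.9 has a Michelson contrast of 0.8.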
Perceptual Metric To assess both contrast and sharpness, we re-
sort to HDR-VDP2 [Mantiuk et al. 2011], a perceptually-based im-
age metric. The quality mean opinion score (QMOS) gives an eval-
uation of overall perceived image quality, and in most examples
we score 2 to 3 times higher than other approaches. The images
[Figure 12 image grid: rows "no correction", "conventional", and "LF display"; columns (a) spherical, (b) coma, (c) trefoil.]
Figure 12: Correcting for higher-order aberrations. We simulate
images observers with different types of higher-order aberrations
perceive (top row) and show corresponding point spread functions
(top row, insets), which exhibit a range of different shapes. Most
of them are difficult to compensate with a conventional 2D display
(bottom row, lower left parts), although the blur kernel associated
with trefoil (lower right) is frequency preserving and therefore in-
vertible. The proposed aberration-correcting display is successful
in compensating all of these aberrations (bottom row, upper right
parts). (Source image courtesy of flickr user Jameziecakes)
in the third row are a particularly difficult example for prefiltering-
based algorithms, because performance depends on the frequency
content of the image which, in this case, does not allow prefiltering
to achieve a higher quality. The abundance of high frequencies in this
example tends to reduce image contrast, so even our light field
prefiltering scores slightly lower. Visually, our result still looks sharp. In
the last row of Figure 11, we show a probabilistic map on whether
a human can detect per pixel differences for the fourth example.
Clearly, our result has a much lower detection rate.
Note that the reduced image sharpness of conventional displays
(Fig. 11, column 2) is due to defocus blur in the eye, whereas that of
Tailored Displays (Fig. 11, column 4) is due to the low spatial reso-
lution of the light field display. All displays in this simulation have
the same pixel count, but the microlens array used in Tailored Dis-
plays trades spatial display resolution for angular resolution. Our
solution also has to trade some spatial resolution, but the
prefiltering method allows us to optimize this tradeoff.
6.2 Correcting Higher-Order Aberrations
Although aberrations of human eyes are usually dominated by my-
opia and hyperopia, astigmatism and higher-order aberrations may
also degrade observed image quality. Visual distortions of a per-
ceived wavefront are usually described by a series of basis func-
tions known as Zernike polynomials. These are closely related to
spherical harmonics, which are commonly used in computer graph-
ics applications. Lower-order Zernike polynomials include defocus
and astigmatism whereas higher-order terms include coma, trefoil,
spherical aberrations, and many others. The effects of any such
terms can easily be incorporated into the image inversion described
in Section 3 by modifying the projection matrix P.
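A minimal sketch of how an aberration term enters the image formation, under simplifying assumptions (scalar diffraction, unit wavelength, unit pupil radius, defocus term Z(2,0) only): the aberrated point spread function is the squared magnitude of the Fourier transform of the complex pupil function, and that PSF can then be folded into the projection matrix P.

```python
import numpy as np

def psf_from_defocus(c_defocus, n=64):
    """PSF of a circular pupil with a Zernike defocus term.

    Simplified scalar-diffraction sketch: the wavefront
    W = c * (2 r^2 - 1) is the Zernike Z(2,0) defocus polynomial;
    the PSF is |FFT(pupil * exp(i 2 pi W))|^2, normalized to unit energy.
    """
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r2 = x ** 2 + y ** 2
    pupil = (r2 <= 1.0).astype(float)               # circular aperture
    phase = 2j * np.pi * c_defocus * (2.0 * r2 - 1.0)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil * np.exp(phase)))) ** 2
    return psf / psf.sum()                          # normalize energy
```

Replacing the defocus polynomial with coma or trefoil terms yields the other kernels in Figure 12; any such PSF can be incorporated into P as described above.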
Figure 12 evaluates compensation of higher-order aberrations with
the proposed approach. The top row shows the images an observer
with these aberrations perceives without correction. Just as in the
case of defocus, prefiltering for a conventional display usually fails
to achieve high image quality (bottom row, lower left image parts).
We observe ringing artifacts that are typical for solving ill-posed de-
convolution problems. The proposed aberration-correcting display,
on the other hand, successfully compensates for all types of aberra-
tions (bottom row, upper right parts). What is particularly interest-
ing to observe in this experiment is that some types of higher-order
aberration can be reasonably well compensated with a conventional
display. As seen in the right column of Figure 12 (bottom row,
lower left part), the point spread function of trefoil, for example, is
frequency preserving and therefore easy to invert. For most other
types of aberrations, however, this is not the case. Extended ex-
periments including astigmatism and additional higher-order aber-
rations can be found in the supplemental document.
7 Discussion
In summary, we present a computational display approach to cor-
recting low- and high-order visual aberrations of a human observer.
Instead of wearing vision-correcting glasses, the display itself pre-
distorts the presented imagery so that it appears as a desired target
image on the retina of the observer. Our display architecture em-
ploys off-the-shelf hardware components, such as printed masks or
lenslet arrays, combined with computational light field prefiltering
techniques.
We envision a wide range of possible implementations on devices
such as phones, tablets, televisions, and head-worn displays. In this
paper, we demonstrate one particular implementation using a low-
cost hardware add-on to a conventional phone. In a commercial
setting, this could be implemented using switchable liquid crystal
barriers, similar to those used by Nintendo 3DS, which would allow
the display to dynamically adapt to different viewers or viewing
conditions.
The proposed techniques assume that the precise location of the
observer’s eye w.r.t. the screen is either fixed or tracked. Robust so-
lutions to eye tracking, however, are not a contribution of this paper.
Each of the envisioned display types provides different challenges
for tracking pupils. For generality, we focus our discussion on the
challenges of correcting vision. Inexpensive eye trackers are
commercially available today (e.g., http://theeyetribe.com)
and could be useful for larger-scale vision-correcting displays;
hand-held devices could use integrated cameras. Nevertheless, we
evaluate strategies to account for a range of viewer motion, which
could not only help decrease jittering of existing trackers but also
remove the need for tracking in some applications.
Benefits and Limitations The proposed techniques offer signif-
icantly increased resolution and contrast compared to previously-
proposed vision-correcting displays. Intuitively, light field prefilter-
ing minimizes demands on angular light field resolution, which di-
rectly results in higher spatial resolution. For device implementa-
tions with lenslet arrays, the reduced angular resolution, compared
to Pamplona et al. [2012], allows for shorter focal lengths of the em-
ployed lenslets resulting in thinner form factors and easier fabrica-
tion. For implementations with parallax barriers, pinhole spacings
are reduced allowing for increased image brightness.
We treat lenslet arrays and parallax barriers as very similar optical
elements throughout the manuscript. In practice, the image forma-
tion is slightly different and the implementation of Equation 4 is
adjusted for each case. As outlined in Section 1, the proposed sys-
tem requires increased computational resources and modifications
to conventional display hardware. Nevertheless, we demonstrate
that an inexpensive hardware attachment for existing phones is suf-
ficient to build the required device. Whereas the parallax barriers
in our prototype are relatively light inefficient, lenslet arrays could
overcome this limitation. Our current Matlab implementation does
not support interactive frame rates. Real-time GPU implementa-
tions of similar problems [Wetzstein et al. 2012], however, are a
strong indicator that interactive framerates could also be achieved
for the proposed methods.
While the proposed approach provides increased resolution and
contrast as compared to previous approaches, achieving the full tar-
get image resolution and contrast is not currently possible. We eval-
uate all system parameters and demonstrate prototype results under
conditions that realistically simulate a human pupil; however, we
do not perform a user study. Slight artifacts are visible on the
prototype; these are mainly due to limitations in how precisely we can
calibrate the distance between the pinhole mask and screen pixels,
which are covered by protective glass with an unknown thickness.
As artifact-free light field displays resembling the prototype setup
are widely available commercially, we believe that the observed ar-
tifacts could be removed with more engineering efforts. The pa-
rameter b in Section 4 is manually chosen, but could be incorpo-
rated into the optimization, making the problem more complex. We
leave this formulation for future research.
Future Work We show successful vision-correction for a variety
of static images and precomputed animations. In the future, we
would like to explore real-time implementations of the proposed
techniques that support interactive content. Emerging compressive
light field displays (e.g., [Wetzstein et al. 2012; Maimone et al.
2013]) are promising architectures for high-resolution display—
vision-correcting devices could directly benefit from advances in
that field. In the long run, we believe that flexible display architec-
tures will allow for multiple different modes, such as glasses-free
3D image display, vision-corrected 2D image display, and combi-
nations of vision-corrected and 3D image display. We would like to
explore such techniques.
8 Conclusion
Correcting for visual aberrations is critical for millions of people.
Today, most of us spend a significant amount of time looking at
computer screens on a daily basis. The computational display de-
signs proposed in this paper could become a transformative tech-
nology that has a profound impact on how we interact with digi-
tal devices. Suitable for integration in mobile devices, computer
monitors, and televisions, our vision-correcting displays could be-
come an integral part of a diverse range of devices. Tailoring vi-
sual content to a particular observer may very well turn out to be
the most widely used application of light field displays. Combined
with glasses-free 3D display modes, the proposed techniques fa-
cilitate a variety of novel applications and user interfaces that may
revolutionize user experiences.
Acknowledgements
This research was supported in part by the National Science Foun-
dation at the University of California, Berkeley under grant num-
ber IIS-1219241, “Individualized Inverse-Blurring and Aberration
Compensated Displays for Personalized Vision Correction with Ap-
plications for Mobile Devices”, and at MIT under grant number
IIS-1116718, “AdaCID: Adaptive Coded Imaging and Displays”.
Gordon Wetzstein was supported by an NSERC Postdoctoral Fel-
lowship. The authors wish to thank Dr. Douglas Lanman and Prof.
Austin Roorda for early feedback and discussions.
References
AKELEY, K., WATT, S. J., GIRSHICK, A. R., AND BANKS, M. S.
2004. A stereo display prototype with multiple focal distances.
ACM Trans. Graph. (SIGGRAPH) 23, 3, 804–813.
ALONSO JR., M., AND BARRETO, A. B. 2003. Pre-compensation
for high-order aberrations of the human eye using on-screen im-
age deconvolution. In IEEE Engineering in Medicine and Biol-
ogy Society, vol. 1, 556–559.
ARCHAND, P., PITE, E., GUILLEMET, H., AND TROCME, L.,
2011. Systems and methods for rendering a display to com-
pensate for a viewer’s visual impairment. International Patent
Application PCT/US2011/039993.
BYRD, R. H., LU, P., NOCEDAL, J., AND ZHU, C. 1995. A
limited memory algorithm for bound constrained optimization.
SIAM J. Sci. Comput. 16, 5 (Sept.), 1190–1208.
CHAI, J.-X., TONG, X., CHAN, S.-C., AND SHUM, H.-Y. 2000.
Plenoptic sampling. In ACM SIGGRAPH, 307–318.
COSSAIRT, O. S., NAPOLI, J., HILL, S. L., DORVAL, R. K., AND
FAVALORA, G. E. 2007. Occlusion-capable multiview volumet-
ric three-dimensional display. Applied Optics 46, 8, 1244–1250.
DURAND, F., HOLZSCHUCH, N., SOLER, C., CHAN, E., AND
SILLION, F. X. 2005. A frequency analysis of light transport.
In Proc. SIGGRAPH, 1115–1126.
GORTLER, S. J., GRZESZCZUK, R., SZELISKI, R., AND COHEN,
M. F. 1996. The lumigraph. In Proc. SIGGRAPH, SIGGRAPH
’96, 43–54.
HECHT, E. 2001. Optics (Fourth Edition). Addison-Wesley.
HIRSCH, M., WETZSTEIN, G., AND RASKAR, R. 2014. A com-
pressive light field projection system. ACM Trans. Graph. (SIG-
GRAPH) 33.
HUANG, F.-C., AND BARSKY, B. 2011. A framework for aber-
ration compensated displays. Tech. Rep. UCB/EECS-2011-162,
University of California, Berkeley, December.
HUANG, F.-C., LANMAN, D., BARSKY, B. A., AND RASKAR,
R. 2012. Correcting for optical aberrations using multilayer
displays. ACM Trans. Graph. (SIGGRAPH Asia) 31, 6, 185:1–
185:12.
IVES, F. E., 1903. Parallax stereogram and process of making
same. U.S. Patent 725,567.
JONES, A., MCDOWALL, I., YAMADA, H., BOLAS, M., AND
DEBEVEC, P. 2007. Rendering for an interactive 360° light
field display. ACM Trans. Graph. (SIGGRAPH) 26, 40:1–40:10.
LANMAN, D., AND LUEBKE, D. 2013. Near-eye light field dis-
plays. ACM Trans. Graph. (SIGGRAPH Asia) 32, 6, 220:1–
220:10.
LANMAN, D., HIRSCH, M., KIM, Y., AND RASKAR, R. 2010.
Content-adaptive parallax barriers: optimizing dual-layer 3D
displays using low-rank light field factorization. ACM Trans.
Graph. 29, 163:1–163:10.
LEVIN, A., HASINOFF, S. W., GREEN, P., DURAND, F., AND
FREEMAN, W. T. 2009. 4D frequency analysis of computa-
tional cameras for depth of field extension. ACM Trans. Graph.
(SIGGRAPH) 28, 3.
LEVOY, M., AND HANRAHAN, P. 1996. Light field rendering. In
Proc. SIGGRAPH, 31–42.
LIPPMANN, G. 1908. Épreuves réversibles donnant la sensation
du relief. Journal of Physics 7, 4, 821–825.
LOVE, G., HOFFMAN, D., HANDS, P., GAO, J., KIRBY, A., AND
BANKS, M. 2009. High-speed switchable lens enables the de-
velopment of a volumetric stereoscopic display. Optics Express
17, 15716–15725.
MAIMONE, A., WETZSTEIN, G., HIRSCH, M., LANMAN, D.,
RASKAR, R., AND FUCHS, H. 2013. Focus 3d: Compres-
sive accommodation display. ACM Trans. Graph. 32, 5, 153:1–
153:13.
MANTIUK, R., KIM, K. J., REMPEL, A. G., AND HEIDRICH, W.
2011. Hdr-vdp-2: a calibrated visual metric for visibility and
quality predictions in all luminance conditions. In Proc. ACM
SIGGRAPH, 40:1–40:14.
MASIA, B., WETZSTEIN, G., DIDYK, P., AND GUTIERREZ, D.
2013. A survey on computational displays: Pushing the bound-
aries of optics, computation, and perception. Computers &
Graphics 37, 8, 1012–1038.
NG, R., AND HANRAHAN, P. 2006. Digital correction of lens
aberrations in light field photography. In Proc. SPIE Interna-
tional Optical Design.
NG, R. 2005. Fourier slice photography. In ACM SIGGRAPH 2005
Papers, ACM, New York, NY, USA, SIGGRAPH ’05, 735–744.
PAMPLONA, V. F., MOHAN, A., OLIVEIRA, M. M., AND
RASKAR, R. 2010. Netra: interactive display for estimating
refractive errors and focal range. ACM Trans. Graph. (SIG-
GRAPH) 29, 77:1–77:8.
PAMPLONA, V. F., PASSOS, E. B., ZIZKA, J., OLIVEIRA, M. M.,
LAWSON, E., CLUA, E., AND RASKAR, R. 2011. Catra: inter-
active measuring and modeling of cataracts. ACM Trans. Graph.
(SIGGRAPH) 30, 4, 47:1–47:8.
PAMPLONA, V., OLIVEIRA, M., ALIAGA, D., AND RASKAR, R.
2012. Tailored displays to compensate for visual aberrations.
ACM Trans. Graph. (SIGGRAPH) 31.
RAMAMOORTHI, R., MAHAJAN, D., AND BELHUMEUR, P. 2007.
A first-order analysis of lighting, shading, and shadows. ACM
Trans. Graph. 26, 1.
TAKAKI, Y. 2006. High-Density Directional Display for Generat-
ing Natural Three-Dimensional Images. Proc. IEEE 94, 3.
VITALE, S., SPERDUTO, R. D., AND FERRIS, III, F. L. 2009. In-
creased prevalence of myopia in the United States between 1971-
1972 and 1999-2004. Arch. Ophthalmology 127, 12, 1632–1639.
WETZSTEIN, G., LANMAN, D., HEIDRICH, W., AND RASKAR,
R. 2011. Layered 3D: tomographic image synthesis for
attenuation-based light field and high dynamic range displays.
ACM Trans. Graph. (SIGGRAPH) 30, 4.
WETZSTEIN, G., LANMAN, D., HIRSCH, M., AND RASKAR, R.
2012. Tensor displays: Compressive light field synthesis using
multilayer displays with directional backlighting. ACM Trans.
Graph. (SIGGRAPH) 31.
WONG, T. Y., FOSTER, P. J., HEE, J., NG, T. P., TIELSCH, J. M.,
CHEW, S. J., JOHNSON, G. J., AND SEAH, S. K. 2000. Preva-
lence and risk factors for refractive errors in adult chinese in sin-
gapore. Invest Ophthalmol Vis Sci 41, 9, 2486–94.
YELLOTT, J. I., AND YELLOTT, J. W. 2007. Correcting spurious
resolution in defocused images. Proc. SPIE 6492.
 
Deep learning and computer vision
Deep learning and computer visionDeep learning and computer vision
Deep learning and computer vision
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
satellite image enhancement
satellite image enhancementsatellite image enhancement
satellite image enhancement
 
A Perspective on Distributed and Collaborative Augmented Reality
A Perspective on Distributed and Collaborative Augmented RealityA Perspective on Distributed and Collaborative Augmented Reality
A Perspective on Distributed and Collaborative Augmented Reality
 
Computer vision ppt
Computer vision pptComputer vision ppt
Computer vision ppt
 
IRJET- Saliency based Image Co-Segmentation
IRJET- Saliency based Image Co-SegmentationIRJET- Saliency based Image Co-Segmentation
IRJET- Saliency based Image Co-Segmentation
 
TARGET DETECTION AND CLASSIFICATION PERFORMANCE ENHANCEMENT USING SUPERRESOLU...
TARGET DETECTION AND CLASSIFICATION PERFORMANCE ENHANCEMENT USING SUPERRESOLU...TARGET DETECTION AND CLASSIFICATION PERFORMANCE ENHANCEMENT USING SUPERRESOLU...
TARGET DETECTION AND CLASSIFICATION PERFORMANCE ENHANCEMENT USING SUPERRESOLU...
 
P180203105108
P180203105108P180203105108
P180203105108
 
Shallow introduction for Deep Learning Retinal Image Analysis
Shallow introduction for Deep Learning Retinal Image AnalysisShallow introduction for Deep Learning Retinal Image Analysis
Shallow introduction for Deep Learning Retinal Image Analysis
 
Interactive Screen
Interactive ScreenInteractive Screen
Interactive Screen
 
Mie presentation
Mie presentationMie presentation
Mie presentation
 
Reimagining the World with Intelligent Holograms
Reimagining the World with Intelligent HologramsReimagining the World with Intelligent Holograms
Reimagining the World with Intelligent Holograms
 
A new approach on noise estimation of images
A new approach on noise estimation of imagesA new approach on noise estimation of images
A new approach on noise estimation of images
 
E1083237
E1083237E1083237
E1083237
 

Similar to Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays

vision correcting display
vision correcting displayvision correcting display
vision correcting displayDisha Tiwari
 
SCREENLESS DISPLAY.pptx
SCREENLESS DISPLAY.pptxSCREENLESS DISPLAY.pptx
SCREENLESS DISPLAY.pptxAlenJames14
 
screen less display documentation
screen less display documentationscreen less display documentation
screen less display documentationmani akuthota
 
Facebook: High Resolution Étendue Expansion for Holographic Displays
Facebook: High Resolution Étendue Expansion for Holographic DisplaysFacebook: High Resolution Étendue Expansion for Holographic Displays
Facebook: High Resolution Étendue Expansion for Holographic DisplaysAlejandro Franceschi
 
Screenless display report
Screenless display reportScreenless display report
Screenless display reportVikas Kumar
 
Foveated Rendering: An Introduction to Present and Future Research
Foveated Rendering: An Introduction to Present and Future ResearchFoveated Rendering: An Introduction to Present and Future Research
Foveated Rendering: An Introduction to Present and Future ResearchBIPUL MOHANTO [LION]
 
Paulin hansen.2011.gaze interaction from bed
Paulin hansen.2011.gaze interaction from bedPaulin hansen.2011.gaze interaction from bed
Paulin hansen.2011.gaze interaction from bedmrgazer
 
Keynote at VR in Science and Industry
Keynote at VR in Science and Industry Keynote at VR in Science and Industry
Keynote at VR in Science and Industry Christian Sandor
 
Screenless display (1).pptx
Screenless display (1).pptxScreenless display (1).pptx
Screenless display (1).pptx62SYCUnagarKetan
 
3D Television no more a fantasy
3D Television no more a fantasy3D Television no more a fantasy
3D Television no more a fantasyrathorenitin87
 
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...Editor IJCATR
 
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and OculomicsNext Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and OculomicsPetteriTeikariPhD
 
Screenless pd presentation
Screenless pd presentationScreenless pd presentation
Screenless pd presentationShalini1293
 
Screenless display
Screenless displayScreenless display
Screenless displaychnaveed
 
IRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
IRJET - Human Eye Pupil Detection Technique using Center of Gravity MethodIRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
IRJET - Human Eye Pupil Detection Technique using Center of Gravity MethodIRJET Journal
 

Similar to Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays (20)

vision correcting display
vision correcting displayvision correcting display
vision correcting display
 
SCREENLESS DISPLAY.pptx
SCREENLESS DISPLAY.pptxSCREENLESS DISPLAY.pptx
SCREENLESS DISPLAY.pptx
 
screen less display documentation
screen less display documentationscreen less display documentation
screen less display documentation
 
Facebook: High Resolution Étendue Expansion for Holographic Displays
Facebook: High Resolution Étendue Expansion for Holographic DisplaysFacebook: High Resolution Étendue Expansion for Holographic Displays
Facebook: High Resolution Étendue Expansion for Holographic Displays
 
screen less display
screen less displayscreen less display
screen less display
 
SEMINAR.pptx
SEMINAR.pptxSEMINAR.pptx
SEMINAR.pptx
 
Screenless display report
Screenless display reportScreenless display report
Screenless display report
 
Foveated Rendering: An Introduction to Present and Future Research
Foveated Rendering: An Introduction to Present and Future ResearchFoveated Rendering: An Introduction to Present and Future Research
Foveated Rendering: An Introduction to Present and Future Research
 
Paulin hansen.2011.gaze interaction from bed
Paulin hansen.2011.gaze interaction from bedPaulin hansen.2011.gaze interaction from bed
Paulin hansen.2011.gaze interaction from bed
 
Keynote at VR in Science and Industry
Keynote at VR in Science and Industry Keynote at VR in Science and Industry
Keynote at VR in Science and Industry
 
Screenless display (1).pptx
Screenless display (1).pptxScreenless display (1).pptx
Screenless display (1).pptx
 
3D Television no more a fantasy
3D Television no more a fantasy3D Television no more a fantasy
3D Television no more a fantasy
 
3d
3d3d
3d
 
3D
3D3D
3D
 
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...
Assistive System Using Eye Gaze Estimation for Amyotrophic Lateral Sclerosis ...
 
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and OculomicsNext Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
 
Screenless pd presentation
Screenless pd presentationScreenless pd presentation
Screenless pd presentation
 
Screenless display
Screenless displayScreenless display
Screenless display
 
IRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
IRJET - Human Eye Pupil Detection Technique using Center of Gravity MethodIRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
IRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
 
Microsoft Surface Technology
Microsoft Surface TechnologyMicrosoft Surface Technology
Microsoft Surface Technology
 

More from Dario Caliendo

Open Signal 2014 Android Fragmentation Report
Open Signal 2014 Android Fragmentation ReportOpen Signal 2014 Android Fragmentation Report
Open Signal 2014 Android Fragmentation ReportDario Caliendo
 
Siri su Mac: il brevetto di Apple
Siri su Mac: il brevetto di AppleSiri su Mac: il brevetto di Apple
Siri su Mac: il brevetto di AppleDario Caliendo
 
iOS backdoors attack points and surveillance mechanisms
iOS backdoors attack points and surveillance mechanismsiOS backdoors attack points and surveillance mechanisms
iOS backdoors attack points and surveillance mechanismsDario Caliendo
 
Samsung Sustainability reports 2014
Samsung Sustainability reports 2014Samsung Sustainability reports 2014
Samsung Sustainability reports 2014Dario Caliendo
 
Aumento Equo Compenso: ecco la tabella ufficiale con tutti in rincari di smar...
Aumento Equo Compenso: ecco la tabella ufficiale con tutti in rincari di smar...Aumento Equo Compenso: ecco la tabella ufficiale con tutti in rincari di smar...
Aumento Equo Compenso: ecco la tabella ufficiale con tutti in rincari di smar...Dario Caliendo
 
Audiweb @ Iab Seminar Mobile & Advertising
Audiweb @ Iab Seminar Mobile & AdvertisingAudiweb @ Iab Seminar Mobile & Advertising
Audiweb @ Iab Seminar Mobile & AdvertisingDario Caliendo
 
Auricolari, svelato il mistero dei nodi nei cavi
Auricolari, svelato il mistero dei nodi nei caviAuricolari, svelato il mistero dei nodi nei cavi
Auricolari, svelato il mistero dei nodi nei caviDario Caliendo
 
Le 250 nuove Emoticon che saranno introdotte a Luglio 2014
Le 250 nuove Emoticon che saranno introdotte a Luglio 2014Le 250 nuove Emoticon che saranno introdotte a Luglio 2014
Le 250 nuove Emoticon che saranno introdotte a Luglio 2014Dario Caliendo
 
Calcoli tariffe 2013-2014
Calcoli tariffe 2013-2014Calcoli tariffe 2013-2014
Calcoli tariffe 2013-2014Dario Caliendo
 
Brevetto Apple: Sistemi e metodi di controllo della fotocamera
Brevetto Apple: Sistemi e metodi di controllo della fotocameraBrevetto Apple: Sistemi e metodi di controllo della fotocamera
Brevetto Apple: Sistemi e metodi di controllo della fotocameraDario Caliendo
 
Samsung, in arrivo uno smartwatch indipendente che non richiederà uno smartphone
Samsung, in arrivo uno smartwatch indipendente che non richiederà uno smartphoneSamsung, in arrivo uno smartwatch indipendente che non richiederà uno smartphone
Samsung, in arrivo uno smartwatch indipendente che non richiederà uno smartphoneDario Caliendo
 
Cider, avviare le applicazoni iOS su Android
Cider, avviare le applicazoni iOS su AndroidCider, avviare le applicazoni iOS su Android
Cider, avviare le applicazoni iOS su AndroidDario Caliendo
 
Equo compenso: sondaggio copia privata Quorum
Equo compenso: sondaggio copia privata QuorumEquo compenso: sondaggio copia privata Quorum
Equo compenso: sondaggio copia privata QuorumDario Caliendo
 
Apple Q2 fy14 earnings financials
Apple Q2 fy14 earnings financialsApple Q2 fy14 earnings financials
Apple Q2 fy14 earnings financialsDario Caliendo
 
The future of digital trust
The future of digital trust The future of digital trust
The future of digital trust Dario Caliendo
 
Apple Patent: Sports monitoring system for headphones, earbuds and/or headsets
Apple Patent: Sports monitoring system for headphones, earbuds and/or headsetsApple Patent: Sports monitoring system for headphones, earbuds and/or headsets
Apple Patent: Sports monitoring system for headphones, earbuds and/or headsetsDario Caliendo
 
Anti social media - Racism on Twitter
Anti social media - Racism on TwitterAnti social media - Racism on Twitter
Anti social media - Racism on TwitterDario Caliendo
 
Brevetto Apple per la modifica dei messaggi dopo l'invio
Brevetto Apple per la modifica dei messaggi dopo l'invioBrevetto Apple per la modifica dei messaggi dopo l'invio
Brevetto Apple per la modifica dei messaggi dopo l'invioDario Caliendo
 

More from Dario Caliendo (20)

Open Signal 2014 Android Fragmentation Report
Open Signal 2014 Android Fragmentation ReportOpen Signal 2014 Android Fragmentation Report
Open Signal 2014 Android Fragmentation Report
 
Siri su Mac: il brevetto di Apple
Siri su Mac: il brevetto di AppleSiri su Mac: il brevetto di Apple
Siri su Mac: il brevetto di Apple
 
Ip 14-847 en
Ip 14-847 enIp 14-847 en
Ip 14-847 en
 
iOS backdoors attack points and surveillance mechanisms
iOS backdoors attack points and surveillance mechanismsiOS backdoors attack points and surveillance mechanisms
iOS backdoors attack points and surveillance mechanisms
 
Samsung Sustainability reports 2014
Samsung Sustainability reports 2014Samsung Sustainability reports 2014
Samsung Sustainability reports 2014
 
Aumento Equo Compenso: ecco la tabella ufficiale con tutti in rincari di smar...
Aumento Equo Compenso: ecco la tabella ufficiale con tutti in rincari di smar...Aumento Equo Compenso: ecco la tabella ufficiale con tutti in rincari di smar...
Aumento Equo Compenso: ecco la tabella ufficiale con tutti in rincari di smar...
 
Audiweb @ Iab Seminar Mobile & Advertising
Audiweb @ Iab Seminar Mobile & AdvertisingAudiweb @ Iab Seminar Mobile & Advertising
Audiweb @ Iab Seminar Mobile & Advertising
 
Auricolari, svelato il mistero dei nodi nei cavi
Auricolari, svelato il mistero dei nodi nei caviAuricolari, svelato il mistero dei nodi nei cavi
Auricolari, svelato il mistero dei nodi nei cavi
 
Le 250 nuove Emoticon che saranno introdotte a Luglio 2014
Le 250 nuove Emoticon che saranno introdotte a Luglio 2014Le 250 nuove Emoticon che saranno introdotte a Luglio 2014
Le 250 nuove Emoticon che saranno introdotte a Luglio 2014
 
Calcoli tariffe 2013-2014
Calcoli tariffe 2013-2014Calcoli tariffe 2013-2014
Calcoli tariffe 2013-2014
 
Internet Trends 2014
Internet Trends 2014Internet Trends 2014
Internet Trends 2014
 
Brevetto Apple: Sistemi e metodi di controllo della fotocamera
Brevetto Apple: Sistemi e metodi di controllo della fotocameraBrevetto Apple: Sistemi e metodi di controllo della fotocamera
Brevetto Apple: Sistemi e metodi di controllo della fotocamera
 
Samsung, in arrivo uno smartwatch indipendente che non richiederà uno smartphone
Samsung, in arrivo uno smartwatch indipendente che non richiederà uno smartphoneSamsung, in arrivo uno smartwatch indipendente che non richiederà uno smartphone
Samsung, in arrivo uno smartwatch indipendente che non richiederà uno smartphone
 
Cider, avviare le applicazoni iOS su Android
Cider, avviare le applicazoni iOS su AndroidCider, avviare le applicazoni iOS su Android
Cider, avviare le applicazoni iOS su Android
 
Equo compenso: sondaggio copia privata Quorum
Equo compenso: sondaggio copia privata QuorumEquo compenso: sondaggio copia privata Quorum
Equo compenso: sondaggio copia privata Quorum
 
Apple Q2 fy14 earnings financials
Apple Q2 fy14 earnings financialsApple Q2 fy14 earnings financials
Apple Q2 fy14 earnings financials
 
The future of digital trust
The future of digital trust The future of digital trust
The future of digital trust
 
Apple Patent: Sports monitoring system for headphones, earbuds and/or headsets
Apple Patent: Sports monitoring system for headphones, earbuds and/or headsetsApple Patent: Sports monitoring system for headphones, earbuds and/or headsets
Apple Patent: Sports monitoring system for headphones, earbuds and/or headsets
 
Anti social media - Racism on Twitter
Anti social media - Racism on TwitterAnti social media - Racism on Twitter
Anti social media - Racism on Twitter
 
Brevetto Apple per la modifica dei messaggi dopo l'invio
Brevetto Apple per la modifica dei messaggi dopo l'invioBrevetto Apple per la modifica dei messaggi dopo l'invio
Brevetto Apple per la modifica dei messaggi dopo l'invio
 

Recently uploaded

Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr LapshynFwdays
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfRankYa
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Mark Simos
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024The Digital Insurer
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 

Recently uploaded (20)

Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
 
DMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special EditionDMCC Future of Trade Web3 - Special Edition
DMCC Future of Trade Web3 - Special Edition
 
Search Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdfSearch Engine Optimization SEO PDF for 2024.pdf
Search Engine Optimization SEO PDF for 2024.pdf
 
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
Tampa BSides - Chef's Tour of Microsoft Security Adoption Framework (SAF)
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024My INSURER PTE LTD - Insurtech Innovation Award 2024
My INSURER PTE LTD - Insurtech Innovation Award 2024
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 

the desired vision-corrected imagery, even for higher-order aberrations that are difficult to correct with glasses. The proposed computational display architecture is evaluated in simulation and with a low-cost prototype device.

CR Categories: B.4.2 [Hardware]: Input/Output and Data Communications—Image display; H.1.2 [Information Systems]: User/Machine Systems—Human factors; I.3.3 [Computer Graphics]: Picture/Image Generation—Display algorithms;

Keywords: computational ophthalmology, displays, light fields

Links: DL PDF WEB VIDEO DATA CODE

1 Introduction

Today, an estimated 41.6% of the US population [Vitale et al. 2009] and more than half of the population in some Asian countries [Wong et al. 2000] suffer from myopia. Eyeglasses have been the primary tool to correct such aberrations since the 13th century. Recent decades have seen contact lenses and refractive surgery supplement the available options for correcting refractive errors. Unfortunately, all of these approaches are intrusive in that the observer either has to use eyewear or undergo surgery, which can be uncomfortable or even dangerous.

Within the last year, two vision-correcting computational display architectures have been introduced as non-intrusive alternatives. Pamplona et al. [2012] proposed to use light field displays to enable the display to correct the observer's visual aberrations. This correction relies on a 2D image being shown within the observer's focal range, outside the physical display enclosure. Light field displays offering such capabilities require extremely high angular sampling rates, which significantly reduce spatial image resolution. As a high-resolution alternative, Huang et al. [2012] proposed a multilayer device that relies on prefiltered image content.
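The resolution penalty of dense angular sampling can be illustrated with a back-of-the-envelope calculation; the panel sizes and view counts below are illustrative assumptions, not values from the paper:

```python
# Sketch of the spatial/angular resolution trade-off in a
# lenslet-based light field display: each lenslet covers
# views_per_axis x views_per_axis panel pixels, so the spatial
# resolution drops by that factor along each axis.

def effective_resolution(panel_w, panel_h, views_per_axis):
    """Spatial resolution left over after angular multiplexing."""
    return panel_w // views_per_axis, panel_h // views_per_axis

if __name__ == "__main__":
    # Conventional 2D use of a 1080p panel (a single view):
    print(effective_resolution(1920, 1080, 1))   # (1920, 1080)
    # Modest angular sampling:
    print(effective_resolution(1920, 1080, 3))   # (640, 360)
    # Dense angular sampling, as needed for direct ray-traced correction:
    print(effective_resolution(1920, 1080, 12))  # (160, 90)
```

This is why direct light field correction trades most of the panel's pixels for angular samples, while approaches with low angular sampling keep spatial resolution high.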
Unfortunately, the required prefiltering techniques for these particular optical configurations drastically reduce image contrast. In this paper, we explore combinations of viewer-adaptive prefiltering with off-the-shelf lenslets or parallax barriers and demonstrate that the resulting vision-correcting computational display system facilitates significantly higher contrast and resolution compared to previous solutions (see Fig. 1).

While light field displays have conventionally been used for glasses-free 3D image presentation, correcting for the visual aberrations of observers is a promising new direction with direct benefits for millions of people. We believe that our approach is the first to make such displays practical by providing both high resolution and contrast—the two design criteria that have been driving the display industry for the last decade. We envision future display systems as integrated systems comprising flexible optical configurations combined with sophisticated computing that allow for different modes, such as 2D, glasses-free 3D, or vision-correcting image display.

Figure 2: Illustration of vision-correcting displays. Observing a conventional 2D display outside the focal range of the eye results in a blurred image (top). A multilayer display with prefiltered image generation (second row) allows for improved image sharpness at the cost of reduced contrast. Image contrast can be preserved using a light field approach via lenslet arrays on the screen (third row); this approach severely reduces image resolution. Combining light field display and computational prefiltering, as proposed in this paper (bottom), allows for vision-correcting image display with significantly improved image resolution and contrast.

We explore computational displays with applications in correcting visual aberrations of human observers. In particular, we make the following contributions:

• We introduce a novel vision-correcting computational display system that leverages readily available hardware components in concert with light field prefiltering algorithms.

• We analyze vision-correcting displays in the frequency domain and show that light field displays provide fundamentally more degrees of freedom than other approaches.

• We demonstrate that light field prefiltering offers benefits over alternative vision-correcting displays: image resolution and contrast are significantly enhanced; implementations with parallax barriers are brighter, and lenslet-based devices have thinner form factors.
• We evaluate the proposed display system using a wide range of simulations and build a low-cost prototype device that demonstrates correction of myopia and hyperopia in practice.

1.1 Overview of Limitations

The proposed system requires modifications to conventional display hardware and increased computational resources. Although our displays provide significant benefits over previous work, small tradeoffs in both resolution and contrast have to be made compared to conventional 2D displays. We evaluate the prototype using photographs taken with aperture settings corresponding to those of the human eye and with simulations using computational models of human perception. However, we do not run a full-fledged user study. A commercial implementation of the proposed technology may require eye tracking, which is outside the scope of this paper. Our academic display prototype exhibits color artifacts caused by moiré between the parallax barrier and the display subpixels. These artifacts could be removed with diffusing films tailored to the subpixel structure of the screen. Finally, the employed parallax barriers reduce image brightness.

2 Related Work

Light Fields and Computational Ophthalmology Since their introduction to computer graphics, light fields [Levoy and Hanrahan 1996; Gortler et al. 1996] have become one of the most fundamental tools in computational photography. Frequency analyses [Durand et al. 2005], for instance, help better understand the theoretical foundations of ray-based light transport, whereas applications range from novel camera designs, e.g. [Levin et al. 2009], and aberration correction in light field cameras [Ng and Hanrahan 2006], to low-cost devices that allow for diagnosis of refractive errors [Pamplona et al. 2010] or cataracts [Pamplona et al. 2011] in the human eye.
These applications are examples of computational ophthalmology, where interactive techniques are combined with computational photography and display for medical applications.

Light Field Displays Glasses-free 3D or light field displays were invented at the beginning of the 20th century. The two dominating technologies are lenslet arrays [Lippmann 1908] and parallax barriers [Ives 1903]. Today, a much wider range of 3D display technologies is available, including volumetric displays [Cossairt et al. 2007; Jones et al. 2007], multifocal displays [Akeley et al. 2004; Love et al. 2009], and super-multi-view displays [Takaki 2006]. Volumetric displays create the illusion of a virtual 3D object floating inside the physical device enclosure; an observer can accommodate within this volume. Multifocal displays allow for the display of imagery on different focal planes but require either multiple devices in a large form factor [Akeley et al. 2004] or vari-focal glasses to be worn [Love et al. 2009]. Super-multi-view displays emit light fields with an extremely high angular resolution, which is achieved by employing many spatial light modulators. Most recently, near-eye light field displays [Lanman and Luebke 2013] and compressive light field displays [Lanman et al. 2010; Wetzstein et al. 2012; Maimone et al. 2013; Hirsch et al. 2014] have been introduced. With the exception of [Maimone et al. 2013], none of these technologies has been demonstrated to support accommodation. A recent survey of computational displays can be found in Masia et al. [2013].
Building light field displays that support all depth cues, including binocular disparity, motion parallax, and accommodation, in a thin form factor is one of the most challenging problems in display design today. Support for accommodation allows an observer to focus on virtual images that float at a distance from the physical device. This capability would allow for the correction of low-order visual aberrations, such as myopia and hyperopia. Maimone et al. [2013] demonstrate the first single-device solution for this problem that does not require glasses; their device form factor is—unlike ours—not suitable for mobile displays. We propose a different strategy: rather than aiming to support all depth cues with a single device, we employ simple parallax barriers or lenslet arrays with a very narrow field of view to support only accommodation, but not binocular disparity or motion parallax. This means glasses-free 3D display may not be possible with the proposed devices. However, our approach allows us to use inexpensive add-ons to existing phones or tablets, facilitating eyeglasses-free 2D image display for observers with visual aberrations, including myopia, hyperopia, astigmatism, and higher-order aberrations.

Vision-correcting Displays Devices tailored to correct the visual aberrations of human observers have recently been introduced. Early approaches attempt to pre-sharpen a 2D image presented on a conventional screen with the inverse point spread function (PSF) of the observer's eye [Alonso Jr. and Barreto 2003; Yellott and Yellott 2007; Archand et al. 2011]. Although these methods slightly improve image sharpness, the problem itself is ill-posed. Fundamentally, the PSF of an eye with refractive errors is usually a low-pass filter—high image frequencies are irreversibly canceled out in the optical path from display to retina. To overcome this limitation, Pamplona et al.
[2012] proposed the use of 4D light field displays with lenslet arrays or parallax barriers to correct visual aberrations. For this application, the emitted light fields must provide a sufficiently high angular resolution so that multiple light rays emitted by a single lenslet enter the same pupil (see Fig. 2). This approach can be interpreted as lifting the problem into a higher-dimensional (light field) space, where the inverse problem becomes well-posed. Unfortunately, conventional light field displays as used by Pamplona et al. [2012] are subject to a spatio-angular resolution tradeoff: an increased angular resolution decreases the spatial resolution. Hence, the viewer sees a sharp image but at a significantly lower resolution than that of the screen. To mitigate this effect, Huang et al. [2011; 2012] recently proposed to use multilayer display designs together with prefiltering. While this is a promising, high-resolution approach, combining prefiltering with these particular optical setups significantly reduces the resulting image contrast.

Pamplona et al. [2012] explore the resolution limits of available hardware to build vision-correcting displays; Huang et al. [2011; 2012] show that computation can be used to overcome the resolution limits, but at the cost of decreased contrast. The approach proposed in this paper combines both methods by employing 4D light field prefiltering with hardware designs that have previously only been used in a "direct" way, i.e., each screen pixel corresponds to one emitted light ray. We demonstrate that this design allows for significantly higher resolutions as compared to the "direct" method because angular resolution demands are decreased. At the same time, image contrast is significantly increased compared to previous prefiltering approaches because of the hardware we use.
3 Light Field Transport and Inversion

In this section, we derive the optical image formation of a light field on the observer's retina as well as image inversion methods. For this purpose, we employ a two-plane parameterization [Levoy and Hanrahan 1996; Chai et al. 2000] of the light fields emitted by the device and inside the eye. The forward and inverse models in this section are derived for two-dimensional "flatland" light fields with straightforward extensions to the full four-dimensional formulations.

3.1 Retinal Light Field Projection

We define the lateral position on the retina to be $x$ and that on the pupil to be $u$ (see Fig. 3). The light field $l(x, u)$ describes the radiance distribution inside the eye. Photoreceptors in the retina average over radiance incident from all angles; therefore, the perceived intensity $i(x)$ is modeled as the projection of $l$ along its angular dimension:

$$i(x) = \int_{\Omega_u} l(x, u)\, du, \qquad (1)$$

where $\Omega_u$ is the integration domain, which is limited by the finite pupil size. Vignetting and other angle-dependent effects are absorbed in the light field. Assuming that the display is capable of emitting a light field that contains spatial variation over the screen plane $x^d$ and angular variation over the pupil plane $u^d$ allows us to model the radiance distribution entering the eye as a light field $l^d(x^d, u^d)$. Note that the coordinates on the pupil plane for the light fields inside the eye and on the display are equivalent ($u \equiv u^d$). Refractions and aberrations in the eye are modeled as a mapping function $\phi : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ from the spatio-angular coordinates of $l$ to a location on the screen, such that $x^d = \phi(x, u)$. Equation 1 therefore becomes

$$i(x) = \int_{-\infty}^{\infty} l^d(\phi(x, u), u)\, A(u)\, du. \qquad (2)$$

Here, the effect of the finite pupil diameter $r$ is a multiplication of the light field with the pupil function $A(u) = \mathrm{rect}(u/r)$. In the full 4D case, the rect function is replaced by a circular function modeling the shape of the pupil.
Following standard ray transfer matrix notation [Hecht 2001], the mapping between rays incident on the retina and those emitted by the screen can be modeled as the combined effect of transport between retina and pupil by distance $D_e$, refraction of the lens with focal length $f$, and transport between pupil and screen by distance $D_o$. In matrix notation, this transformation is expressed as

$$\begin{pmatrix} \phi(x, u) \\ u^d \end{pmatrix} = \begin{pmatrix} -\frac{D_o}{D_e} & D_o \Delta \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ u \end{pmatrix} = T \begin{pmatrix} x \\ u \end{pmatrix}, \qquad (3)$$

where $T$ is the concatenation of the individual propagation operators and $\Delta = \frac{1}{D_e} - \frac{1}{f} + \frac{1}{D_o}$. We derive Equation 3 in Supplemental Section A. As a first-order approximation, Equation 3 only models the defocus of the eye by considering its focal length $f$, which may be constrained due to the observer's limited accommodation range. However, astigmatism and higher-order aberrations can be included in this formulation (see Sec. 6.2). Discretizing Equations 2 and 3 results in a linear forward model:

$$i = P l^d, \qquad (4)$$

where the matrix $P \in \mathbb{R}^{N \times N}$ encodes the projection of the discrete, vectorized 4D light field $l^d \in \mathbb{R}^N$ emitted by the display onto the retina $i \in \mathbb{R}^N$. For the remainder of the paper, we assume that the number of emitted light rays $N$ is the same as the number of discretized locations on the retina, which makes $P$ square.
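The first-order model of Equation 3 is easy to sanity-check numerically. The sketch below, with assumed illustrative eye geometry (distances in mm, not values from the paper), builds $T$ and verifies that when the eye is focused on the screen ($\Delta = 0$), all pupil positions $u$ map a retinal point $x$ to the same screen location $x^d = -(D_o/D_e)\,x$:

```python
import numpy as np

def ray_transfer_matrix(D_e, D_o, f):
    """T of Eq. (3): maps retinal coordinates (x, u) to (x_d, u_d)."""
    delta = 1.0 / D_e - 1.0 / f + 1.0 / D_o
    return np.array([[-D_o / D_e, D_o * delta],
                     [0.0, 1.0]])

# Illustrative numbers: eye depth 17 mm, screen at 350 mm.
D_e, D_o = 17.0, 350.0
f_focused = 1.0 / (1.0 / D_e + 1.0 / D_o)   # focal length that makes delta = 0
T = ray_transfer_matrix(D_e, D_o, f_focused)

x = 0.1                                      # retinal position (mm)
xd_center = (T @ np.array([x, 0.0]))[0]      # chief ray through pupil center
xd_margin = (T @ np.array([x, 3.0]))[0]      # marginal ray, u = 3 mm
# In focus: both rays hit the same screen point, magnified by -D_o/D_e.
print(xd_center, xd_margin)
```

For a defocused eye ($\Delta \neq 0$), the two rays instead diverge on the screen by $D_o \Delta u$, which is precisely the blur kernel size that the conditioning analysis in Section 4 quantifies.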
Figure 3: Light field analysis for different displays. The light field emitted by a display is parameterized by its coordinates on the screen $x^d$, on the pupil $u$, and on the retina $x$ (a). This light field propagates through the pupil and is projected into a 2D image on the retina. For an in-focus display, the light field incident on the retina is a horizontal line in the frequency domain (b). For a displayed image outside the accommodation range of the observer, the corresponding light field is slanted and energy is lost at some spatial frequencies (c). Multilayer displays utilize an additional display layer to preserve all spatial frequencies (d). With light field displays, frequency loss is also avoided; the perceived image frequencies are a combination of all spatio-angular frequencies of the incident light field (e). The ray paths in (a) show two effects for a hyperopic eye observing a light field display. First, each photoreceptor on the retina averages over multiple neighboring pixels on the screen (green shaded regions). Second, each pixel on the screen (e.g., $x^d_0$) emits different intensities toward different regions on the pupil ($u_0$, $u_1$), allowing the same pixel to appear differently when observed from different locations ($x_0$, $x_1$) on the retina (red arrows).

3.2 Inverse Light Field Projection

The objective of an aberration-correcting display is to present a 4D light field to the observer, such that a desired 2D retinal projection is perceived. Assuming that viewing distance, pupil size, and other parameters are known, the emitted light field can be found by optimizing the following objective function:

$$\underset{l^d}{\text{minimize}} \;\; \left\| i - P l^d \right\|^2 \quad \text{subject to} \quad 0 \le l^d_i \le 1, \;\; \text{for } i = 1 \ldots N \qquad (5)$$

Here, $i$ is the target image (given in normalized power per unit area) and the constraints of the objective account for physically feasible pixel states of the screen. Equation 5 can be solved using standard non-negative linear solvers; we employ L-BFGS-B [Byrd et al. 1995]. As shown in the following frequency interpretation and in Section 4, Equation 5 is an ill-posed problem for conventional 2D displays. The problem becomes invertible through the use of 4D light field displays.

3.3 Frequency Domain Analysis

While Equation 5 allows optimal display pixel states to be determined, a natural question remains: which display type is best suited for aberration correction? We attempt to answer this question in two different ways: with a frequency analysis derived in this section and with an analysis of the conditioning of the projection matrix $P$ in Section 4.

Frequency analyses have become standard tools for generating an intuitive understanding of the performance bounds of computational cameras and displays (e.g., [Durand et al. 2005; Levin et al. 2009; Wetzstein et al. 2011]); we follow this approach. First, we note that the coordinate transformation $T$ between display and retina induces a corresponding transformation in the frequency domain via the Fourier linear transformation theorem:

$$\begin{pmatrix} \omega^d_x \\ \omega^d_u \end{pmatrix} = \begin{pmatrix} -\frac{D_e}{D_o} & 0 \\ D_e \Delta & 1 \end{pmatrix} \begin{pmatrix} \omega_x \\ \omega_u \end{pmatrix} = \hat{T} \begin{pmatrix} \omega_x \\ \omega_u \end{pmatrix}, \qquad (6)$$

where $\omega_x, \omega_u$ are the spatial and angular frequencies of the light field inside the eye, $\omega^d_x, \omega^d_u$ the corresponding frequencies on the display, and $\hat{T} = T^{-T}$ [Ramamoorthi et al. 2007]. One of the most interesting results of the frequency analysis is the effect of the pupil outlined in Equation 2. The multiplication with the pupil function in the spatial domain becomes a convolution in the frequency domain, whereas the projection along the angular dimension becomes a slicing [Ng 2005] along $\omega_u = 0$:

$$\hat{i}(\omega_x) = \big( \hat{l} * \hat{A} \big)(\omega_x, 0) = \int_{\Omega_{\omega_u}} \hat{l}(\omega_x, \omega_u)\, \hat{A}(\omega_u)\, d\omega_u = \int_{\Omega_{\omega_u}} \hat{l}^d\!\left( -\tfrac{D_e}{D_o} \omega_x,\; D_e \Delta \omega_x + \omega_u \right) \hat{A}(\omega_u)\, d\omega_u. \qquad (7)$$

Here, $\hat{\cdot}$ denotes the Fourier transform of a variable and $\hat{A}(\omega_u) = \mathrm{sinc}(r \omega_u)$. Note that the convolution with the sinc function accumulates higher angular frequencies along $\omega_u = 0$ before the slicing occurs, so those frequencies are generally preserved but are all mixed together (see Figs. 3 b-e).

Conventional 2D Displays Equation 7 is the most general formulation for the perceived spectrum of an emitted light field. The light field that can actually be emitted by certain types of displays, however, may be very restricted. In a conventional 2D display, for instance, each pixel emits light isotropically in all directions, which makes the emitted light field constant in angle. Its Fourier transform is therefore a Dirac in the angular frequencies (i.e., $\hat{l}^d(\omega^d_x, \omega^d_u) = 0 \;\forall\; \omega^d_u \neq 0$).
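The constrained inversion of Equation 5 can be prototyped with any bounded least-squares solver; the paper uses L-BFGS-B, but the sketch below reaches the same solution with SciPy's `lsq_linear`. The matrix `P` here is a toy, well-conditioned stand-in, not the actual 4D light field transport:

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n = 32
# Toy stand-in for the projection matrix P of Eq. (4).
P = np.eye(n) + 0.1 * rng.random((n, n))
l_true = rng.uniform(0.2, 0.8, n)        # a physically feasible light field
i_target = P @ l_true                    # the retinal image it produces

# Eq. (5): minimize ||i - P l||^2 subject to 0 <= l <= 1.
res = lsq_linear(P, i_target, bounds=(0.0, 1.0))
l_d = res.x
print(np.linalg.norm(i_target - P @ l_d))  # near zero: a feasible solution exists
```

When `P` is the ill-conditioned operator of a defocused eye and a 2D screen, the same solver returns a large residual or heavy ringing, which is exactly the ill-posedness the frequency analysis above predicts.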
Taking a closer look at Equation 7 with this restriction in mind allows us to disregard all non-zero angular frequencies of the displayed light field and focus on $\omega^d_u = D_e \Delta \omega_x + \omega_u = 0$. As illustrated in Figures 3 (b-c, bottom), the light field incident on the retina is therefore a line $\omega_u = -D_e \Delta \omega_x$, which we can parameterize by its slope $s = -D_e \Delta$. Equation 7 simplifies to

$$\hat{i}_{2D}(\omega_x) = \hat{l}^d\!\left( -\tfrac{D_e}{D_o} \omega_x,\, 0 \right) \mathrm{sinc}(r s \omega_x). \qquad (8)$$

Unfortunately, sinc functions contain many zero-valued positions, making the correction of visual aberrations with 2D displays an ill-posed problem.

Correction with Multilayer Prefiltering Huang et al. [2012] proposed to remedy this ill-posedness by adding an additional layer, such as a liquid crystal display, to the device. Although stacks of liquid crystal panels usually result in a multiplicative image formation (Wetzstein et al. [2011; 2012]), Huang et al. propose to multiplex the displayed patterns in time, which results in an additive image formation because of perceptual averaging via persistence of vision. As illustrated in Figure 3 (d), this changes the frequency domain representation to the sum of two lines with different slopes. Generalizing Equation 8 to multiple display layers results in the following frequency representation of the retinal projection:

$$\hat{i}_{ml}(\omega_x) = \sum_k \hat{l}^{(d,k)}\!\left( -\tfrac{D_e}{D_o^{(k)}} \omega_x,\, 0 \right) \mathrm{sinc}\!\left( r s^{(k)} \omega_x \right), \qquad (9)$$

where $s^{(k)}$ is the slope of display layer $k$ and $\hat{l}^{(d,k)}$ is the light field emitted by that layer. The offsets between display layers are chosen so that the envelope of the differently sheared sinc functions contains no zeros. While this is conceptually effective, physical constraints of the display, such as nonnegative pixel states and limited dynamic range, result in a severe loss of contrast in practice.

Correction with Light Field Displays As opposed to 2D displays or multilayer displays, light field displays have the capability to generate a continuous range of spatio-angular frequencies.
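The argument behind Equations 8 and 9 can be visualized directly: a single defocus slope yields a sinc with zeros (those image frequencies are irreversibly lost), while the envelope of two sheared sincs with well-chosen slopes stays bounded away from zero. The slopes below are hypothetical, chosen only for illustration:

```python
import numpy as np

r = 6.0                  # pupil diameter (mm)
s1, s2 = 0.02, 0.031     # hypothetical defocus slopes of two display layers
omega = np.linspace(0.1, 50.0, 2000)   # spatial frequencies (cycles/mm)

# np.sinc(x) = sin(pi x) / (pi x), with zeros at integer arguments.
single_layer = np.abs(np.sinc(r * s1 * omega))
envelope = np.maximum(np.abs(np.sinc(r * s1 * omega)),
                      np.abs(np.sinc(r * s2 * omega)))
print(single_layer.min())   # near zero: these frequencies cannot be recovered
print(envelope.min())       # bounded away from zero over this band
```

The closer the envelope's minimum is to zero, the more the prefilter must amplify that frequency, which is where the contrast penalty of multilayer prefiltering comes from.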
Basically, this allows multiple virtual 2D layers to be emitted simultaneously, each having a different slope $\tilde{s}$ (see Fig. 3 e). Following the intuition used in Equations 8 and 9, we can write Equation 7 as

$$\hat{i}_{lf}(\omega_x) = \int_{\Omega_{\tilde{s}}} \hat{l}(\omega_x, \tilde{s} \omega_x)\, \hat{A}(\tilde{s} \omega_x)\, d\tilde{s} = \int_{\Omega_{\tilde{s}}} \hat{l}^d\!\left( -\tfrac{D_e}{D_o} \omega_x,\; D_e \Delta \omega_x + \tilde{s} \omega_x \right) \mathrm{sinc}(r \tilde{s} \omega_x)\, d\tilde{s}. \qquad (10)$$

Although Equation 10 demonstrates that light field displays support a wide range of frequencies, many different solutions for actually computing them for a target image exist. Pamplona et al. [2012] chose a naive ray-traced solution. Light field displays, however, offer significantly more degrees of freedom, but these are only unlocked by solving the full inverse light field projection problem (Eq. 5), which we call "light field prefiltering". We demonstrate in the following sections that this approach provides significant improvements in image resolution and contrast.

4 Analysis

Whereas the previous section introduces the forward and inverse image formation and provides an interpretation in the frequency domain, this section analyzes the results and capabilities of the proposed method. First, we give an intuitive explanation for when the problem of correcting visual aberrations is actually invertible.

Figure 4: Light field prefiltering. The proposed prefiltering approach computes a light field (here with 3 × 3 views) that results in a desired 2D projection on the retina of an observer. The prefiltered light field for an example scene is shown in (a), its simulated projection on the retina in (c), and an image observed on a conventional screen in (b).
Spatio-angular frequencies of the light field are amplified, resulting in the desired sharpening when integrated on the retina. Two sample "flatland" light fields with different angular sampling rates are shown in display (d,f) and in eye (e,g) coordinates. Here, the yellow boxes illustrate why 4D light field prefiltering is more powerful than 2D image prefiltering: a single region on the retina receives contributions from multiple different light field views (e,g). Wherever that is the case, the inverse problem of light field prefiltering is well-posed, but in other regions the problem is the same as the ill-posed problem faced with conventional 2D displays (e). (Source image courtesy of Kar Han Tan)

We follow with a formal analysis of this intuition by evaluating the conditioning of the discrete forward model (Eq. 4). We also evaluate the contrast of the generated imagery and analyze extensions of the lateral and axial viewing ranges for an observer.

Intuition Figure 4 (a) shows an example of a prefiltered light field with 3 × 3 views for a sample scene. In this example, the different views contain overlapping parts of the target image (yellow box), allowing for increased degrees of freedom for aberration compensation. Precisely these degrees of freedom are what makes the problem of correcting visual aberrations well-posed. The 4D prefiltering does not act on a 2D image, as is the case for conventional displays, but lifts the problem into a higher-dimensional space in which it becomes invertible. Although the prefiltered light field (Fig. 4, a) appears to contain amplified high frequencies in each view of the
light field, the prefilter actually acts on all four dimensions simultaneously. When optically projected onto the retina of an observer, all light field views are averaged, resulting in a perceived image that has significantly improved sharpness (c) as compared to an image observed on a conventional 2D display (b).

Figure 5: Conditioning analysis. The light field projection matrix corresponding to a defocused eye is ill-conditioned. With more angular resolution available in the emitted light field, more degrees of freedom are added to the system, resulting in lower condition numbers (lower is better). The condition number (×10⁴) of the projection matrix is plotted for a varying defocus distance (kernel diameter in screen pixels) and angular resolution (number of light field views, i.e., rays entering the pupil). We observe that even as few as 1.5 angular light field samples entering the pupil of an observer decrease the condition number.

We illustrate this principle using an intuitive 2D light field in Figures 4 (d-g). The device emits a light field with three (d,e) and five (f,g) views, respectively. Individual views are shown in different colors. These are sheared in display space (d,f), because the eye is not actually focused on the display due to the constrained accommodation range of the observer. The finite pupil size of the eye limits the light field entering the eye, as illustrated by the semi-transparent white regions. Whereas we show the light fields in both display (d,f) and eye (e,g) coordinates, the latter are more intuitive for understanding when vision correction is possible. For locations on the retina that receive contributions from multiple different views of the light field (indicated by yellow boxes in e,g), the inverse problem is well-posed.
Regions on the retina that receive contributions from only a single light field view, however, are optically equivalent to the conventional 2D display case, which is ill-posed for vision correction.

Conditioning Analysis To formally verify the discussed intuition, we analyze the condition number of the light field projection matrix $P$ (see Eqs. 4, 5). Figure 5 shows the matrix conditioning for varying amounts of defocus and angular light field resolution (a lower condition number is better). Increasing the angular resolution of the light field passing through the observer's pupil significantly decreases the condition number of the projection matrix for all amounts of defocus. This results in an interesting observation: increasing the amount of defocus increases the condition number, but increasing the angular sampling rate does the opposite. Note that the amount of defocus is quantified by the size of a blur kernel on the screen (see Fig. 5).

Figure 6: Tradeoff between angular light field resolution and image contrast. Top: we reconstruct a test image with different combinations of angular resolution and image contrast and plot the achieved PSNR. Bottom: using prefiltering with a conventional 2D display (b), we obtain either a low-quality but high-contrast image or a high-quality but low-contrast image. For a light field display with 1.5 or more prefiltered views entering the pupil (c), a similar trend is observed but overall reconstruction quality is significantly increased.
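The conditioning argument can be reproduced with a toy flatland model (our own construction, not the paper's exact discretization): a single defocused view acts like an even-width box blur, whose circulant matrix has frequency-domain zeros, while a second view with a different effective blur contributes columns whose zeros do not coincide:

```python
import numpy as np

def circulant_blur(n, width):
    """n x n circulant matrix averaging `width` neighboring screen pixels
    (a crude 1D defocus kernel)."""
    row = np.zeros(n)
    row[:width] = 1.0 / width
    return np.stack([np.roll(row, k) for k in range(n)])

n = 16
single_view = circulant_blur(n, 4)   # one ray bundle per retinal pixel
# A second angular view reaches the retina through a different effective
# blur; stacking both gives the prefilter extra degrees of freedom.
two_views = np.hstack([circulant_blur(n, 4), circulant_blur(n, 3)])

print(np.linalg.cond(single_view))   # enormous: even box filters have DFT zeros
print(np.linalg.cond(two_views))     # small: the views' zeros differ
```

This mirrors Figure 5: once a retinal region is covered by more than one view, the smallest singular value of the system is lifted away from zero and the inversion becomes stable.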
(Snellen chart courtesy of Wikipedia user Jeff Dahl)

The condition number drops significantly once the angular sampling rate passes the 1.3 mark, where more than one light field view enters the pupil. This effectively allows angular light field variation to be exploited in the prefiltering. As more than two light field views pass through the pupil, the condition number keeps decreasing, but at a much slower rate. With an extreme of around 7 to 9 views, the system becomes the setup of Pamplona et al.: each ray hits exactly one retinal pixel, but the spatio-angular tradeoff reduces the image resolution. Our light field prefiltering method operates between these two extremes, which otherwise force a choice of either high resolution or high contrast, but never both simultaneously. Usually, fewer than two views are required to maintain a sufficiently low condition number. The experiments in Figure 5 are computed with a viewing distance of 350 mm, a pupil diameter of 6 mm, and a pixel pitch of 45 µm. The angular sampling rate refers to the number of light field views entering the pupil.

Image Contrast Optimization At the defocus level shown in Figure 6 (a, bottom), naively applying the nonnegative constraint
in Equation 5 results in additional artifacts, as shown in (b, top). Alternatively, we can shift and scale the target image before solving the system, effectively scaling the target image into the range space of the projection matrix. Although this is a user-defined process, the observed image quality can be enhanced. In particular, Equation 5 can be modified as

$$\underset{l^d}{\text{minimize}} \;\; \left\| \frac{i + b}{1 + b} - P l^d \right\|^2 \quad \text{subject to} \quad 0 \le l^d_i \le 1, \;\; \text{for } i = 1 \ldots N \qquad (11)$$

where $b$ is a user-specified bias term that reduces the image contrast to $1/(b+1)$. We plot the achieved image quality, measured in PSNR, for all contrast levels at various angular sampling rates in Figure 6 (top). With a conventional display, prefiltering results in ringing artifacts (b) because the inverse problem is ill-conditioned. Artificially reducing the image contrast mitigates the artifacts but makes the text illegible (b, bottom).

Figure 7: Compensating for a range of lateral viewpoints. Aberration-free image display is possible when the relative position of the eye with respect to the display is known. The green plot evaluates image degradation for viewpoints that deviate laterally from the sweetspot. Only slight ringing is visible and, due to periodic viewing zones of the employed parallax barrier display, image quality varies in a periodic manner (top, zoom-in). We can account for a range of perspectives in the compensation, ensuring high image quality for a wider viewing range (blue plot). The columns on the bottom right show on-axis and off-axis views with and without accounting for a range of lateral viewpoints in the optimization. (Source image courtesy of Wikipedia user Lexaxis7)
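The bias term of Equation 11 is just an affine rescaling of the target before solving; a small helper (hypothetical name) makes the $1/(b+1)$ contrast reduction concrete:

```python
import numpy as np

def bias_target(i, b):
    """Scale a normalized target image into the range space of the
    projection matrix, per Eq. (11); b is the user-chosen bias."""
    return (i + b) / (1.0 + b)

i = np.linspace(0.0, 1.0, 256)     # toy target spanning the full dynamic range
for b in (0.0, 1.0, 3.0):
    t = bias_target(i, b)
    contrast = (t.max() - t.min()) / t.max()   # Weber-style contrast
    print(b, contrast)                          # equals 1 / (b + 1)
```

With $b = 1$, for example, the biased target occupies $[0.5, 1]$ and its contrast halves; the tradeoff curves in Figure 6 sweep exactly this parameter.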
A light field display makes the inverse problem well-posed, allowing for high-quality prefiltering (c). The pixel pitch in the experiment shown in Figure 6 is 96 µm; other parameters are the same as in Figure 5. Please note that the contrast bias term $b$ may require manual tuning for each experiment.

Extending Lateral and Axial Viewing Range We envision most future display systems that incorporate vision-correcting technologies to use eye tracking. In such devices, the projection matrix (see Eq. 4) is dynamically updated for the perspective of the observer. For applications in emerging near-eye displays [Lanman and Luebke 2013], on the other hand, the proposed technology would not require eye tracking because the relative position between eye and display is fixed. Within the context of this paper, we assume that eye tracking is either available or the relative position between display and eye is fixed.

Figure 8: Accounting for a range of viewing distances. Top row: when considering a fixed viewing distance, defocus errors are compensated at that exact distance (top center) but image quality degrades when the observer moves forward or back (top left and right). The proposed method can account for a range of viewing distances (bottom row), which slightly degrades quality at the sweetspot but significantly improves all other distances. (Source image courtesy of Kar Han Tan)

Nevertheless, we evaluate image degradation for viewpoints that are at a lateral distance from the target viewpoint in Figure 7. Such shifts could be caused by imprecise tracking or quickly moving observers. We observe slight image degradation in the form of ringing. However, even the degraded image quality is above 30 dB in this experiment and varies in a periodic manner (Fig. 7, top: zoom-in).
This effect can be explained by the periodic viewing zones that are created by the employed parallax barrier display; a similar effect would occur for lenslet-based light field displays. We can account for a range of lateral viewpoints by changing the matrix in Equation 11 to $P = [P_1^T \ldots P_M^T]^T$, where each $P_i$ is the projection matrix of one of $M$ perspectives. Although this approach slightly degrades image quality for the central sweetspot, a high image quality (approx. 35 dB) is achieved for a much wider range of viewpoints. The lateral range tested in Figure 7 is large enough to demonstrate successful aberration correction for binocular vision, assuming that the inter-ocular distance is approx. 65 mm. Please also refer to additional experiments in the supplemental video.

We also show results for a viewer moving along the optical axis in Figure 8. Just as for lateral motion, we can account for variable distances by stacking multiple light field projection matrices into Equation 11 with incremental defocus distances. The resulting equation system becomes over-constrained, so the solution attempts to satisfy all viewing distances equally well. This results in slight image degradations at the sweetspot, but significantly improves image quality for all other viewing distances.

5 Implementation and Results

The proposed aberration-correcting display can be implemented using most light field display technologies. For the purpose of this paper, we demonstrate the feasibility of our techniques with a parallax barrier display [Ives 1903], because the required hardware is readily available and inexpensive. Please note that the proposed displays are not limited to this particular architecture, although the image formation (Eq. 4) has to be adjusted for any particular setup.

Hardware The prototype is shown in Figure 9. A pinhole-based parallax barrier mask is printed with 5080 DPI on a transparency with a Heidelberg Herkules imagesetter (www.pageworks.com).
To optimize light throughput and avoid diffraction, the pinholes have a size of 75 microns each and are spaced 390 microns apart. This mask is mounted at an offset of 5.4 mm in front of a conventional 2D screen using a clear acrylic spacer.

Figure 9: Prototype display. We construct an aberration-correcting display using parallax barriers. The barrier mask contains a pinhole array (left) that is mounted at a slight offset in front of an Apple iPod touch 4 screen (lower right). The display emits a light field with a high-enough angular resolution so that at least two views enter the pupil of a human observer. This effect is illustrated on the top right: multiple Arabic numerals are emitted in different viewing directions; the finite pupil size then creates an average of multiple different views on the retina (here simulated with a camera).

The screen is an Apple iPod touch 4th generation display with a pixel pitch of 78 microns (326 PPI) and a total resolution of 960 × 640 pixels. The dimensions of our prototype allow 1.66 light field views to enter a human pupil with a diameter of 6 mm at a distance of 25 cm. Higher-resolution panels are commercially available and would directly improve spatial and angular resolution and also facilitate larger viewing distances.

Software The light field prefiltering algorithm is implemented in Matlab on a PC with a 2.7 GHz 2-core CPU and 8 GB of RAM. The projection matrix is precomputed in about 3 minutes with radiances sampling the pupil at 20 rays/mm, resulting in approx. 11,300 effective rays per retinal pixel. We use the non-negative least squares solver package LBFGSB [Byrd et al. 1995] to solve Equation 11 in about 20 seconds for each image shown on the prototype. The projection matrix only needs to be computed once for each viewing distance, and we believe that an optimized GPU implementation of the solver could achieve real-time framerates.

Photographs of Prototype We show a variety of results captured from our prototype display in Figure 10 (center right column). These photographs are captured with a Canon T3i DSLR camera equipped with a 50 mm lens at f/8.
The display is placed at a distance of 25 cm to the camera. The camera is focused at 38 cm, placing the screen 13 cm away from the focal plane. This camera closely resembles a -6D hyperopic human eye.

Figure 10 (right column) shows the simulated results corrected with our techniques. The results captured from the prototype (Fig. 10, third column) closely resemble these simulations but contain minor artifacts that are due to moiré between the barrier mask and the display pixels. Compared to conventional 2D images shown on the screen (Fig. 10, first column), image sharpness is significantly improved without requiring the observer to wear glasses. We also compare our approach to the method proposed by Pamplona et al. [2012] for the same display resolution and spatio-angular tradeoff (Fig. 10, second column). Basically, their approach uses the same display setup as ours but a direct solution rather than the proposed prefilter. Our approach outperforms their method and allows for significantly increased resolution.

[Figure 10 columns, left to right: conventional display photograph; [Pamplona et al. 2012] photograph; proposed method photograph; proposed method simulation.]

Figure 10: Photographs of prototype display. The hyperopic camera simulates a human pupil with a diameter of 6 mm at a distance of 25 cm to the screen. Focused at 38 cm, images shown on a conventional screen are blurred (first column). While previous methods theoretically facilitate increased image sharpness (second column), achievable resolution is fundamentally limited by the spatio-angular resolution tradeoff of the required light field display. Light field prefiltering, as proposed in this paper, allows for significantly increased resolutions (third column). The prototype closely resembles simulations (right column).
(From top, source images courtesy of dfbphotos (flickr), Vincent van Gogh, Houang Stephane (flickr), JFXie (flickr), Jameziecakes (flickr), Paul Cezanne, Henri Matisse)

6 Evaluation

6.1 Visual Performance

We evaluate achieved quality in Figure 11. For this experiment, we simulate a 10 inch tablet with a 300 PPI panel and the pinhole parallax barrier with 6.5 mm offset. The tablet is held at a distance of 30 cm and viewed with a -6.75D hyperopic eye; images are shown on the center of the display in a 10.8 cm × 10.8 cm area. For each example, we compare our approach with the direct light
field approach and multilayer prefiltering. The target contrast for prefiltering methods is manually adjusted to achieve the best PSNR for each example.

[Figure 11 per-example metrics — proposed display: contrast 83%, 55%, 81%, 68% with QMOS 82.8, 22.8, 78.6, 58.4; multilayer display: contrast 15%, 17%, 15%, 13% with QMOS 5.6, 2.3, 5.0, 3.4; conventional (out of focus) and Tailored Displays: contrast 100% with QMOS between 21.8 and 36.8. Bottom row: detection probability maps (0-100%).]

Figure 11: Evaluation and comparison to previous work. We compare simulations of conventional and vision-correcting image display qualitatively and quantitatively using contrast and quality mean opinion score (QMOS) error metrics. A conventional out-of-focus display always appears blurred (second column). Multilayer displays with prefiltering improve image sharpness but at a much lower contrast (third column). Light field displays without prefiltering require high angular resolutions, hence provide a low spatial resolution (fourth column). The proposed method combines prefiltering and light field display to optimize image contrast and sharpness (right column). The QMOS error metric is a perceptually linear metric, predicting perceived quality for a human observer. We also plot maps that illustrate the probability of an observer detecting the difference between a displayed image and the target image (bottom row). Our method performs best in most cases. (Source images courtesy of Jameziecakes (flickr), Kar Han Tan, Mostaque Chowdhury (flickr), and Thomas Quine (flickr))

Contrast Metric Prefiltering involves modulating the image content by enhancing weaker frequencies.
Without utilizing the full degrees of freedom in the light field sense, the results obtained using multilayer prefiltering suffer from extreme contrast loss, here measured in Michelson contrast. This is defined as (Imax − Imin)/(Imax + Imin), where Imax and Imin are the maximum and minimum intensities in the image, respectively. Light field predistortion relies not on content modifications but on resampling of the light field, so contrast is not sacrificed. By efficiently using all views, the proposed light field prefiltering approach achieves contrast 3 to 5× higher than that of multilayer prefiltering. We note that the contrast achieved with light field prefiltering is not quite as good as that of the raytracing algorithm, which always gives full contrast. However, when closely inspecting the image content, the raytracing solution always results in blurred images, due to insufficient spatial resolution.

Perceptual Metric To assess both contrast and sharpness, we resort to HDR-VDP2 [Mantiuk et al. 2011], a perceptually-based image metric. The quality mean opinion score (QMOS) gives an evaluation of overall perceived image quality, and in most examples we score 2 to 3 times higher than other approaches. The images
in the third row are a particularly difficult example for prefiltering-based algorithms, because performance depends on the frequency content of the image, which in this case does not allow prefiltering to achieve a higher quality. The many high frequencies in this example tend to reduce image contrast, so that even our light field prefiltering scores slightly lower. Visually, our result still looks sharp. In the last row of Figure 11, we show a probabilistic map of whether a human can detect per-pixel differences for the fourth example. Clearly, our result has a much lower detection rate.

Note that the reduced image sharpness of conventional displays (Fig. 11, column 2) is due to defocus blur in the eye, whereas that of Tailored Displays (Fig. 11, column 4) is due to the low spatial resolution of the light field display. All displays in this simulation have the same pixel count, but the microlens array used in Tailored Displays trades spatial display resolution for angular resolution. Our solution also has to trade some spatial resolution, but the prefiltering method allows us to optimize this tradeoff.

[Figure 12 panels: (a) spherical, (b) coma, (c) trefoil; rows compare no correction, a conventional display, and the proposed light field display.]

Figure 12: Correcting for higher-order aberrations. We simulate the images that observers with different types of higher-order aberrations perceive (top row) and show corresponding point spread functions (top row, insets), which exhibit a range of different shapes. Most of them are difficult to compensate with a conventional 2D display (bottom row, lower left parts), although the blur kernel associated with trefoil (lower right) is frequency preserving and therefore invertible. The proposed aberration-correcting display is successful in compensating all of these aberrations (bottom row, upper right parts). (Source image courtesy of flickr user Jameziecakes)
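The Michelson contrast used in the evaluation of Section 6.1 is simple to compute; a minimal sketch in Python/NumPy:

```python
import numpy as np

def michelson_contrast(image):
    """Michelson contrast (Imax - Imin) / (Imax + Imin) of an image,
    where Imax and Imin are its extreme intensities."""
    i_max = float(np.max(image))
    i_min = float(np.min(image))
    return (i_max - i_min) / (i_max + i_min)
```

Note that a prefiltered image whose values are compressed toward mid-gray scores low on this metric, which is exactly the effect observed for multilayer prefiltering.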
6.2 Correcting Higher-Order Aberrations

Although aberrations of human eyes are usually dominated by myopia and hyperopia, astigmatism and higher-order aberrations may also degrade observed image quality. Visual distortions of a perceived wavefront are usually described by a series of basis functions known as Zernike polynomials. These are closely related to spherical harmonics, which are commonly used in computer graphics applications. Lower-order Zernike polynomials include defocus and astigmatism, whereas higher-order terms include coma, trefoil, spherical aberration, and many others. The effects of any such terms can easily be incorporated into the image inversion described in Section 3 by modifying the projection matrix P.

Figure 12 evaluates compensation of higher-order aberrations with the proposed approach. The top row shows the images an observer with these aberrations perceives without correction. Just as in the case of defocus, prefiltering for a conventional display usually fails to achieve high image quality (bottom row, lower left image parts). We observe ringing artifacts that are typical for solving ill-posed deconvolution problems. The proposed aberration-correcting display, on the other hand, successfully compensates for all types of aberrations (bottom row, upper right parts). What is particularly interesting to observe in this experiment is that some types of higher-order aberration can be reasonably well compensated with a conventional display. As seen in the right column of Figure 12 (bottom row, lower left part), the point spread function of trefoil, for example, is frequency preserving and therefore easy to invert. For most other types of aberrations, however, this is not the case. Extended experiments including astigmatism and additional higher-order aberrations can be found in the supplemental document.
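To illustrate how such a higher-order term enters the model, the point spread function of a Zernike-aberrated pupil can be simulated with standard Fourier optics; a sketch in Python/NumPy, using trefoil (Z3^-3 ~ r^3 sin 3θ) with an illustrative amplitude in waves and the Zernike normalization constant omitted (both are assumptions, not the paper's calibrated values):

```python
import numpy as np

def psf_from_trefoil(amplitude_waves=0.5, n=256):
    """Simulate the PSF of a pupil aberrated by a trefoil wavefront
    term, via the squared magnitude of the Fourier transform of the
    complex pupil function."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    aperture = (r <= 1.0).astype(float)             # circular pupil
    phase = 2.0 * np.pi * amplitude_waves * r**3 * np.sin(3.0 * theta)
    field = aperture * np.exp(1j * phase)           # complex pupil function
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()                          # unit-energy PSF
```

A PSF computed this way, resampled into light field coordinates, would modify the projection matrix P in the same way defocus does.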
7 Discussion

In summary, we present a computational display approach to correcting low- and high-order visual aberrations of a human observer. Instead of wearing vision-correcting glasses, the display itself predistorts the presented imagery so that it appears as a desired target image on the retina of the observer. Our display architecture employs off-the-shelf hardware components, such as printed masks or lenslet arrays, combined with computational light field prefiltering techniques.

We envision a wide range of possible implementations on devices such as phones, tablets, televisions, and head-worn displays. In this paper, we demonstrate one particular implementation using a low-cost hardware add-on to a conventional phone. In a commercial setting, this could be implemented using switchable liquid crystal barriers, similar to those used by the Nintendo 3DS, which would allow the display to dynamically adapt to different viewers or viewing conditions.

The proposed techniques assume that the precise location of the observer's eye w.r.t. the screen is either fixed or tracked. Robust solutions to eye tracking, however, are not a contribution of this paper. Each of the envisioned display types provides different challenges for tracking pupils; for generality, we focus our discussion on the challenges of correcting vision. Inexpensive eye trackers are commercially available today (e.g., http://theeyetribe.com) and could be useful for larger-scale vision-correcting displays; hand-held devices could use integrated cameras. Nevertheless, we evaluate strategies to account for a range of viewer motion, which could not only help decrease jittering of existing trackers but also remove the need for tracking in some applications.

Benefits and Limitations The proposed techniques offer significantly increased resolution and contrast compared to previously proposed vision-correcting displays.
Intuitively, light field prefiltering minimizes demands on angular light field resolution, which directly results in higher spatial resolution. For device implementations with lenslet arrays, the reduced angular resolution, compared to Pamplona et al. [2012], allows for shorter focal lengths of the employed lenslets, resulting in thinner form factors and easier fabrication. For implementations with parallax barriers, pinhole spacings are reduced, allowing for increased image brightness.

We treat lenslet arrays and parallax barriers as very similar optical elements throughout the manuscript. In practice, the image formation is slightly different, and the implementation of Equation 4 is adjusted for each case. As outlined in Section 1, the proposed system requires increased computational resources and modifications to conventional display hardware. Nevertheless, we demonstrate that an inexpensive hardware attachment for existing phones is sufficient to build the required device. Whereas the parallax barriers in our prototype are relatively light inefficient, lenslet arrays could overcome this limitation. Our current Matlab implementation does not support interactive frame rates. Real-time GPU implementations of similar problems [Wetzstein et al. 2012], however, are a strong indicator that interactive framerates could also be achieved for the proposed methods.
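The bound-constrained prefiltering solve (Equation 11, optionally with stacked projection matrices for several viewing conditions as in Section 4) can be sketched as follows; this uses SciPy's lsq_linear in place of the paper's LBFGSB package, and the matrix and image names are illustrative, not the paper's code:

```python
import numpy as np
from scipy.optimize import lsq_linear

def prefilter_light_field(projections, targets):
    """Solve for display pixel states l in [0, 1] so that P l
    approximates the desired retinal image(s) i in a least-squares
    sense. Passing several (P_m, i_m) pairs stacks them into
    P = [P_1^T ... P_M^T]^T, trading some sweetspot quality for
    robustness over a range of viewpoints or distances."""
    P = np.vstack(projections)
    i = np.concatenate([np.ravel(t) for t in targets])
    result = lsq_linear(P, i, bounds=(0.0, 1.0))   # panel's physical range
    return result.x
```

With a single projection matrix the solver reproduces the single-viewpoint prefilter; with several stacked matrices the over-constrained system is satisfied in the least-squares sense across all viewing conditions.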
While the proposed approach provides increased resolution and contrast compared to previous approaches, achieving the full target image resolution and contrast is not currently possible. We evaluate all system parameters and demonstrate prototype results under conditions that realistically simulate a human pupil; however, we do not perform a user study. Slight artifacts are visible on the prototype; these are mainly due to limitations in how precisely we can calibrate the distance between the pinhole mask and the screen pixels, which are covered by protective glass of unknown thickness. As artifact-free light field displays resembling the prototype setup are widely available commercially, we believe that the observed artifacts could be removed with more engineering effort. The parameter b in Section 4 is manually chosen, but could be incorporated into the optimization, making the problem more complex. We leave this formulation for future research.

Future Work We show successful vision correction for a variety of static images and precomputed animations. In the future, we would like to explore real-time implementations of the proposed techniques that support interactive content. Emerging compressive light field displays (e.g., [Wetzstein et al. 2012; Maimone et al. 2013]) are promising architectures for high-resolution display; vision-correcting devices could directly benefit from advances in that field. In the long run, we believe that flexible display architectures will allow for multiple different modes, such as glasses-free 3D image display, vision-corrected 2D image display, and combinations of vision-corrected and 3D image display. We would like to explore such techniques.

8 Conclusion

Correcting for visual aberrations is critical for millions of people. Today, most of us spend a significant amount of time looking at computer screens on a daily basis.
The computational display designs proposed in this paper could become a transformative technology that has a profound impact on how we interact with digital devices. Suitable for integration in mobile devices, computer monitors, and televisions, our vision-correcting displays could become an integral part of a diverse range of devices. Tailoring visual content to a particular observer may very well turn out to be the most widely used application of light field displays. Combined with glasses-free 3D display modes, the proposed techniques facilitate a variety of novel applications and user interfaces that may revolutionize user experiences.

Acknowledgements

This research was supported in part by the National Science Foundation at the University of California, Berkeley under grant number IIS-1219241, "Individualized Inverse-Blurring and Aberration Compensated Displays for Personalized Vision Correction with Applications for Mobile Devices", and at MIT under grant number IIS-1116718, "AdaCID: Adaptive Coded Imaging and Displays". Gordon Wetzstein was supported by an NSERC Postdoctoral Fellowship. The authors wish to thank Dr. Douglas Lanman and Prof. Austin Roorda for early feedback and discussions.

References

AKELEY, K., WATT, S. J., GIRSHICK, A. R., AND BANKS, M. S. 2004. A stereo display prototype with multiple focal distances. ACM Trans. Graph. (SIGGRAPH) 23, 3, 804–813.

ALONSO JR., M., AND BARRETO, A. B. 2003. Pre-compensation for high-order aberrations of the human eye using on-screen image deconvolution. In IEEE Engineering in Medicine and Biology Society, vol. 1, 556–559.

ARCHAND, P., PITE, E., GUILLEMET, H., AND TROCME, L., 2011. Systems and methods for rendering a display to compensate for a viewer's visual impairment. International Patent Application PCT/US2011/039993.

BYRD, R. H., LU, P., NOCEDAL, J., AND ZHU, C. 1995. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput. 16, 5 (Sept.), 1190–1208.
CHAI, J.-X., TONG, X., CHAN, S.-C., AND SHUM, H.-Y. 2000. Plenoptic sampling. In ACM SIGGRAPH, 307–318.

COSSAIRT, O. S., NAPOLI, J., HILL, S. L., DORVAL, R. K., AND FAVALORA, G. E. 2007. Occlusion-capable multiview volumetric three-dimensional display. Applied Optics 46, 8, 1244–1250.

DURAND, F., HOLZSCHUCH, N., SOLER, C., CHAN, E., AND SILLION, F. X. 2005. A frequency analysis of light transport. In Proc. SIGGRAPH, 1115–1126.

GORTLER, S. J., GRZESZCZUK, R., SZELISKI, R., AND COHEN, M. F. 1996. The lumigraph. In Proc. SIGGRAPH, 43–54.

HECHT, E. 2001. Optics (Fourth Edition). Addison-Wesley.

HIRSCH, M., WETZSTEIN, G., AND RASKAR, R. 2014. A compressive light field projection system. ACM Trans. Graph. (SIGGRAPH) 33.

HUANG, F.-C., AND BARSKY, B. 2011. A framework for aberration compensated displays. Tech. Rep. UCB/EECS-2011-162, University of California, Berkeley, December.

HUANG, F.-C., LANMAN, D., BARSKY, B. A., AND RASKAR, R. 2012. Correcting for optical aberrations using multilayer displays. ACM Trans. Graph. (SIGGRAPH Asia) 31, 6, 185:1–185:12.

IVES, F. E., 1903. Parallax stereogram and process of making same. U.S. Patent 725,567.

JONES, A., MCDOWALL, I., YAMADA, H., BOLAS, M., AND DEBEVEC, P. 2007. Rendering for an interactive 360° light field display. ACM Trans. Graph. (SIGGRAPH) 26, 40:1–40:10.

LANMAN, D., AND LUEBKE, D. 2013. Near-eye light field displays. ACM Trans. Graph. (SIGGRAPH Asia) 32, 6, 220:1–220:10.

LANMAN, D., HIRSCH, M., KIM, Y., AND RASKAR, R. 2010. Content-adaptive parallax barriers: optimizing dual-layer 3D displays using low-rank light field factorization. ACM Trans. Graph. 29, 163:1–163:10.

LEVIN, A., HASINOFF, S. W., GREEN, P., DURAND, F., AND FREEMAN, W. T. 2009. 4D frequency analysis of computational cameras for depth of field extension. ACM Trans. Graph. (SIGGRAPH) 28, 3.

LEVOY, M., AND HANRAHAN, P. 1996. Light field rendering. In Proc. SIGGRAPH, 31–42.

LIPPMANN, G. 1908.
Épreuves réversibles donnant la sensation du relief. Journal of Physics 7, 4, 821–825.

LOVE, G., HOFFMAN, D., HANDS, P., GAO, J., KIRBY, A., AND BANKS, M. 2009. High-speed switchable lens enables the development of a volumetric stereoscopic display. Optics Express 17, 15716–15725.
MAIMONE, A., WETZSTEIN, G., HIRSCH, M., LANMAN, D., RASKAR, R., AND FUCHS, H. 2013. Focus 3D: Compressive accommodation display. ACM Trans. Graph. 32, 5, 153:1–153:13.

MANTIUK, R., KIM, K. J., REMPEL, A. G., AND HEIDRICH, W. 2011. HDR-VDP-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions. In Proc. ACM SIGGRAPH, 40:1–40:14.

MASIA, B., WETZSTEIN, G., DIDYK, P., AND GUTIERREZ, D. 2013. A survey on computational displays: Pushing the boundaries of optics, computation, and perception. Computers & Graphics 37, 8, 1012–1038.

NG, R., AND HANRAHAN, P. 2006. Digital correction of lens aberrations in light field photography. In Proc. SPIE International Optical Design.

NG, R. 2005. Fourier slice photography. In ACM SIGGRAPH 2005 Papers, 735–744.

PAMPLONA, V. F., MOHAN, A., OLIVEIRA, M. M., AND RASKAR, R. 2010. Netra: interactive display for estimating refractive errors and focal range. ACM Trans. Graph. (SIGGRAPH) 29, 77:1–77:8.

PAMPLONA, V. F., PASSOS, E. B., ZIZKA, J., OLIVEIRA, M. M., LAWSON, E., CLUA, E., AND RASKAR, R. 2011. Catra: interactive measuring and modeling of cataracts. ACM Trans. Graph. (SIGGRAPH) 30, 4, 47:1–47:8.

PAMPLONA, V., OLIVEIRA, M., ALIAGA, D., AND RASKAR, R. 2012. Tailored displays to compensate for visual aberrations. ACM Trans. Graph. (SIGGRAPH) 31.

RAMAMOORTHI, R., MAHAJAN, D., AND BELHUMEUR, P. 2007. A first-order analysis of lighting, shading, and shadows. ACM Trans. Graph. 26, 1.

TAKAKI, Y. 2006. High-density directional display for generating natural three-dimensional images. Proc. IEEE 94, 3.

VITALE, S., SPERDUTO, R. D., AND FERRIS, III, F. L. 2009. Increased prevalence of myopia in the United States between 1971–1972 and 1999–2004. Arch. Ophthalmology 127, 12, 1632–1639.

WETZSTEIN, G., LANMAN, D., HEIDRICH, W., AND RASKAR, R. 2011.
Layered 3D: Tomographic image synthesis for attenuation-based light field and high dynamic range displays. ACM Trans. Graph. (SIGGRAPH) 30, 4.

WETZSTEIN, G., LANMAN, D., HIRSCH, M., AND RASKAR, R. 2012. Tensor displays: Compressive light field synthesis using multilayer displays with directional backlighting. ACM Trans. Graph. (SIGGRAPH) 31.

WONG, T. Y., FOSTER, P. J., HEE, J., NG, T. P., TIELSCH, J. M., CHEW, S. J., JOHNSON, G. J., AND SEAH, S. K. 2000. Prevalence and risk factors for refractive errors in adult Chinese in Singapore. Invest. Ophthalmol. Vis. Sci. 41, 9, 2486–2494.

YELLOTT, J. I., AND YELLOTT, J. W. 2007. Correcting spurious resolution in defocused images. Proc. SPIE 6492.