9. Millions have poor vision, but are not getting corrected…
[Infographic: of 7 billion people, 2B have refractive errors, 0.6B have a URE, and 5B have a cell phone. Photos: Kenya, India.]
Source: World Health Organization, Vision 2020 Tech Report.
10. 2.4 Billion People w/out Glasses who need them around the world
[Bar chart: Billions of People with Uncorrected Refractive Error, by Region — Emerging Asia ~1.6–1.7, Africa & Middle East 0.50, Latin America 0.13, Europe 0.10, North America 0.02; totals of 1.6 and 2.4 billion annotated.]
Source: Essilor, Infomarket 2009, CPB Research; numbers may not add due to rounding.
17. Relaxed Eye with Myopia
[Diagram: a point at infinity reaches the eye as a blurred point. Focusing-range scale from infinity to ~10cm for perfect vision, myopia, and hyperopia.]
18. Relaxed Eye with Myopia
[Diagram: pinholes in front of the eye turn the point at infinity into distinct image points — Scheiner's Principle.]
19. Relaxed Eye with Myopia
[Diagram: a display replaces the point source; spots A and B behind the pinholes create a virtual point at infinity, seen as distinct image points.]
20. Relaxed Eye with Myopia
[Diagram: moving spots A and B towards each other brings the virtual point to a finite distance; the image points are still distinct.]
21. Relaxed Eye with Myopia
[Diagram: the spots keep moving towards each other until the image points overlap.]
22. Relaxed Eye with Myopia
[Diagram: when the points overlap, the virtual point sits at a finite distance d from the eye.]
23. Relaxed Eye with Myopia
[Diagram: 1/d relates the overlap condition to the point at infinity.]
24. Relaxed Eye with Myopia
[Diagram: geometric variables a, c, t, f give the virtual-point distance d = fa/(2c) + t.]
33. Interactive Method
Farthest Focal Point (myopia, hyperopia, astigmatism)
37. Best Fitting on an Astigmatic Curve
P(θ) = C · sin²(θ − axis) + S
Unknowns: Cylinder (C), Cylinder Axis, Sphere (S)
39. Measuring Accommodation Range
[Diagram: focusing-range scale from infinity to ~10cm for perfect vision, myopia, and hyperopia. Step 1: far limit. Step 2: near limit.]
42. Relaxed Eye
[Diagram: display spots A and B create a virtual point at the far limit; the points overlap.]
43. Accommodated Eye
[Diagram: moving the points towards each other brings the virtual point closer; the subject accommodates to fix the "blur" and keep the points overlapped.]
45. Accommodated Eye
[Diagram: eventually the subject cannot accommodate more than the previous point.]
46. Patterns for Alignment Task
[Figure: displayed A/B stripe patterns and the subject's view — Visual Cryptography [NaorShamir94].]
49. Summary of Interaction
Accommodation Range: Farthest Point (myopia, hyperopia, astigmatism) and Nearest Point (presbyopia).
50. Device Resolution
Channel size: 25um. Resolution is a function of the display DPI:
• Samsung Behold II – 160 DPI = 0.35D
• Google Nexus One – 250 DPI = 0.20D
• Apple iPhone 4 – 326 DPI = 0.14D
51. Limitations
• Children
• Ability to align lines
• Resolution is a function of the display DPI
– Samsung Behold II – 160 DPI – 0.35D
– Google Nexus One – 250 DPI – 0.2D
– Apple iPhone 4 – 326 DPI – 0.14D
70. NETRA team at NECO
11 adults – 0.34D average difference from subjective evaluation with no cycloplegia.
[Scatter plot: NETRA vs. reference refraction (AR, Subj, Reference series), both axes from −8.00 to 1.00 D.]
71% of the measurements have a max error of 0.5D.
79. CATRA: Interactive Measuring
and Modeling of Cataracts
Vitor F. Pamplona Erick B. Passos Jan Zizka Manuel M. Oliveira
Everett Lawson Esteban Clua Ramesh Raskar
MIT Media Lab – Camera Culture
86. Four Resulting Maps
Occlusion: Opacity Map (position, size), Attenuation Map (brightness). Scattering: Contrast Map (contrast), PSF Map.
[Figure: grids of letter-C patterns illustrating the maps.]
87. Four Stages of Interaction
[Figure: the same four maps, measured over 0.6mm sections of a ~3mm aperture.]
88. Forward Scattering Sensed on Fovea
[Diagram: Light Box, LCD1, LCD2, Lens; testing sections are projected onto the fovea.]
106. Interactive Techniques and Maps
[Flow diagram: Presence of Cataracts (Binary Answer); Position, Size and Shape (Opacity Map); Brightness Test (Attenuation Map).]
107. Interactive Techniques and Maps
[Flow diagram: same as above, plus the Sub-aperture Contrast Test (Contrast Map), illustrated with a grid of letter-C patterns.]
108. Contrast Test
[Diagram: LCD1, LCD2, eye, perceived image; a rotated low-contrast letter C shown with increasing contrast. The subject presses the key matching the C's orientation, e.g. the right key.]
112. Interactive Techniques and Maps
[Flow diagram: Presence of Cataracts (Binary Answer); Position, Size and Shape (Opacity Map); Brightness Test (Attenuation Map); Sub-aperture Contrast Test (Contrast Map).]
113. Interactive Techniques and Maps
[Flow diagram: same as above, plus the Sub-aperture PSF Match (PSF Map).]
117. Point Spread Function Matching
[Diagram: LCD1, LCD2, eye, perceived image; the sub-aperture point spread function.]
119. Reducing Search Space for PSF
[Flow diagram: low- and high-attenuation readings from the Attenuation Map, together with the Contrast Map, constrain the Sub-aperture PSF Match (PSF Map).]
138. Awards: MIT Global Challenge & MIT Ideas Competitions
EyeCatra: $5K Winner Award, MIT Ideas Competition 2011.
EyeCatra: $5K Public Choice Award, MIT Global Challenge 2011.
150. Computer Generated Glasses
[Diagram: focal ranges for perfect vision vs. myopia; the subject's focal point does not change.]
151. Computer Generated Glasses
[Diagram: focal ranges for perfect vision vs. hyperopia and presbyopia; the subject's focal point does not change.]
154. Tailoring Process
Myopic view: -3D. He can focus up to 33cm (12in); distance display-eye: 50cm.
155. Tailoring Process
[Diagram: a light-field display brings the focus within the subject's 33cm range at a display-eye distance of 50cm.]
157. Tailoring Process
[Diagram: same setup; a pixel size of 96um at 33cm gives 1-arc-minute resolution.]
160. Tailoring for Astigmatism
Subject's prescription: -2D -1D @ 90
[Diagram: the light-field display puts two points in focus; the subject focuses from 30cm to 50cm, where the subject's eye accommodates.]
161. Single-Focus Multi-Depth Displays
For a given depth in focus (accommodation), a single object may be split into anisotropic instances placed at distinct depths.
162. Wavefront Maps
Sphere: -2D; Cylinder: -1D; Axis: 90°
[Figure: wavefront maps at 0 and 90 degrees; lens focal length in each sub-aperture k; Zernike functions.]
193. Tailored Display Limitations
• Eyes fixed relative to the display
– Similar to 3D Displays
– Depends on the eye aberrations
• High-resolution LCD panels (PPI)
– Giga-pixel displays for monitors
• Other ocular diseases may affect our results.
195. Thesis Conclusions
• NETRA: Optics and UI for Refraction
– The Inverse of Shack-Hartmann Aberrometer
– Myopia, Hyperopia, Astigmatism, Focal range
– Accuracy and Resolution close to Standard Practice
• CATRA: Optics and UI for Cataracts
– Forward Scattering and Foveal Projection
– Four brand new Maps
• Tailored Displays: Compensate for Aberrations
– First-of-its-kind Multi-Depth Display
– High-order Aberrations and Cataracts
Editor's Notes
Everybody knows this machine, right? They call it a thermometer. For me, this is the most amazing device medicine has ever used. And you know, nobody teaches us how to use a thermometer. We just somehow learned when we were kids. We started using it and seeing that after the red mark something bad is going on and we need to see a doctor. It's cheap, simple to use, no language barriers, no versions for rich and poor, it has a global spread and provides the first screen for a lot of diseases. I would guess that this device has saved more lives than anything else, just by telling people when they should see a doctor.
Thanks, xxx
It is a clip-on for phones: you put it on, run the app, look close, align those red and green lines, and the result is your refractive error, or cataract condition. You can test your eyesight as many times as you want, by yourself, anywhere, with the same accuracy an optometric tool provides today.
Now imagine if you had a 2-dollar plastic device that you put close to your eye, press a button that says compute, and it gives you numbers that represent your nearsightedness, farsightedness, astigmatism, presbyopia and cataracts. Sort of a thermometer for vision, in the sense that once you have your number you know when you need to see a doctor. Well, what we have been working on is called NETRA, and it does exactly what I just said.
2 billion people have refractive errors, and half a billion in developing countries worldwide have uncorrected vision that affects their daily livelihood. They don't have access to an optometrist, or it is simply too expensive. While making and distributing lenses has become quite easy now, surprisingly there is still no easy solution for measuring eyesight. Can we use a fraction of the 4.5B cellphone displays to address this problem?
Why? Lack of felt need for eye care. While eyeglass manufacturers/retailers can produce $0.25 eyeglasses, they have minimal cost-effective, quality, or remote eye diagnostics to build demand and efficiently service those in need. Current eye testing tools are expensive, bulky, require significant training, and don't allow for data digitization or remote linkages to products/services.
The most accurate method is based on the so-called Shack-Hartmann wavefront sensor. It involves shining a laser at the back of the retina and observing the wavefront using a sophisticated sensor. We ask the user to generate a spot diagram. But navigating in a high-dimensional space is challenging, so we came up with a strikingly simple approach to let the user interactively create the spot diagram. We are the first to make the connection between Shack-Hartmann and light fields (and it goes well with recent work in computational photography about ALF and Zhang/Levoy), and there is a connection to adaptive optics and astronomy. The way this device works is that it shines a laser into the eye; the laser is reflected by the retina and comes out of the eye distorted by the cornea. These light rays reach an array of lenses that focus them to dots on a sensor. The device measures how much these dots deviate from the ideal case. Since it uses lasers, the device is expensive and requires trained professionals.
For a normal eye, the light coming out of the eye forms a parallel wavefront. The sensor has a lenslet array and we get a spot diagram of uniform dots. This lenslet array should remind you of a light-field camera, and in fact Levoy and others showed last year that there is a close relationship between the two. In addition, Zhang and Levoy, plus our group, have shown the relationship between wavefront sensing and light-field sensing.
When the eye has a distortion, the spot diagram is not uniform, and the displacement of the spots from the center indicates the local slope of the wavefront. From the slopes one can integrate and recover the wave shape.
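The slope-integration step can be sketched numerically. Below is a 1D toy example (the focal length and curvature values are made up, not the device's): spot displacements encode the local slope, and integrating the slopes recovers a quadratic wavefront up to a constant offset.

```python
import numpy as np

# Toy 1D Shack-Hartmann sketch: a quadratic wavefront
# w(x) = 0.5 * k * x^2 has local slope w'(x) = k * x.
f = 5.0                      # lenslet focal length (mm, assumed)
k = 0.01                     # curvature of the toy wavefront (1/mm)
x = np.linspace(-2, 2, 41)   # lenslet positions (mm)

# Each spot's displacement from its lenslet center encodes the slope.
spot_shift = k * x * f       # displacement = slope * focal length
slope = spot_shift / f       # recover local slope from the spot diagram

# Integrate the slopes (trapezoid rule) to recover the wavefront shape,
# up to an irrelevant constant offset.
w = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))
w_true = 0.5 * k * x**2
w_true -= w_true[0]          # match the integration constant

print(np.max(np.abs(w - w_true)))  # reconstruction error, ~0 for this toy case
```

In 2D the same idea applies per lenslet, with a 2D integration (or a Zernike fit) replacing the cumulative sum.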
NETRA uses an exact inverse of this sensor. We get rid of the laser and instead show the same spot diagram on a cellphone display; for a normal eye, it will appear as a single dot to the user. And we replace the sensor with a light-field display. If the user sees a single red dot, he does not need glasses; but if he sees more than one, he interacts with this display.
For an eye with distortion, the user will interactively displace the 25 points so that he sees a single spot. Of course, changing 25 spot locations is cumbersome, but we realized that there are only 3 parameters in an eye prescription, and we help the user navigate through this space efficiently. If you think about this theory, you will realize that we have the dual of the Shack-Hartmann: first, we threw out the laser.
We need to measure the difference between the subject's farthest focal point and infinity.
So, let's start with an eye with myopia. Remember, they cannot see far, so a red point at infinity will look to them like a red blur.
Using Scheiner's principle, if we put two pinholes in the field, this will instead create two distinct dots.
Instead of a distant point source, we put an LCD display behind the pinholes. If we draw two spots exactly under these pinholes, we create a virtual point at infinity.
So, as we move the two red circles toward each other, the virtual point gets closer to the subject and he sees the two red dots getting closer.
When these two red circles overlap for the subject, we can compute d based on the spot displacements,
Which is the distance between the eye and this virtual point.
It turns out that the inverse of d is the refractive power required for this person to see objects at infinity clearly. In other words, it gives the lens that will shift the accommodation range of this subject back to the regular one.
And the number of clicks required for alignment indicates the refractive error.
In practice we display lines on the screen and the subject overlaps these lines by pressing the buttons of the cell phone or in the computer.
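The distance-to-power conversion described above is just a reciprocal; a minimal sketch (the negative sign for myopia is an assumed convention, not stated in the notes):

```python
def far_point_to_power(d_m):
    """Correction (in diopters) that shifts a myopic far point at
    distance d (meters) back to infinity: P = -1/d.
    The sign convention for myopia is assumed here."""
    return -1.0 / d_m

# A far point measured at 0.5 m implies a -2.0 D lens.
print(far_point_to_power(0.5))  # -2.0
```

In the actual device, d itself comes from the spot displacements via the geometry on the slides (d = fa/(2c) + t), and the click count maps to those displacements.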
which is an angle-dependent refractive error. An astigmatic subject has two main focal lengths in perpendicular meridians. One …
Stronger and one weaker
Think of a cornea with the shape of an American football, creating a cylindrical aberration with unknown focal length and axis.
As you can see in this video, the astigmatic lenses create a deviation in the path of the pattern, and the patterns may never overlap, turning the alignment task into a 2D search for some angles.
However, if we draw lines perpendicular to the measured angle, the alignment task is again a 1D search. The deviation still exists, but the pattern makes the task easier.
So, we do the alignment task for a few meridians
By showing oriented lines on the display.
In the end, we best fit the sinusoidal curve over the four measured values to estimate the astigmatic parameters.
The required correction is now a function of the measured angle. In order to measure the farthest point for these subjects, we need to evaluate the cylindrical component, the spherical component, and the angle theta in the equation. However, the interpolation of refractive powers between C and S leads to a situation where the pattern drawn on the screen matters.
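The sinusoidal best fit over a few measured meridians can be done with linear least squares, since P(θ) = S + C·sin²(θ − axis) linearizes to b0 + b1·cos 2θ + b2·sin 2θ. A sketch (the sign and axis conventions here are illustrative assumptions, not necessarily the thesis' exact ones):

```python
import numpy as np

def fit_astigmatism(theta_deg, power):
    """Least-squares fit of P(t) = S + C*sin^2(t - axis) to powers
    measured at a few meridians, via the linearization
    P = b0 + b1*cos(2t) + b2*sin(2t)."""
    t = np.radians(np.asarray(theta_deg, float))
    A = np.column_stack([np.ones_like(t), np.cos(2 * t), np.sin(2 * t)])
    b0, b1, b2 = np.linalg.lstsq(A, np.asarray(power, float), rcond=None)[0]
    r = np.hypot(b1, b2)  # = C/2
    return b0 - r, 2 * r, np.degrees(0.5 * np.arctan2(-b2, -b1)) % 180.0

# Synthetic check: S = -2 D, C = 1.5 D, axis = 30 degrees,
# measured at the four meridians used in the talk.
th = np.array([0.0, 45.0, 90.0, 135.0])
P = -2.0 + 1.5 * np.sin(np.radians(th - 30.0)) ** 2
S, C, axis = fit_astigmatism(th, P)
print(S, C, axis)  # approximately -2.0 1.5 30.0
```

With four measurements and three unknowns the system is overdetermined, so noisy alignments are averaged out by the fit.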
Ours is the only system where one can estimate not only the farthest point
one can focus but also
the nearest point without any mechanically moving parts. So, in order to measure the closest reading point
We draw a pattern on the screen that induces accommodation. In this way, when we move A and B closer on the screen,
the user will try to focus on a closer object. We can move this virtual point all the way to the nearest discernible point.
When the user is not able to focus anymore, the visual system gives up and the user starts seeing more than one pattern.
As I said before, this is possible because we can draw whatever we want on the display. We tested many patterns, static and dynamic, including visual cryptography.
It turns out that the best pattern to induce accommodation is sinusoidal curves aligned perpendicular to the measurement angle.
We have complete freedom over the pattern g on the display and the filter pattern h, which has been a pinhole grid so far. But observe that the subject's view is just a convolution of the pattern g and the filter h. So here is a very interesting effect: if we show this convolved pattern with the same filter, we get a double convolution. If h is a broadband random-dot pattern, the double convolution is a delta function, which means the user will again see the pattern g.
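The claim that a broadband random-dot pattern convolved with itself approximates a delta can be sanity-checked numerically. This is a 1D toy, not the actual display patterns: the circular autocorrelation of a zero-mean random ±1 sequence has a unit peak at lag 0 and small sidelobes everywhere else.

```python
import numpy as np

rng = np.random.default_rng(0)

# Broadband random-dot filter: its autocorrelation approximates a delta,
# so convolving a pattern with h twice (g * h * h) roughly returns g.
n = 4096
h = rng.choice([-1.0, 1.0], size=n)   # zero-mean broadband pattern

# Circular autocorrelation via FFT: ifft(|fft(h)|^2), normalized by n.
acorr = np.fft.ifft(np.abs(np.fft.fft(h)) ** 2).real / n

peak = acorr[0]                        # lag-0 value: mean of h^2 = 1
sidelobe = np.max(np.abs(acorr[1:]))   # everything else stays small
print(peak, sidelobe)                  # peak is 1.0; sidelobes are near 0
```

The sidelobes shrink like 1/sqrt(n), which is why a sufficiently dense random-dot filter makes the double convolution look like the identity to the subject.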
We exploited this trick to build a Viewmaster-style system. In this case, instead of moving lines closer, we scale the pattern. The amount of scale gives us the refractive power needed.
As a summary, our method has two steps. The first measures the farthest point in focus at many angles using lines, and the second measures the nearest point using sinusoids oriented at the angle of astigmatism.
Reading charts appear to be an easy solution, but this method has many problems. Sharpness of legible text is very subjective. The brightness of the chart has to be very carefully chosen; otherwise the pupil size will change, increasing depth of field and allowing the user to recognize even lower rows. The trial lenses plus the lens frame the doctor uses also cost over $150. Reading-chart tests involve using a frame or a phoropter: the doctor will swing a sequence of lenses in front of your eye and ask which lens allows you to see the lower rows on the reading chart.
Since we are relying on user interaction, the subject has to be aware of the alignment tasks, so very young children may not be able to run the test. Instead of just one eye, one may use both eyes to exploit convergence. And of course, the resolution of NETRA itself is a function of the resolution of the display. With a 326 dpi display, resolution is 0.14 diopters, and prescription glasses come in increments of 0.25 diopters, so our system is already sufficiently accurate.
Using a minification system, we performed a user study with a high-resolution display. Using a camera to simulate a perfect eye and a trial set of lenses to simulate lens aberrations, the average spherical error was under 0.09 diopters, with an astigmatism axis error of 8 degrees.
We started winning awards at MIT, including the MIT Ideas Award. Almost at the same time, we went to the finals of the MIT 100K business plan competition by proposing a model to take the eye care center home. And subsequently we won a Google Grant and a Deshpande Grant. The Deshpande Center grant process was rigorous and competitive, forcing the early team to clearly communicate and convince technology and business leaders of the applicability of the device in a real-world, scalable business. The Google grant essentially came after Larry Page asked Ramesh if we could do our testing on Android – of course we can, and Larry was excited! (true story)
NETRA-G has been tried out by people from around India and the world, ranging from optical shops to hospitals.
We validated this extension by measuring the closest sharp point in cameras and comparing with physical measurements.
The second round of validation included 6 humans. In both cases we could get pretty close to the actual closest sharp point.
We were accepted into the Launch program, an incubation program sponsored by NASA, USAID, and Nike to honor the top 5 health innovations in America.
Morgenthaler Ventures, a prominent Bay Area VC firm, held a yearly competition looking for the best Health IT companies in the world. From hundreds of applicants, EyeNetra was chosen as the most promising Health IT start-up in the world.
USA = $8 eyeglass market. PoC diagnostics: $18.7 billion market by 2014. Homes: device sales (to consumers via Best Buy), royalty on eyeglasses sold. Pharmacies: royalty on eyeglasses sold at the stores. Ophthalmologists/Optometrists: device sales and pay per use – a lower-cost autorefractor.
Thanks, XXX. NETRA is a clip-on device that you attach to your cell phone. You look close, press some buttons, you hit calculate, and it gives you the prescription for glasses. It's a 2-dollar device that measures nearsightedness, farsightedness and astigmatism with the same accuracy that doctors have in their clinic. To understand what happened here, let's think about the evolution of photography.
CATRA is a snap-on eyepiece for mobile phones that measures and quantifies cataracts in the human eye. The patient looks through it and responds to a few patterns drawn on the screen, and the app generates, for the first time, maps showing occluders and their scattering profiles.
We've been working with NGOs on NETRA, and they reminded us that although refractive error is the second leading cause of preventable blindness, cataract is the first. We ended up realizing that we were actually targeting one of the most prevalent diseases on this planet. In fact, all of us will have cataracts if we live long enough.
Well, cataracts are these clouds you may see in someone's eye, which reflect and scatter light as it goes through one of these white blobs. In the subject's view, they create glare and blurriness.
Cataracts are detected, measured and diagnosed through this device. It is called a slit-lamp microscope and is essentially a search platform for doctors. Clinicians adjust the device's several degrees of freedom to manually search for cataracts in one's eye. This device has really not changed in several decades…
Conceptually speaking, this device is very simple. It shines a slit of light into the eye, which gets reflected by the cataract and goes back to the viewer. The clinician will see a white blob and will subjectively rate it from 1 to 4 according to his notion of severity; 3 and 4 are advanced cases of the disease and suggest surgery. As you may see in this image, this method relies on what we call back-scattering analysis: the clinician relies on the reflection of the scattering spot, which may not represent the actual effects cataracts are creating in the subject's view.
Instead of relying on someone else's judgment of severity, CATRA works with forward-scattering analysis. This snap-on can be seen as a light-field display which, when placed up close, scans the lens of the eye section by section. So, by relying on the ability to show the scattering profile of a section of the lens, we built interactive techniques to transform the visual information the subject is seeing into quantifiable data.
We propose 4 maps to model occluders and replace the currently used subjective evaluation, the one from 1 to 4. The first map is what we call an opacity map. It consists of binary information (has or has not cataracts) per section of the lens; it tells us the position, size and shape of the occluders. The attenuation map is a density test per section of the eye; it tells us how reflective and transmissive an occluded section is. The third map is what we call a contrast map: a contrast test is made per section of the eye and tells us how widely light spreads from each section. The fourth map holds the point spread function per section of the eye. These four maps are divided into occlusion and scattering analysis.
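Later notes model each section's PSF as a Gaussian whose peak comes from the attenuation map and whose sigma comes from the contrast map. A toy sketch of that per-section model (all numbers illustrative; the real maps come from the interactive tests):

```python
import numpy as np

def section_psf(peak, sigma, size=15):
    """Toy per-section PSF for the Gaussian model in the notes:
    the attenuation map supplies the peak, the contrast map the
    spread sigma (pixel units here; values are illustrative)."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    return peak * np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

# A clear section: tall, narrow peak. A cataract-affected section:
# attenuated peak and wider spread (more forward scattering).
clear = section_psf(peak=1.0, sigma=0.8)
cloudy = section_psf(peak=0.3, sigma=3.0)
print(clear.max(), cloudy.max())  # 1.0 0.3
```

Summing such per-section PSFs over the aperture is one way to simulate the cataract-affected view mentioned in the next note.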
Based on these maps, we can simulate an individual cataract-affected vision and the progress of the disease
Now, notice that these maps are measuring a region that is about 3mm in diameter, and each section is only 0.6mm, so any small variation in the position of the device, face or gaze will make the software miss a cataract spot.
We thought a lot about it, and after several iterations we came up with a design that relies on forward scattering and always projects patterns onto the fovea, so the subject will not shift gaze. Our design is essentially a modified 3D display: two LCDs with a light box behind them (the LCDs work as programmable masks) and an additional lens placed one focal length from the parallax barrier. This setup allows us, for instance, to open a pinhole in LCD2 and 3 pixels on LCD1, so that light rays coming out of the device pass through 3 testing regions on the crystalline lens and converge to a single point on the fovea. With this setup, we can alternate among testing sections without breaking the subject's visual point of reference: no matter which position of the lens we are testing, the subject always sees a steady green dot.
The intuition behind this design relies on the role of each LCD. Each pixel on LCD1 represents a position on the crystalline lens, or a testing site. Each pixel on LCD2 is mapped to a position on the fovea. So, if we want to draw a visual stimulus, we draw it on LCD2; if we want to test different positions, we change LCD1.
Here is an example with a subject with cataracts. With our setup it is possible to shine a light ray that hits the cataract spot. The cataract will reflect and scatter light, and a small amount of the scattered energy will reach the retina.
If the scattering is too big or the cataract is too reflective, we can trade resolution for brightness and open neighboring pixels on LCD1 to create collimated beams of light, increasing the testing site and also the amount of energy being focused into a single point on the retina. This allows us to probe the scattering element and identify its properties without changing the user's point of reference.
Well, the eye, like any other imaging device, has a point spread function. If the eye does not have cataracts, its PSF is a peak.
In the case of mild cataracts, this peak decreases and the PSF assumes a Gaussian profile.
In advanced cases of the disease, you cannot even see a peak at all. At this stage, the subject will not be able to notice objects in front of him.
In this work, we want to map what is on the aperture of the eye in order to estimate the point spread function and thus compute a visual representation of an individual cataract-affected eye. To do so, we have to figure out values for sigma and the peak of the point spread function. These values are estimated through 5 interactive techniques that run in sequence.
The first technique is a binary test for the presence of cataracts: yes or no, does the subject have cataracts?
Although we’ve shown the single-LCD mobile phone based solution, we’re now going to move on to the general design. In our optical design, we open a pinhole in the center of LCD2 and we keep moving a pixel on LCD1. When this scanning procedure hits a cataract spot, the dot disappears and the subject realizes he has something blocking his view.
In a 2D example, we have a moving dot on screen, a pinhole open on LCD2, this will scan the lens of the eye and the subject will notice a difference between occluded and clear paths.
Let's say the subject has cataracts and thus we move forward to our maps.
The first one is the opacity map. It tells us the position, size and density of the cataract.
In the optical scheme, it is exactly the same procedure as before: a pinhole on LCD2 and a moving pattern on LCD1. However, the pattern on LCD1 moves slower, and when the scanning process hits a cataract, the dot fades away and the subject presses a button. By marking all regions where the dot faded, the app computes an opacity map.
Since we now know where the cataracts are, we can compute their densities. This will tell us how reflective they are.
Now we alternate pixels on LCD1 in such a way that one point hits a cataract spot and the second takes a clear path. The subject sees both alternating in his view and decreases the intensity of the clear path, by pressing buttons on the phone, in order to match the occluded one.
In essence, the subject decreases the brightness up to the point where he does not notice any difference between the patterns.
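This matching step can be sketched as a simple method-of-adjustment loop. All numbers and the observer model below are hypothetical, just to show the mechanic: the clear path is stepped down until the simulated observer can no longer tell it apart from the occluded path, and the final setting estimates the section's transmittance.

```python
# Hypothetical brightness-matching sketch (not the thesis' actual code).
transmittance = 0.42           # unknown attenuation of the occluded path
occluded = 1.0 * transmittance # perceived intensity through the cataract
jnd = 0.02                     # just-noticeable intensity difference
step = 0.01                    # brightness decrement per button press

clear = 1.0                    # clear path starts at full brightness
while clear - occluded > jnd + 1e-9:  # tolerance guards float error
    clear -= step              # subject presses "dimmer"

# The settled value overestimates the transmittance by at most one JND.
print(round(clear, 2))         # 0.44 = transmittance + jnd
```

Repeating this per section fills in the attenuation map, whose values the notes later use as the peaks of the per-section Gaussian PSFs.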
Again in 2D, we have alternating patterns on LCD1, still a pinhole on LCD2, and the subject changes the brightness of the clear path to match the intensity of the occluded path on his retina. By executing this procedure for all sections, we build an attenuation map.
Again on 2D, we have alternating patterns on LCD1, still a pinhole on LCD2 and the subject will change the brightness of the clear path in order to math the intensity of the occluded path on his retina. By executing this procedure for all sections, we built an attenuation map.
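The brightness matching can be sketched as a simple staircase procedure: the clear path starts at full intensity and is stepped down on each button press until it looks as dim as the occluded path. The step size, the callables, and the 60% transmission figure below are illustrative assumptions, not values from the talk:

```python
def match_brightness(perceived_occluded, perceived_clear, step=0.02):
    """Staircase: lower the clear path's intensity until the subject
    reports that both flickering patterns look equally bright.
    `perceived_occluded` / `perceived_clear` stand in for the subject's
    perception; in practice the subject presses a 'darker' button."""
    intensity = 1.0
    while intensity > 0 and perceived_clear(intensity) > perceived_occluded():
        intensity -= step  # one button press
    return intensity  # this becomes the attenuation-map entry

# Toy example: a cataract section that transmits about 60% of the light.
att = match_brightness(lambda: 0.6, lambda i: i)
print(round(att, 2))
```

The returned intensity is the attenuation value recorded for that lens section.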
Each value on this map is an estimate of the peak of the Gaussian PSF.
The fourth map is what we call a contrast map. We conduct contrast sensitivity tests per section of the eye: we show a low-contrast letter C, which may be rotated, and the subject answers which way the C is pointing once he notices it.
The C is drawn on LCD2 and a pixel is opened on LCD1, which makes the C pass through the cataract spot. The subject increases the contrast of the C until he can tell which way it is pointing; in this case, he presses the right key.
The contrast map tells us the sigma of the Gaussian PSF.
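Putting the two maps together: each lens section's blur is modeled as a 2D Gaussian whose amplitude comes from the attenuation map and whose width (sigma) comes from the contrast or PSF tests. A minimal sketch of that model (the grid size and the sample peak/sigma values are made up):

```python
import numpy as np

def gaussian_psf(peak, sigma, size=15):
    """2D Gaussian PSF for one lens section: amplitude `peak` from the
    attenuation map, width `sigma` from the contrast or PSF map."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return peak * np.exp(-(x**2 + y**2) / (2.0 * sigma**2))

# Toy values: a section that passes 60% of the light, moderately blurred.
psf = gaussian_psf(peak=0.6, sigma=2.5)
print(psf.shape, psf.max())
```

The maximum of the grid sits at the center and equals the attenuation-map peak exactly.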
The fifth test computes a point spread function per section of the eye.
Just as in the attenuation map, we have alternating points on LCD1, one along an occluded path and the other along a clear one. The pinhole on LCD2 for the clear path is replaced by a Gaussian whose peak is read from the attenuation map. The subject only increases sigma to match the point spread function created by the occluded path. When he finishes, the drawing on LCD2 is the actual point spread function of the sub-aperture.
PSF maps also estimate sigma, but there is a difference between the sigma from a PSF map and the sigma from the contrast map.
If the corresponding attenuation value is high, the subject may not be able to match the point spread function accurately. In that case, the contrast map replaces the PSF matching.
Thus our algorithm follows these steps, one after the other, with a decision point after the brightness test. In the remainder of the talk, Erick will show our prototypes, our validations, and how to compute an individual's cataract-affected view.
Thanks, Vitor. We’ve built several prototypes…
This one is made of a pair of stacked LCDs. We disassembled and rebuilt these high-contrast monochromatic medical displays. Interaction is done through a keyboard.
Here is the first prototype. It is composed of a DLP projector with a diffuser, a pinhole mask, and an eyepiece the subject looks into. Interaction is done through the keys of a laptop.
The simplest possible setup, which you already saw: a clip-on for mobile phones with a pinhole mask on top of the display. This mobile-phone prototype can only generate opacity and attenuation maps. With the stacked-LCD one, we implemented the full interactive procedure.
We validated our method in three steps. First, we validated the technique itself, and how accurate our interactive method can be, in a highly controlled environment. We added diffusers to camera lenses to simulate cataracts and computed, in this example, attenuation and point spread function maps.
Here is another example: a picture of the simulated cataracts, the opacity map, and the measured attenuation map, which was created by taking per-section pictures and summing the pixels of each resulting image. The estimated attenuation map matches the measured values.
In the second step, we validated alignment and gaze control. If you’re a young graduate student without cataracts, how would you run the experiment? I, for instance, don’t have cataracts… Guess what: we scratched contact lenses to simulate advanced, mild, and early cataracts, and we wore them; er, Vitor did, since I wasn’t brave enough. For instance, we could successfully measure the size of a simulated cataract of about 0.5 mm² as 0.45 mm².
In the last step, we tested how elderly subjects interacted with our device. Eighteen people took the test: 5 of them had early cataracts, all confirmed through our method, and 1 individual discovered he had cataracts with our device, which an ophthalmologist confirmed afterwards.
Well, now that we know how to measure approximate point spread functions per sub-aperture, we can build a single point spread function for the eye by summing all of them. However, because of cataracts, the eye’s PSF is depth dependent, so for scenes with objects out of focus there is a shift to apply to each sub-aperture PSF, proportional to the distance from the focal plane.
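The combination step above can be sketched as follows: each sub-aperture PSF is translated by an amount proportional to its pupil offset times the defocus, then all are summed and normalized. The pupil offsets, defocus factor, and uniform toy PSFs below are illustrative assumptions, not the talk's actual calibration:

```python
import numpy as np

def combined_eye_psf(sub_psfs, offsets, defocus):
    """Sum per-sub-aperture PSFs into one eye PSF, shifting each one
    proportionally to its pupil offset and the defocus distance."""
    size = sub_psfs[0].shape[0]
    pad = size  # leave room for the shifts
    canvas = np.zeros((size + 2 * pad, size + 2 * pad))
    for psf, (ox, oy) in zip(sub_psfs, offsets):
        dx = int(round(defocus * ox))  # shift grows with defocus
        dy = int(round(defocus * oy))
        canvas[pad + dy:pad + dy + size, pad + dx:pad + dx + size] += psf
    return canvas / canvas.sum()

# Two 5x5 toy sub-aperture PSFs on opposite sides of the pupil.
subs = [np.ones((5, 5)), np.ones((5, 5))]
psf = combined_eye_psf(subs, [(-1, 0), (1, 0)], defocus=3)
print(psf.shape)
```

With defocus = 0 the two contributions overlap into one compact PSF; as defocus grows they separate, which is what lets the cataract shape appear inside out-of-focus bokeh.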
So, here is a scenario for our rendering.
A photograph taken with a cataract-affected lens,
And a simulation using our estimated PSFs.
In some out-of-focus objects, you may even see the cataract’s shape inside the bokeh.
Our technique has a few limitations. It requires active user participation, so if the user cannot understand our procedures, he may not get reliable results. We need a clear path in the lens in order to estimate the attenuation, contrast, and PSF of occluded paths. Retinal diseases may change the results as well.
In summary, we have introduced a co-design of optics and interactive techniques for measuring cataracts, which works with forward-scattering analysis and holds gaze through foveal projection… We proposed four quantitative maps to replace the subjective evaluation doctors currently use… And we developed the first simulation of an individual’s cataract-affected vision.
Everybody knows this machine, right? They call it a thermometer. For me, this is the most amazing device medicine has ever used. And you know, nobody teaches you how to use a thermometer; we just somehow learned as kids. We started using it and seeing that past the red mark something bad is going on and we need to see a doctor. It’s cheap, simple to use, has no language barriers, no versions for rich and poor; it has spread globally and provides the first screening for a lot of diseases. I would guess this device has saved more lives than anything else, just by telling people when they should see a doctor.
Which is the distance between the eye and this virtual point.
And this system closes my thesis, which describes the cycle with three measurement and correction technologies. It is not hard to imagine that, in the future, people will be able to measure their visual systems anywhere, post the results to some Facebook-like service, and get corrected displays everywhere. So, after four years of doctoral work and 300 thousand reais of investment, I will hopefully receive a new piece of paper saying that I can now be called a doctor.