The document summarizes research on supporting mobility for the blind through technology. It discusses a blind person's daily activities like entertainment, socializing, learning, errands and navigation. It outlines the abilities of blind users and available technologies like RFID tags, computer vision, touchscreens and text-to-speech. Significant research areas are described, such as navigation tools, learning aids and making the web more accessible. The timeline of assistive technologies and future opportunities like accessible programming tools are also mentioned.
1. Supporting Mobility for the Blind
-A literature review
By
Debaleena Chattopadhyay
I624, Fall 2012
2. Putting the user in context
- a day in our user’s life
Entertainment
Watch videos. Play games. Browse the web. Take pictures. [1], [3], [6], [15], [29].
Daily errands
Choose what to wear before leaving home. Take notes in class. Draw. [11], [28], [31].
Socializing
Use social networks on the web/phone. Talk to peers and family. [3], [8].
Navigation
Walk to places like school or the library. Take public transit like buses and trains. Cross roads while walking. Find points of interest in nearby locations. [6], [12], [16], [18], [19], [30].
Learning/Education
Improve orientation and mobility (O&M). Learn subjects like mathematics. Learn how to program software. [9], [20], [21], [22], [23], [24], [25], [26], [27].
3. Ability-based Design
-What can a blind user do?
A blind user often has a heightened sense of hearing. Studies
suggest that blind users understand fast-paced, synthesized
speech significantly better than sighted users. [4]
A blind user can use haptic cues, like vibrations from
gesture interaction or multi-finger touches.
A blind user can make sense of tactile feedback, e.g.,
differentiate between doors, textured walls, and glass.
A blind user can smell objects and infer contexts like a
kitchen, dining room, or bakery.
A blind user can taste.
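The haptic cues mentioned above are typically delivered as timed vibration bursts. A minimal sketch of encoding navigation cues as distinguishable vibration patterns (the specific patterns and cue names are illustrative assumptions, not from any cited system):

```python
# Map navigation cues to vibration patterns: lists of (on_ms, off_ms) bursts.
# The exact patterns are hypothetical; real systems tune them in user studies.
VIBRATION_PATTERNS = {
    "turn_left":  [(100, 50), (100, 0)],             # two short bursts
    "turn_right": [(100, 50), (100, 50), (100, 0)],  # three short bursts
    "stop":       [(500, 0)],                        # one long burst
}

def pattern_duration_ms(cue):
    """Total time the pattern for a cue takes to play, in milliseconds."""
    bursts = VIBRATION_PATTERNS[cue]
    return sum(on + off for on, off in bursts)
```

Distinct burst counts and durations let a user tell cues apart without looking at or listening to the device.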
4. Off-the-shelf Technologies
- What tools do we have?
RFID (radio-frequency identification) tags. [5]
Computer vision algorithms. [3, 8, 9, 12, 16, 17]
Multi-touch screens available through smartphones. [2, 6, 7, 9,
10, 12, 17]
Powerful text-to-speech algorithms using artificial intelligence
(AI). [4, 21]
Commodity computing power (cheap add-on hardware systems
like smartphones). [12, 13, 16]
Crowdsourcing: real-time human collaboration to assist in
technical endeavors. [1, 13]
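As an illustration of the RFID approach surveyed here (e.g., [5]), tags placed in the environment can be resolved to spoken descriptions with a simple lookup before being handed to a text-to-speech engine. A minimal sketch; the tag IDs and descriptions are hypothetical:

```python
# Hypothetical map from RFID tag IDs (as read by a cane- or phone-mounted
# reader) to location descriptions destined for a text-to-speech engine.
TAG_LOCATIONS = {
    "04:A2:19:7F": "Main entrance, door opens outward.",
    "04:A2:19:80": "Elevator lobby, call button on the right.",
}

def announce(tag_id):
    """Return the phrase to speak for a scanned tag, with a safe fallback."""
    return TAG_LOCATIONS.get(tag_id, "Unknown location, please continue.")
```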
5. Significant Research
- What are we up to?
Creating navigational tools.
Help users navigate as autonomously as possible,
outdoors and indoors, and help them find points
of interest and public transit
options. [6], [12], [16], [18], [19], [30].
Creating learning aids.
Help users improve orientation and mobility
(O&M) through games. Create tangible user
interfaces for learning mathematical equations,
or use fingertip vibration to recognize 3D
surfaces like graphs. [9], [20], [21], [22], [23],
[24], [25], [26], [27].
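The fingertip-vibration idea for exploring graphs can be sketched as mapping a touch position to a vibration intensity based on its distance from the curve. The linear mapping below is an assumption for illustration, not the method of the cited systems:

```python
def vibration_intensity(touch_x, touch_y, f, max_dist=0.5):
    """Vibrate stronger the closer the finger is to the curve y = f(x).

    Returns an intensity in [0, 1]: 1.0 on the curve, falling linearly
    to 0.0 at or beyond max_dist from it.
    """
    dist = abs(touch_y - f(touch_x))
    return max(0.0, 1.0 - dist / max_dist)

# Exploring y = x^2: touching exactly on the curve gives full intensity,
# and the buzz fades as the finger drifts away from it.
```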
6. Significant Research
- What are we up to?
Making the web more accessible.
Though an age-old initiative, emerging trends
on the web keep pushing researchers to find
new ways to make it accessible to
visually impaired users. Accessibility guidelines
are regularly evaluated and iteratively improved.
[2], [4], [15].
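One of the most commonly reported guideline violations in studies like [15] is images without text alternatives. A minimal checker using only the Python standard library; a sketch, not a full accessibility audit:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects <img> tags that lack a non-empty alt attribute.

    Note: an intentionally empty alt="" is legitimate for decorative
    images; a real audit would distinguish the two cases.
    """
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<no src>"))

def images_missing_alt(html):
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.missing
```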
Crowdsourcing resources.
A new trend engages sighted users in
providing meaningful information about
the environment that visually impaired
users can readily consume. [13].
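Crowdsourced descriptions, as in StopFinder [13], need some aggregation before they are read out to the user. A minimal majority-vote sketch; the aggregation rule is an assumption for illustration, not StopFinder's actual method:

```python
from collections import Counter

def aggregate_answers(answers, min_agreement=0.5):
    """Return the majority answer if enough contributors agree, else None.

    answers: free-text answers from sighted contributors describing,
    e.g., a bus stop's landmarks.
    """
    if not answers:
        return None
    answer, count = Counter(answers).most_common(1)[0]
    return answer if count / len(answers) > min_agreement else None
```

Returning None when agreement is low lets the system ask for more contributions instead of reading out an unreliable answer.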
7. The Timeline
-How far did we come?
From the 90s to the present:
Assistive technologies, like crossing aids for pedestrians.
Universal design, like web accessibility guidelines for the visually impaired.
Universal usability, like voice-over techniques in the Apple iPod.
Ability-based designs, like haptic-based or auditory-based interfaces.
8. An opportunity to contribute
-What’s in store?
Blind software developers: do we need the Daredevil?
Many visually impaired users are as interested in
developing and programming software applications as
sighted users are. Surprisingly, research efforts toward
making development tools, like IDEs or visual
programming languages, accessible seem to be scant. [21], [22],
[23], [24], [25], [26], [27].
An interesting research opportunity would be to
help blind programmers by understanding their
requirements and building an effective programming
environment.
Making the domain of visual programming
accessible to blind users is also worth pursuing.
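Audio-based programming environments like those in [22] and [26] read code aloud. A toy sketch of turning a line of code into a speakable phrase for a text-to-speech engine; the verbalization rules are illustrative assumptions:

```python
def speak_line(line):
    """Translate symbols in a code line into words for text-to-speech.

    Only a few illustrative substitutions; real tools also handle
    nesting, indentation, and program structure.
    """
    replacements = [
        ("==", " equals "),   # must run before the single "=" rule
        ("=", " gets "),
        ("(", " open paren "),
        (")", " close paren "),
        (":", " colon"),
    ]
    for symbol, word in replacements:
        line = line.replace(symbol, word)
    return " ".join(line.split())  # collapse extra whitespace
```

For example, `speak_line("y = f(x)")` yields "y gets f open paren x close paren".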
9. HCI Implications
-What does this lead to?
• Focus on the abilities of visually impaired
users and increase the adaptivity and
adaptability of software systems.
• Make use of commodity hardware
and software systems to keep
technology affordable for users.
• Iteratively define and refine the
requirements of blind users to increase
their mobility.
10. Research Challenges
-Risky business.
• Making software systems accessible needs
more person-hours and a different design
approach. Can we afford it?
• Visually impaired people are always trying to
fit in. How do we increase the adoption rate of
new accessible technologies without hurting
their self-esteem?
• How reliable are such systems?
• How do we account for subject variables
(individual differences) in the visually
impaired population?
11. References
1. Benoît Encelle, Magali Ollagnier-Beldame, Stéphanie Pouchot, and Yannick Prié. 2011. Annotation-based video enrichment for blind people: A pilot study on the use of earcons and speech synthesis. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 123-130.
2. João Oliveira, Tiago Guerreiro, Hugo Nicolau, Joaquim Jorge, and Daniel Gonçalves. 2011. Blind people and mobile touch-based text-entry: Acknowledging the need for different flavors. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 179-186.
3. Chandrika Jayant, Hanjie Ji, Samuel White, and Jeffrey P. Bigham. 2011. Supporting blind photography. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 203-210.
4. Amanda Stent, Ann Syrdal, and Taniya Mishra. 2011. On the intelligibility of fast synthesized speech for individuals with early-onset blindness. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 211-218.
5. Hugo Fernandes, José Faria, Hugo Paredes, and João Barroso. 2011. An integrated system for blind day-to-day life autonomy. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 225-226.
6. Jaime Sánchez and Matías Espinoza. 2011. Audio haptic videogaming for navigation skills in learners who are blind. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 227-228.
7. Takato Noguchi, Yusuke Fukushima, and Ikuko Eguchi Yairi. 2011. Evaluating information support system for visually impaired people with mobile touch screens and vibration. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 243-244.
8. Douglas Astler, Harrison Chau, Kailin Hsu, Alvin Hua, Andrew Kannan, Lydia Lei, Melissa Nathanson, Esmaeel Paryavi, Michelle Rosen, Hayato Unno, Carol Wang, Khadija Zaidi, Xuemin Zhang, and Cha-Min Tang. 2011. Increased accessibility to nonverbal communication through facial and expression recognition technologies for blind/visually impaired subjects. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 259-260.
9. Muhanad S. Manshad, Enrico Pontelli, and Shakir J. Manshad. 2011. MICOO (multimodal interactive cubes for object orientation): a tangible user interface for the blind and visually impaired. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 261-262.
10. Joy Kim and Jonathan Ricaurte. 2011. TapBeats: accessible and mobile casual gaming. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 285-286.
11. Michele A. Burton. 2011. Fashion for the blind: a study of perspectives. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 315-316.
12. Markus Guentert. 2011. Improving public transit accessibility for blind riders: a train station navigation assistant. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 317-318.
13. Sanjana Prasain. 2011. StopFinder: improving the experience of blind public transit riders with crowdsourcing. In Proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '11). ACM, New York, NY, USA, 323-324.
14. Jonathan Lazar, Jinjuan Feng, Tim Brooks, Genna Melamed, Brian Wentz, Jon Holman, Abiodun Olalere, and Nnanna Ekedebe. 2012.
The SoundsRight CAPTCHA: an improved approach to audio human interaction proofs for blind users. In Proceedings of the 2012 ACM
annual conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 2267-2276.
15. Christopher Power, André Freire, Helen Petrie, and David Swallow. 2012. Guidelines are only half of the story: accessibility problems
encountered by blind users on the web. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems
(CHI '12). ACM, New York, NY, USA, 433-442.
16. Richard Guy and Khai Truong. 2012. CrossingGuard: exploring information content in navigation aids for visually impaired pedestrians.
In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 405-
414.
17. Koji Yatani, Nikola Banovic, and Khai Truong. 2012. SpaceSense: representing geographical information to visually impaired people
using spatial tactile feedback. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI '12).
ACM, New York, NY, USA, 415-424.
18. Navid Fallah, Ilias Apostolopoulos, Kostas Bekris, and Eelke Folmer. 2012. The user as a sensor: navigating users with visual impairments
in indoor spaces using tactile landmarks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems
(CHI '12). ACM, New York, NY, USA, 425-432.
19. Shaun K. Kane, Chandrika Jayant, Jacob O. Wobbrock, and Richard E. Ladner. 2009. Freedom to roam: a study of mobile device adoption
and accessibility for people with visual and motor disabilities. In Proceedings of the 11th international ACM SIGACCESS conference on
Computers and accessibility (Assets '09). ACM, New York, NY, USA, 115-122.
20. Emma Murphy, Enda Bates, and Dónal Fitzpatrick. 2010. Designing auditory cues to enhance spoken mathematics for visually impaired
users. In Proceedings of the 12th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '10). ACM, New
York, NY, USA, 75-82.
21. Robert M. Siegfried. 2006. Visual programming and the blind: the challenge and the opportunity. In Proceedings of the 37th SIGCSE
technical symposium on Computer science education (SIGCSE '06). ACM, New York, NY, USA, 275-278.
22. Jaime Sanchez and Fernando Aguayo. 2005. Blind learners programming through audio. In CHI '05 extended abstracts on Human factors
in computing systems (CHI EA '05). ACM, New York, NY, USA, 1769-1772.
23. Stephen W. Mereu and Rick Kazman. 1996. Audio enhanced 3D interfaces for visually impaired users. In Proceedings of the SIGCHI
conference on Human factors in computing systems: common ground (CHI '96), Michael J. Tauber (Ed.). ACM, New York, NY, USA, 72-
78.
24. Jaime Montemayor. 2001. Physical programming: software you can touch. In CHI '01 extended abstracts on Human factors in
computing systems (CHI EA '01). ACM, New York, NY, USA, 81-82.
25. Waltraud Schweikhardt. 1982. A programming environment for blind APL-programmers. In Proceedings of the international conference
on APL (APL '82). ACM, New York, NY, USA, 325-331.
26. Kenneth G. Franqueiro and Robert M. Siegfried. 2006. Designing a scripting language to help the blind program visually. In Proceedings
of the 8th international ACM SIGACCESS conference on Computers and accessibility (Assets '06). ACM, New York, NY, USA, 241-242.
27. Robert F. Cohen, Arthur Meacham, and Joelle Skaff. 2006. Teaching graphs to visually impaired students using an active auditory
interface. In Proceedings of the 37th SIGCSE technical symposium on Computer science education (SIGCSE '06). ACM, New York, NY,
USA, 279-282.
28. David S. Hayden, Liqing Zhou, Michael J. Astrauskas, and John A. Black, Jr. 2010. Note-taker 2.0: the next step toward enabling students who are legally blind to take notes in class. In Proceedings of the 12th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '10). ACM, New York, NY, USA, 131-138.
29. Masatomo Kobayashi, Trisha O'Connell, Bryan Gould, Hironobu Takagi, and Chieko Asakawa. 2010. Are synthesized video descriptions
acceptable?. In Proceedings of the 12th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '10). ACM,
New York, NY, USA, 163-170.
30. Jaime Sánchez and Natalia de la Torre. 2010. Autonomous navigation through the city for the blind. In Proceedings of the 12th
international ACM SIGACCESS conference on Computers and accessibility (ASSETS '10). ACM, New York, NY, USA, 195-202.
31. Patrick C. Headley and Dianne T. V. Pawluk. 2010. A multimodal, computer-based drawing system for persons who are blind and visually
impaired. In Proceedings of the 12th international ACM SIGACCESS conference on Computers and accessibility (ASSETS '10). ACM, New
York, NY, USA, 229-230.