The Design of Human-Powered Access Technology
1. The Design of Human-Powered Access Technology
Jeffrey P. Bigham
University of Rochester
Richard E. Ladner
University of Washington
Yevgen Borodin
Stony Brook University
2. Introduction History Examples Dimensions Application
Human-Powered Access Technology – technology that facilitates and, ideally, improves interactions between disabled people and human assistants
University of Rochester Human-Computer Interaction Jeffrey P. Bigham
3. Human Power in History
• People Rely on Assistance from Others
– to overcome small accessibility problems
– to prevent small challenges from becoming bigger ones
4. Managing Expectations
• Structures Around Assistance
– sign language interpreters
– volunteer training / accountability
5. Remote Services
• What has changed is Connectivity
6. Remote Assistance
Video Relay Services
Real-time Captioning
7. Crowdsourcing / Human Computation
For an overview see:
[1] Quinn and Bederson. “Human computation: a survey and taxonomy of a growing field.” CHI 2011.
8. Bigham et al. “Nearly Real-Time Answers to Visual Questions.” UIST 2010.
9. Examples
ESP Game
VizWiz
Social Accessibility Project
Solona
IQ Engines / oMoby
10. Examples
MAP Lifeline
ESP Game
Tactile Graphics Project
VizWiz
GoBraille
Social Accessibility Project
Remote Real-Time Reading Service
Respeaking
Solona
ASL-STEM Forum
Remote Real-Time Captioning
IQ Engines / oMoby
Bookshare
Scribe4Me
Video Relay Services
11. Design Dimensions
Initiative: who initiates help?
• End User
• Worker
• Organization
12. Design Dimensions
Latency: how long does it take to get help?
• Interactive
• Short Delay
• Undetermined
13. Design Dimensions
Confidentiality: user expectations
• Trusted Worker Pools
• User Feedback
• No Guarantees
14. Design Dimensions
Broader Context: who else is affected?
• User
• Worker
• Community
15. Social Accessibility vs. Solona
-- two systems that have sighted people describe web images for blind people --
Similarities: functionality, target disability (blind)
Differences: experts vs. crowd, latency
16. VizWiz vs. Scribe4Me
-- different target disabilities but similar goal --
Similarities: latency, user initiative
Differences: target disability, accuracy, source of workers
17. Areas for Future Research
18. Areas for Future Research
• Latency
19. Areas for Future Research
• Latency
• Broader Context
20. Areas for Future Research
• Latency
• Broader Context
• Other Disabilities
21. Conclusion
• Human-Powered Access Technology
• Identified 15 Examples
• Isolated 13 Design Dimensions
• Useful for Evaluating, Comparing, Motivating
22. TACCESS Special Issue on Crowdsourcing Accessibility
Submissions due 12/21/2011
http://www.gccis.rit.edu/taccess
23. crowdability.org
Hi everyone, I’m Jeff Bigham from the University of Rochester. Today, I’m going to be talking about the design of what we call “Human-Powered Access Technology.” This paper is joint work with Richard Ladner from the University of Washington, and Yevgen Borodin from Stony Brook University.
First, we define human-powered access technology as “technology that facilitates and, ideally, improves interactions between disabled people and human assistants.” The goal of our paper was to motivate the idea of human-powered access technology, and to put forth a taxonomy that would help those of us working on technologies in this space to more easily (1) talk about our work, (2) evaluate and compare new work with what has come before, and (3) suggest directions within this area that might be good targets for future research.
It turns out that people with disabilities have relied for centuries on the assistance of people in their communities to overcome small accessibility problems experienced in everyday life. A blind person may ask a fellow traveler for the number of an approaching bus, or a person with a motor impairment may ask for assistance with small physical tasks. Individually, these are small challenges, but the assistance provided helps to prevent these small problems from becoming bigger ones.
Initially, this help was provided informally – for instance, members of a religious congregation may provide informal sign language interpreting. But, far from being passive recipients of this help, people with disabilities have formed organizing structures around the assistance they receive in order to ensure that their expectations are met. For instance, sign language interpreters agree to strict confidentiality agreements that prevent them from injecting their own comments and from repeating conversations. Volunteers are often trained, and held accountable for the assistance they provide. Almost all professional organizations providing services to people with disabilities have a code of ethics that requires confidentiality, respect for customers, and a responsibility of assistants to only take on jobs for which they are qualified.
What has changed is connectivity – no longer must an assistant be co-located to provide support.
Leading the way in remote human assistance were people with disabilities – in the form of technologies such as video relay services, which connect sign language speakers to hearing people on the phone, and remote real-time captioning, in which captionists transcribe live events remotely.
To us, this sort of technology presaged the recent popularity of crowdsourcing and human computation in computing. The main idea behind these areas is that there are still some things that people can do better than computers, and that crowds can do some things even better than individuals. I won’t talk extensively about these areas, but for a survey see the really great paper by Quinn and Bederson from this year’s CHI conference.
As an example, consider VizWiz, an application that my group developed. VizWiz is an iPhone application that lets blind people take a picture, speak a question they’d like to know about it, and receive answers from multiple people (aka, the crowd) quickly (generally in less than a minute). As we were creating this application, we made a number of design decisions. We knew we wanted to make it fast – a lot of the time, when blind people need to know something about their environment, they need to know quickly – think of reading a menu in a restaurant. To make it work quickly and cheaply, we made some tradeoffs – we solicited answers from several non-skilled workers (we first used primarily Mechanical Turk, and now we also employ volunteers). Some answers they provide may not be correct (or ideal), but the user is likely to be able to make sense of them. These workers are not professionals, and so we cannot guarantee that they will treat the photo confidentially, so each VizWiz user sees a disclaimer warning them of this to help align their expectations with the reality of what the tool does. To give further context, VizWiz is up on the App Store and has now been used to answer over 20,000 questions.
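The redundant-answer pattern just described – fan the same question out to several non-expert workers and return the first few responses so the user can cross-check them – can be sketched as below. This is an illustrative sketch, not the actual VizWiz implementation; the names (`gather_answers`, the worker callables) are hypothetical.

```python
import queue
import threading

def gather_answers(question, workers, wanted=3, timeout=30.0):
    """Ask several workers the same question in parallel and return
    up to `wanted` of the first answers to arrive.  Any single answer
    may be wrong, but a handful of independent answers lets the user
    make sense of the result."""
    answers = queue.Queue()
    for worker in workers:
        # Each worker runs concurrently; slow workers don't block fast ones.
        threading.Thread(target=lambda w=worker: answers.put(w(question)),
                         daemon=True).start()
    collected = []
    for _ in range(min(wanted, len(workers))):
        try:
            collected.append(answers.get(timeout=timeout))
        except queue.Empty:
            break  # return partial results rather than nothing
    return collected
```

With three instant workers the call returns all three answers; with real crowd workers the timeout bounds how long the user waits, trading completeness for latency.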
As we were designing and implementing VizWiz, we did so aware of what had come before – and many tools seemed to do something similar to VizWiz. For instance, one of the goals of the ESP Game was to label web images for blind people. But, it wasn’t quite the same – users were unable to take the initiative to directly submit an image for labeling, and the crowd provided a label, not an answer to a question. Social Accessibility gives blind users the initiative to, among other things, request a description for web images. In some ways, this is similar to VizWiz, but latency is not guaranteed, and so you might have to wait quite a while to have your image described. Solona is much closer – with Solona, you can take a screenshot of your desktop, and an expert drawn from a small pool answers it. Solona advertised a latency of 30 minutes, but because the service relied on a small pool, oftentimes no answerer was available… and they have since shut down. IQ Engines is a start-up company that provides an image description API that uses both humans and computer vision. For their crowd they employ a small call center of workers whose job is to describe images extremely quickly.
As you can see, there is a lot of work – our claim is that it’s somewhat difficult to really talk about and compare these related technologies because we don’t have a good framework in which to do so. Especially when we broaden to other technologies: Scribe4Me was a research project that provided auditory information for deaf people with about 5 minutes latency. Video relay services, as we’ve seen, engage a remote interpreter in real-time, and real-time captionists transcribe audio. The Remote Real-Time Reading Service was a concept out of the Smith-Kettlewell Eye Institute in which blind people could fax pictures for description. MAP Lifeline is a technology for people with cognitive impairments that allows caregivers to inject prompts in real-time based on sensory information. And then there’s a whole host of more asynchronous tools that use people – community-driven resources like GoBraille, the ASL-STEM Forum, and Bookshare, in which disabled people themselves share information to help make the world more accessible for themselves and, at the same time, for each other. And tools like Respeaking and the Tactile Graphics Project, in which technology helps human workers create more accessible information.
Starting with a large number of examples of human-powered access technologies, we isolated the 13 design dimensions you see here that we believe can be useful in talking about technology in this space. The dimensions and values provided here are not necessarily meant to be comprehensive, but rather to serve as a starting point. I don’t have time to go through them all, but I’ll go through a few. For instance, Initiative refers to who instigates assistance. The end user often decides when to solicit help from human supporters; examples include services like remote real-time captioning and relay services, and crowdsourcing systems like Social Accessibility and VizWiz. Systems like Bookshare and GoBraille allow the human supporters to decide when and what information they will provide – for instance, which books will be scanned as part of Bookshare, or which landmarks they will label in GoBraille. Groups of people will sometimes decide to solicit the help of human workers or to guide their efforts; for instance, workers are often recruited to contribute specific signs in the ASL-STEM Forum.
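The taxonomy amounts to positioning each technology along a set of dimensions and then comparing positions. A minimal sketch of that idea follows, using only three of the dimensions (Initiative, Latency, Confidentiality) with rough illustrative labels; the class, function, and value strings are mine, not the paper's formal notation.

```python
from dataclasses import dataclass, fields

@dataclass
class AccessTechnology:
    """One access technology positioned along a subset of the design
    dimensions (three of the thirteen, for illustration only)."""
    name: str
    initiative: str       # "end user", "worker", or "organization"
    latency: str          # "interactive", "short delay", or "undetermined"
    confidentiality: str  # "trusted pool", "user feedback", or "no guarantees"

def compare(a: AccessTechnology, b: AccessTechnology):
    """Split the dimensions into similarities and differences,
    mirroring the side-by-side slide comparisons."""
    similarities, differences = [], []
    for f in fields(a):
        if f.name == "name":
            continue  # the name is an identifier, not a dimension
        same = getattr(a, f.name) == getattr(b, f.name)
        (similarities if same else differences).append(f.name)
    return similarities, differences
```

For example, comparing a user-initiated interactive tool against a user-initiated short-delay one reports "initiative" as a similarity and "latency" as a difference, which is exactly how the VizWiz vs. Scribe4Me slide reads.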
As another example, Latency refers to how long it takes to get assistance. Some tools are designed to be interactive, and others asynchronous. While it may seem that low latency would always be best, I should point out here that there is no single best answer for each dimension, and that different tools make different trade-offs. For some tools latency might be a primary target, as it is with VizWiz. In others it may be viewed as less important – for instance, with Bookshare, it matters more that a book is made available in an accessible form eventually; it doesn’t necessarily have to happen right away.
Confidentiality. Primarily we see this one as an example of setting appropriate user expectations – relay services manage this by using trusted pools who have agreed to confidentiality agreements, applications like VizWiz instead tell users what to expect, and other tools simply make no guarantees.
Most of the focus in human-powered access technology is rightly on the end user. For instance, technology may go to great lengths to help protect their identity or to ensure an appropriate user experience. Often, the effects on others in the broader context in which the technology is used are ignored. For instance, in the case of VizWiz, bystanders may unwittingly find themselves in the lens of a blind user. Or workers may be asked to answer a question with consequences – for instance, VizWiz workers may be asked to decipher a medicine bottle.
Our framework allows us to more easily compare different human-powered access technologies. For instance, both Social Accessibility and Solona describe web images for blind people – their functionality is similar and their target disability is the same, but they differ on where the workers come from and the expected latency of getting back an answer.
VizWiz and Scribe4Me are similar in their expected latency and the fact that users initiate the service, but they differ in the targeted disability, the method for ensuring accuracy (VizWiz recruits multiple answers, Scribe4Me uses experts), and the source of workers.
Plotting our example technologies along with the values for these dimensions also reveals possibilities for future work.
There is a big opportunity to make human-powered access technology faster. Many of the technologies in this space with low latency require workers to be pre-recruited – for instance, remote real-time captioning.
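The pre-recruitment idea can be sketched as a standby buffer: workers are recruited ahead of demand so that a request arriving later is answered from the buffer instead of waiting on recruitment. This is a hypothetical sketch of the pattern, not any of the systems named in the talk; how workers are actually kept on standby (e.g. paid waiting tasks) is outside its scope.

```python
from collections import deque

class StandbyPool:
    """Keep a buffer of pre-recruited workers so a new request can be
    served immediately rather than waiting for slow, on-demand
    recruitment (the usual price of low-latency crowd assistance)."""

    def __init__(self, target_size: int):
        self.target_size = target_size
        self.idle = deque()  # workers waiting for a task

    def recruit(self, worker) -> None:
        """Add a worker (here, any callable) to the standby buffer."""
        self.idle.append(worker)

    def shortfall(self) -> int:
        """How many more workers to recruit to stay at the target size."""
        return max(0, self.target_size - len(self.idle))

    def ask(self, question):
        """Serve a request from the buffer.  Returns None when the
        buffer is empty, meaning the caller must fall back to slow
        on-demand recruiting."""
        if not self.idle:
            return None
        worker = self.idle.popleft()
        return worker(question)
```

A background process would periodically check `shortfall()` and recruit to refill the buffer; the latency the user sees then depends on the buffer rarely being empty, not on recruitment speed.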
There is also an opportunity to better consider the broader context in which a technology will be used – especially the workers and the broader community.
And, finally, most of our examples were drawn from sensory disabilities (with the exception of MAP Lifeline). We believe that there is an opportunity to expand human-powered access technology to people with other types of disabilities.
In conclusion, I have introduced the idea of human-powered access technology – technology that facilitates and, ideally, improves interactions between disabled people and human assistants. I’ve described 15 example technologies, and isolated 13 design dimensions from these examples that may help us to discuss technology in this space. My view is that human power has the potential to greatly improve access for a large number of people with disabilities today. Our hope in writing this paper, and my hope in presenting this talk, is that articulating these dimensions may improve research in this area by helping us to better evaluate and compare new technologies, and motivate research into technologies insufficiently covered by existing work.
If you’re doing work in this area, as I think many of you are, I encourage you to submit to the special issue of TACCESS that I’m guest editing on “Crowdsourcing Accessibility.” Despite the title, I’m interested in any technology that broadly meets the definition of human-powered access technology that I’ve outlined in this talk.
I thank you for your time, and am happy to answer any questions now or when I see you around the conference over the next few days.