https://www.insight-centre.org/content/effects-expertise-assessment-quality-task-routing-human-computation
Presented at SoHuman'13
Abstract:
Human computation systems are characterized by the use of human workers to solve computationally difficult problems. Expertise profiling involves the assessment and representation of a worker's expertise, in order to route human computation tasks to appropriate workers. This paper studies the relationship between the assessment workload on workers and the quality of task routing. Three expertise assessment approaches were compared with the help of a user study, using two different groups of human workers. The first approach asks workers to self-assess their knowledge. The second approach measures workers' knowledge through their performance on tasks with known responses. We propose a third approach based on a combination of self-assessment and task-assessment. The results suggest that the self-assessment approach requires the minimum assessment workload from workers during expertise profiling, while the task-assessment approach achieves the highest response rate and accuracy. The proposed approach requires less assessment workload, while achieving response rate and accuracy similar to the task-assessment approach.
Effects of Expertise Assessment on the Quality of Task Routing in Human Computation
1. Copyright 2010 Digital Enterprise Research Institute. All rights reserved.
Digital Enterprise Research Institute www.deri.ie
EFFECTS OF EXPERTISE ASSESSMENT ON THE
QUALITY OF TASK ROUTING IN HUMAN
COMPUTATION
Umair ul Hassan, Sean O’Riain, Edward Curry
Digital Enterprise Research Institute
National University of Ireland, Galway
International Workshop on Social Media for Crowdsourcing
and Human Computation - SoHuman’13, Paris, France
2. Agenda
Paper Overview
Motivation
Human Computation
Task Routing
Challenges of Push Routing
Experiment
Use case
Methodology
Results
Summary
3. Paper Overview
Motivation
People have differing levels of expertise
Effective task routing requires expertise information
Expertise profiling involves assessment
Problem
How to assess workers' expertise for generating profiles?
How to reduce costs of expertise assessment while attaining higher
quality of task routing?
Contribution
Comparison of self-assessment and task-assessment approaches
A hybrid approach, based on a combination of self-assessment and
task-assessment, for cost reduction
4. Human Computation
Solve computationally hard problems with the help of humans
Algorithms control human workers
Computation is carried out by humans
* Barowy et al, “AutoMan: a platform for integrating human-based and digital computation,” OOPSLA ’12
[Diagram: the developer defines the problem, the algorithm coordinates the computation, and the workers compute the answers]
5. Human Computation
* Edith Law and Luis von Ahn, Human Computation - Core Research Questions and State of the Art
[Diagram: input-to-output pipeline with three stages: Task Routing (before computation; our focus), Task Design (during computation), and Output Aggregation (after computation)]
6. Task Routing
Pull Routing
System provides an interface to support workers
Workers actively seek tasks and assign to themselves
[Diagram: workers search and browse tasks through an interface, select tasks, and return results to the algorithm; example: www.mturk.com]
7. Task Routing
Push Routing
System has complete control over assignment of tasks
– Based on criteria such as expertise, cost, and latency
Workers passively receive tasks
[Diagram: the algorithm assigns tasks to workers through a task interface, and workers return results; example: www.mobileworks.com]
8. Challenges of Push Routing
Workers have different domain knowledge and expertise
1. How to define the expertise requirements of a task? And how
to model the expertise profile of a worker?
2. How to profile the expertise of human workers, via suitable
expertise assessment methods with minimum cost?
3. How to leverage the expertise profiles of workers for effectively
routing tasks, resulting in quality responses?
9. Two-phase process: steps of push routing using worker profiles
[Diagram:
Profiling phase (cost of assessment for profiling): (1) Concepts → (2) Self Assessment → Knowledge Profile; (3) Test Tasks → (4) Task Assessment → Performance Profile
Routing phase (quality of profiles for routing): (5) New Tasks → Routing Model → (6) Routed Tasks → Workers]
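A minimal sketch of the two-phase process above, with hypothetical names and data structures: the profiling phase combines self-ratings and test-task results into a per-concept score, and the routing phase assigns a new task to the worker whose profile best covers the task's concepts. The simple averaging and summing are illustrative assumptions, not the paper's exact routing model.

```python
# Hypothetical sketch of the two-phase push-routing process.
# Phase 1 (profiling): build a concept -> score profile per worker.
# Phase 2 (routing): route each new task to the best-matching worker.

def build_profile(self_ratings, test_results):
    """Combine self-ratings (0..1) with test-task accuracy per concept.

    self_ratings: {concept: rating}, test_results: {concept: [1, 0, ...]}
    Averaging the two signals is an assumption for illustration.
    """
    profile = {}
    for concept, rating in self_ratings.items():
        answered = test_results.get(concept, [])
        accuracy = sum(answered) / len(answered) if answered else rating
        profile[concept] = (rating + accuracy) / 2
    return profile

def route_task(task_concepts, profiles):
    """Pick the worker with the highest summed score over the task's concepts."""
    def score(profile):
        return sum(profile.get(c, 0.0) for c in task_concepts)
    return max(profiles, key=lambda w: score(profiles[w]))
```

For instance, a worker who self-rated a concept 0.6 and answered one of two test tasks for it correctly would get a profile score of (0.6 + 0.5) / 2 = 0.55 for that concept.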
10. Use case: Verification Tasks
Data quality in DBpedia
Verification of new facts for DBpedia
[Screenshot: a verification task, with the concept related to the task highlighted]
11. Use case: Verification Tasks
[Diagram: routing a verification task (SKOS = Simple Knowledge Organization System)

Source Data:
  Entity: A Beautiful Mind
  SKOS Concepts: American_biographical_films, Films_set_in_the_1950s
  Property & Values: dbpedia-owl:Work/runtime = 135.0; dbpedia-owl:director = dbpedia:Ron_Howard; dbpedia-owl:producer = dbpedia:Ron_Howard, dbpedia:Brian_Graze; dbpedia-owl:starring = dbpedia:Ed_Harris, dbpedia:Russell_Crowe

Task Model (from the Data Quality Algorithm):
  Update: Missing Value — dbpedia-owl:writer = dbpedia:Akiva_Goldsman
  Task: Confirm Missing Value — Did Akiva Goldsman write the movie "A Beautiful Mind"?
  SKOS Concepts: American_biographical_films, Films_set_in_the_1950s

Workers & Expertise Model:
  Worker Expertise, SKOS Concepts: Films_set_in_the_1950s (Good), Films_about_psychiatry (Poor), American_drama_films (Fair)

Task Routing (Routing Model):
  Match: American_biographical_films ↔ American_drama_films (Fair)]
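The matching step in the use case can be illustrated with a small sketch: the task's SKOS concepts are intersected with the concepts in the worker's expertise profile, and the worker's rating on shared concepts drives routing. The numeric level mapping and score are illustrative assumptions.

```python
# Illustrative concept matching between a task and a worker profile.
# The Poor/Fair/Good scale and its numeric mapping are assumptions.

LEVELS = {"Poor": 1, "Fair": 2, "Good": 3}

task_concepts = {"American_biographical_films", "Films_set_in_the_1950s"}
worker_expertise = {
    "Films_set_in_the_1950s": "Good",
    "Films_about_psychiatry": "Poor",
    "American_drama_films": "Fair",
}

# Concepts shared by the task and the worker's profile.
shared = task_concepts & worker_expertise.keys()
# Sum the worker's rated levels over the shared concepts.
match_score = sum(LEVELS[worker_expertise[c]] for c in shared)
```

Here the only shared concept is Films_set_in_the_1950s, rated Good, so the match score is 3.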
12. Use case: Verification Tasks
Datasets based on film-related entities from Hollywood and Bollywood
Distribution of tasks against number of concepts per task
Dataset Characteristics
                         Movies Dataset   Actors Dataset
Total entities           724              14
Total concepts           42               14
Total tasks              230              120
Avg. tasks per concept   9                8.6
Avg. concepts per task   1.64             1

[Chart: number of tasks (0–160) against number of concepts per task (1–5) for both datasets]
13. Profiling
Example of concept-level scores:
Concept            Scores
c1: Buddy films    0.6   0.2   0.2
c2: Gang films     0.6   0.2   0.6
c3: Horror films   0.8   0.4   0.4
c4: Comedy films   0.8   0.6   0.6
17. Knowledge workers
Volunteers with varying knowledge about films
Hollywood vs. Bollywood
Survey before and after participation
                                       Movies Dataset   Actors Dataset
No. of knowledge workers (volunteers)  11               26
No. of knowledge concepts              42               14
No. of test tasks (profiling phase)    100              56
No. of new tasks (routing phase)       130              64

[Chart: average level (0–10) of Interest, Knowledge, Expertise, and Confidence before and after participation; only one of the before/after differences was significant]
18. Evaluation
Metrics
Quality (for routed tasks during routing phase)
– Response Rate: percentage of routed tasks with agree or disagree
responses
– Accuracy: percentage of routed tasks with correct responses
Cost (for assessments during profiling phase)
– Workload: number of decisions for self-rating of conceptual
knowledge or responding to test tasks
Hypothesis
The quality of CA strategy approaches the quality of TA
strategy during routing phase, while requiring
comparatively less assessment cost during profiling phase.
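The two quality metrics above can be sketched directly; the data structures are hypothetical (a routed task's response is "agree", "disagree", or None for no response), but the percentage definitions follow the slide.

```python
# Sketch of the quality metrics defined above, on hypothetical data:
# a response is "agree", "disagree", or None (no response received).

def response_rate(responses):
    """Percentage of routed tasks that received an agree/disagree response."""
    answered = [r for r in responses if r is not None]
    return 100.0 * len(answered) / len(responses)

def accuracy(responses, ground_truth):
    """Percentage of routed tasks whose response matches the ground truth."""
    correct = sum(1 for r, t in zip(responses, ground_truth) if r == t)
    return 100.0 * correct / len(responses)

responses = ["agree", None, "disagree", "agree"]
truth = ["agree", "agree", "agree", "agree"]
# response_rate: 3 of 4 tasks answered -> 75.0
# accuracy: 2 of 4 responses correct -> 50.0
```

Note that accuracy here is computed over all routed tasks, so an unanswered task counts against accuracy as well as response rate; measuring accuracy only over answered tasks would be an alternative design choice.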
19. Results: Costs
Combined assessment
Filtering assessment tasks based on highly self-rated concepts
reduces assessment cost
[Chart: % workload compared to TA (0–160%) for the RND, SA, TA, CA, CA (P+), CA (F+), CA (G+), and CA (Ex) strategies, on the Movies and Actors datasets]
Movies Dataset Actors Dataset
For example, filter test tasks to concepts with a self-rating of Good or higher
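The combined-assessment filtering described above can be sketched as follows: test tasks are generated only for concepts the worker self-rated at or above a threshold, which reduces the number of assessment decisions. The rating scale, task records, and function names are illustrative assumptions.

```python
# Sketch of combined-assessment (CA) filtering: keep only test tasks
# whose concept was self-rated at or above a threshold.
# The rating scale and data shapes are assumptions for illustration.

RATING_ORDER = ["Poor", "Fair", "Good", "Excellent"]

def filter_test_tasks(test_tasks, self_ratings, threshold="Good"):
    """Drop test tasks for concepts self-rated below the threshold."""
    min_rank = RATING_ORDER.index(threshold)
    return [t for t in test_tasks
            if RATING_ORDER.index(self_ratings.get(t["concept"], "Poor")) >= min_rank]

tasks = [{"id": 1, "concept": "Gang_films"},
         {"id": 2, "concept": "Horror_films"}]
ratings = {"Gang_films": "Fair", "Horror_films": "Good"}
# With threshold "Good", only task 2 survives the filter, so the worker
# answers fewer test tasks during profiling.
```

Raising the threshold trades assessment cost against profile coverage, which is the trade-off the workload chart above compares across the CA variants.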
20. Results: Quality of Routing
Likelihood of response and accuracy of response
remain near maximum during the routing stage
[Charts: % Accuracy and % Response Rate (0–100%) for the RND, SA, TA, CA, CA (P+), CA (F+), CA (G+), and CA (Ex) strategies, on the Movies and Actors datasets]
21. Summary
Conclusion
Effective push routing depends on worker expertise
Concepts are effective for expertise profiling
Combining task-assessment with self-assessment is effective in
reducing assessment cost
Future Directions
Task routing under constraints
– Cost, Latency, Expertise, Utility
Complex workflows in data quality management
22. Further Reading
U. Ul Hassan, S. O’Riain, and E. Curry, “Effects of Expertise Assessment on the
Quality of Task Routing in Human Computation,” in 2nd International Workshop on
Social Media for Crowdsourcing and Human Computation, 2013.
http://www.deri.ie/about/team/member/umair_ul_hassan/
2nd International Workshop on Social Media for Crowdsourcing and
Human Computation
Paris, 1 May 2013
Editor's Notes
Other sources of expertise information, such as social networks and publications, are part of future work.
Manually created ground truth for the tasks (test and new)