1. The study examined how providing users with inspectability and control over recommendations in a social recommender system impacts user experience.
2. The results showed that giving users inspectability through a full graph interface increased understandability and perceived control compared to a list interface. It also improved users' recognition of known recommendations.
3. Allowing users to control recommendations at the item level led to higher novelty through fewer known recommendations, while control at the friend level increased accuracy.
4. Overall, the findings suggest that social recommenders should provide users with inspectability and control through a simple interface to improve the user experience.
14. Critiquing-based recommenders
[Screenshots from Chen & Pu's survey of critiquing-based recommenders:
- Fig. 4: system showing a new set of alternatives after the user's critiques
- Fig. 5: the Dynamic Critiquing interface with system-suggested compound critiques for users to select (McCarthy et al. 2005c)
- Fig. 9: hybrid critiquing system (version 1): system-suggested compound critiques combined with a user-initiated critiquing facility (Chen and Pu 2007a)
- Fig. 10: hybrid critiquing system (version 2): the preference-based organization interface (Pref-ORG) combined with user-initiated MAUT-based compound critiques (Chen and Pu 2007b, 2010)]
MORE CONTROL & INSPECTABILITY?
MORE COMPLEXITY!
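The hybrid systems above rank candidates with multi-attribute utility theory (MAUT): each item gets a weighted additive utility over its attributes, and a compound critique shifts the weights before re-ranking. A minimal sketch of that scoring, with made-up attribute names, weights, and data (not Chen and Pu's actual implementation):

```python
# Hypothetical MAUT scoring sketch; attribute names, weights, and items
# are illustrative, not taken from an actual critiquing system.

def maut_score(item, weights):
    """Weighted additive utility over normalized attribute values in [0, 1]."""
    return sum(w * item[attr] for attr, w in weights.items())

def rank(items, weights):
    """Order candidates by utility, best first."""
    return sorted(items, key=lambda it: maut_score(it, weights), reverse=True)

# A critique like "cheaper, even if battery life suffers" would raise the
# price weight before re-ranking.
laptops = [
    {"name": "A", "price_util": 0.9, "battery_util": 0.2},
    {"name": "B", "price_util": 0.4, "battery_util": 0.9},
]
weights = {"price_util": 0.7, "battery_util": 0.3}
best = rank(laptops, weights)[0]["name"]  # "A": utility 0.69 vs. 0.55
```

Because the utility is a plain weighted sum, the system can explain a suggestion by pointing at the attributes that contributed most, which is the inspectability angle the slide hints at.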
15. THE POWER OF VISUALIZATION
SIMPLE CONTROL
SIMPLE INSPECTABILITY
18. SYSTEM
Modified TasteWeights system
Facebook friends as recommenders
Music recommendations (based on “likes”)
Split up control + inspectability
19. PARTICIPANTS
267 participants
Mechanical Turk + Craigslist
At least 5 music “likes” and an overlap with at least 5 friends
At least 10 recommendations; lists limited to 10 to avoid cognitive overload
Demographics similar to the Facebook user population
20. PROCEDURE
STEP 1: Log in to Facebook
System collects your music “likes”
System collects your friends’ music likes
21. PROCEDURE
STEP 2: Control
3 conditions, between subjects
NOTHING vs. WEIGH ITEMS vs. WEIGH FRIENDS
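Weighing items or friends changes how the recommendation scores are assembled. A toy sketch of TasteWeights-style social scoring under user-set weights (all names and the scoring rule are hypothetical simplifications, not the actual TasteWeights code):

```python
# Toy sketch of social recommendation scoring with user-set weights.
# Names and the scoring rule are hypothetical, not TasteWeights itself.

def recommend(my_likes, friend_likes, friend_weights=None, item_weights=None):
    """Score items liked by friends but not yet by me.

    A friend's influence is her user-set weight times the (item-weighted)
    overlap between her likes and mine; every unknown item she likes
    accumulates that influence.
    """
    friend_weights = friend_weights or {}
    item_weights = item_weights or {}
    scores = {}
    for friend, likes in friend_likes.items():
        overlap = sum(item_weights.get(i, 1.0) for i in likes if i in my_likes)
        influence = friend_weights.get(friend, 1.0) * overlap
        for item in likes:
            if item not in my_likes:
                scores[item] = scores.get(item, 0.0) + influence
    return sorted(scores, key=scores.get, reverse=True)

my_likes = {"Radiohead", "Muse"}
friend_likes = {"ann": ["Radiohead", "Muse", "Blur"], "bob": ["Muse", "Oasis"]}
top = recommend(my_likes, friend_likes)[0]  # "Blur": two shared likes beat one
```

Raising a friend's weight (the WEIGH FRIENDS condition) or an item's weight (WEIGH ITEMS) directly reshuffles this ranking, which is what gives the participant a sense of control.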
22. PROCEDURE
STEP 3: Inspection
2 conditions, between subjects
LIST ONLY vs. FULL GRAPH
24. PROCEDURE
STEP 4: Evaluation
For each recommendation:
Do you know this band/artist?
How do you rate this band/artist?
(link to LastFM page for reference)
25. PROCEDURE
STEP 5: Questionnaires
- understandability
- perceived control
- perceived recommendation quality
- system satisfaction
- music expertise
- trusting propensity
- familiarity with recommender systems
27. SUBJECTIVE: UNDERSTANDABILITY
3 items (* = reverse-coded):
- The recommendation process is clear to me
- I understand how TasteWeights came up with the recommendations
- I am unsure how the recommendations were generated*
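Starred items like the one above are reverse-coded before the items are averaged into a scale score. A small sketch of that scoring step (the 7-point range and item keys are assumptions for illustration):

```python
# Sketch of questionnaire scale scoring with reverse-coded (*) items.
# The 7-point range and the item keys are assumptions, not study details.

def scale_score(responses, reverse_items, lo=1, hi=7):
    """Average responses after flipping reverse-coded items (r -> lo + hi - r)."""
    flipped = [lo + hi - r if item in reverse_items else r
               for item, r in responses.items()]
    return sum(flipped) / len(flipped)

answers = {"process_clear": 6, "understand_how": 5, "unsure_how": 2}
score = scale_score(answers, reverse_items={"unsure_how"})  # (6 + 5 + 6) / 3
```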
28. SUBJECTIVE: UNDERSTANDABILITY
[Chart: understandability by inspectability (full graph vs. list only) and by control condition]
29. SUBJECTIVE: PERCEIVED CONTROL
[Chart: full graph vs. list only]
4 items (* = reverse-coded):
- I had limited control over the way TasteWeights made recommendations*
- TasteWeights restricted me in my choice of music*
- Compared to how I normally get recommendations, TasteWeights was very limited*
- I would like to have more control over the recommendations*
30. SUBJECTIVE: PERCEIVED RECOMMENDATION QUALITY
[Chart: full graph vs. list only]
6 items (* = reverse-coded):
- I liked the artists/bands recommended by the TasteWeights system
- The recommended artists/bands fitted my preference
- The recommended artists/bands were well chosen
- The recommended artists/bands were relevant
- TasteWeights recommended too many bad artists/bands*
- I didn't like any of the recommended artists/bands*
31. SUBJECTIVE: SATISFACTION WITH THE SYSTEM
[Chart: full graph vs. list only]
7 items (* = reverse-coded):
- I would recommend TasteWeights to others
- TasteWeights is useless*
- TasteWeights makes me more aware of my choice options
- I can make better music choices with TasteWeights
- I can find better music using TasteWeights
- Using TasteWeights is a pleasant experience
- TasteWeights has no real benefit for me*
32. BEHAVIOR: INSPECTION TIME
[Chart: full graph vs. list only]
Time (min:sec) taken in the inspection phase (step 3):
- Including LastFM visits
- Not including the control phase (step 2)
- Not including the evaluation phase (step 4)
33. BEHAVIOR: KNOWN RECOMMENDATIONS
[Chart: full graph vs. list only]
Number of artists the participant claims she already knows
Why higher in the full graph condition?
- The link to friends reminds the user how she knows the artist
- Social conformance
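The count of known artists doubles as an (inverse) novelty measure: the fewer recommendations a participant already knows, the more novel the list. A trivial sketch of that measure (names are made up for the example):

```python
# Illustrative novelty measure: fraction of the recommendation list the
# user did not already know. Names are invented for the example.

def novelty(recommendations, known_artists):
    unknown = [r for r in recommendations if r not in known_artists]
    return len(unknown) / len(recommendations)

frac = novelty(["a", "b", "c", "d"], known_artists={"a"})  # 3 of 4 unknown
```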
34. BEHAVIOR: AVERAGE RATING
[Chart: full graph vs. list only]
Average rating of the 10 recommendations:
- Lower when weighing items than when weighing friends
- Slightly higher in the full graph condition
36. STRUCTURAL MODEL
[Path model: Objective System Aspects (OSA: control, item/friend weighting vs. no control, χ²(2) = 10.70**, item: 0.428 (0.207)*, friend: 0.668 (0.206)**; inspectability, full graph vs. list only, 0.459 (0.148)**) → Subjective System Aspects (SSA: understandability, R² = .153; perceived control, R² = .311; understandability → perceived control: 0.377 (0.074)***) → User Experience (EXP: perceived recommendation quality, R² = .512, predicted by the SSA constructs at 0.955 (0.148)*** and 0.770 (0.094)***; satisfaction with the system, R² = .696, predicted by quality at 0.410 (0.092)***). All shown paths are positive.]
37. STRUCTURAL MODEL
[The same path model, extended with Interaction (INT) measures: inspection time in minutes (R² = .092), number of known recommendations (R² = .044), and average rating of the recommendations (R² = .508). The control conditions reduce the number of known recommendations (χ²(2) = 10.81**; item: −0.181 (0.097), friend: −0.389 (0.125)**), inspectability increases it (0.695 (0.304)*) as well as inspection time (0.288 (0.091)**), and knowing more recommendations predicts a higher average rating (0.067 (0.022)**). Further paths link the subjective constructs and behaviors to the experience measures (0.231 (0.114)*, 0.249 (0.049)***, 0.148 (0.051)**, −0.152 (0.063)*, 0.323 (0.031)***).]
38. STRUCTURAL MODEL
[The full model, further adding Personal Characteristics (PC): familiarity with recommenders, music expertise, and trusting propensity, with paths into the subjective and experience constructs (0.166 (0.077)*, −0.332 (0.088)***, 0.375 (0.094)***, 0.205 (0.100)*, 0.257 (0.124)*). All other paths as in the previous slide.]
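In a path model like the one above, the effect along a chain of arrows is the product of its coefficients, and a total effect sums over all chains from cause to outcome. A generic sketch of that arithmetic (the coefficient values below are invented placeholders, not the model's estimates):

```python
# Generic indirect/total effect computation for a path model. The chain
# coefficients used in the example are placeholders, not study estimates.

def indirect_effect(path_coefficients):
    """Effect along one chain of paths: the product of its coefficients."""
    prod = 1.0
    for c in path_coefficients:
        prod *= c
    return prod

def total_effect(chains):
    """Total effect: sum of the indirect effects over all chains."""
    return sum(indirect_effect(chain) for chain in chains)

# e.g. a mediated chain OSA -> understandability -> perceived control ->
# quality, plus a direct OSA -> quality path (numbers invented):
effect = total_effect([[0.5, 0.4, 0.8], [0.2]])  # ≈ 0.36
```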
40. CONCLUSION
Social recommenders
- Give users inspectability and control
- Can be done with a simple user interface!
Inspectability:
- Graph increases understandability and perceived control
- Improves recognition of known recommendations
Control:
- Item control: higher novelty (fewer known recs)
- Friend control: higher accuracy