1. Social Book Search: A Combination of Personalized Recommendations and Retrieval
Author: Justin van Wees
Supervisor: Marijn Koolen
Second assessor: Frank Nack
Master Thesis, Information Science
Human Centered Multimedia
August 23, 2012
2. Outline
1. Background
2. Research questions
3. Data collection
4. Experiments and results
5. Conclusions
6. Discussion and future work
7. Questions
3. Current situation
• Traditional information retrieval (IR) models:
• developed for use on small collections
• contain only officially published documents, annotated by professionals
• Many modern web (2.0) applications still use these traditional models for search:
• millions of documents
• a combination of user-generated content (UGC) and professional metadata
4. Current situation
• A user uses an IR system to find the documents that are topically relevant to her information need
• Queries can lead to thousands of relevant documents
• Evaluating a large number of results is expensive for the user
• There are other notions of relevance, e.g. how well-written, popular, recent, or fun a document is
• Combination of professional and user-generated metadata
5. Social Book Search Track
• Evaluates the relative value of controlled book metadata versus social metadata (Koolen et al., 2012)
• Amazon.com and LibraryThing (LT) corpus
• ~2.8 million book records, with both social and professional metadata
• Book search requests from LT discussion forums serve as topics; suggestions by other users serve as relevance judgements
7. Recommender Systems
• Recommender systems (RSs) suggest items of interest to individuals or groups of users (Resnick and Varian, 1997)
• Assume that an individual’s taste or interest in a particular item can be explained by features recorded by the RS (demographics, previous interactions, etc.)
• Different strategies: collaborative filtering (CF), content-, community-, knowledge-based, and hybrid (Burke, 2007)
• Differ from traditional retrieval in terms of query formulation, source of relevance feedback, and personalization (Furner, 2002)
8. Research Questions
Does a combination of techniques from the field of IR with those from RSs improve retrieval performance when searching for works in a large-scale online collaborative media catalogue?
• What data are we able to collect?
• Can we automatically make accurate predictions of a user’s preference for an unknown book?
• How do we combine results from the IR system with those from the RS?
• Social Book Search scenario and data
9. Crawling LibraryThing
• Performed four different crawls of user profiles and personal catalogues (sketched below the table)
• For each crawl, also crawled the links to other profiles
• Compared the crawls to determine their representativeness for the entire LT user-base
• All crawls combined cover approximately 6% of the LT user-base
Crawl                   Seed list   Profiles   Unique works   Profile overlap
Forum users             1,104       60,131     4,354,387      –
Random – 211 works      1,306       8,040      2,537,065      7,048
Random – 1,000 works    5,577       18,381     3,580,296      14,262
Random – 10,000 works   35,671      64,379     5,122,848      37,300
Total                   –           89,693     5,299,399      –
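The crawl procedure above can be sketched as a breadth-first traversal over profiles. This is a minimal sketch only: fetch_profile(name) and its linked_profiles attribute are hypothetical placeholders, not names from the slides.

```python
from collections import deque

def crawl(seed_names, fetch_profile, max_profiles=100_000):
    """Breadth-first crawl over LibraryThing profiles, following the
    links (friends, groups, interesting libraries) of each visited profile.

    fetch_profile and .linked_profiles are hypothetical placeholders.
    """
    seen, queue, profiles = set(seed_names), deque(seed_names), []
    while queue and len(profiles) < max_profiles:
        profile = fetch_profile(queue.popleft())
        profiles.append(profile)
        for name in profile.linked_profiles:
            if name not in seen:
                seen.add(name)
                queue.append(name)
    return profiles
```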
10. Crawling LibraryThing
Links from each crawled profile to other profiles:

Crawl / link type         Min.   Max.    Median   Mean    Std. dev.
Forum users
  Friends                 0      172     3.0      8.47    16.31
  Groups                  0      10      9.0      6.79    3.74
  Interesting Libraries   0      510     2.0      11.19   26.46
Random – 211 works
  Friends                 0      79      0.0      2.61    7.46
  Groups                  0      10      0.0      1.70    3.05
  Interesting Libraries   0      394     0.0      3.30    17.80
Random – 1,000 works
  Friends                 0      84      0.0      2.18    6.07
  Groups                  0      10      0.0      1.64    3.02
  Interesting Libraries   0      574     0.0      2.74    14.41
Random – 10,000 works
  Friends                 0      2,858   0.0      1.73    17.49
  Groups                  0      10      0.0      1.24    2.61
  Interesting Libraries   0      855     0.0      1.69    10.40
Total
  Friends                 0      2,858   1.0      2.14    12.77
  Groups                  0      10      0.0      1.18    2.44
  Interesting Libraries   0      855     0.0      1.27    8.00
11. Crawling LibraryThing
Books per personal catalogue:

Crawl / rating status   Min.   Max.     Median   Mean       Std. dev.   Sum
Forum users
  Unrated               0      28,402   84.00    397.22     929.70      23,885,231
  Rated                 0      12,190   3.00     78.80      238.53      4,738,018
  Total                 0      28,402   148.00   476.02     980.88      28,623,249
Random – 211 works
  Unrated               0      28,402   458.00   1,112.81   1,835.81    8,946,997
  Rated                 0      12,190   10.00    182.77     472.08      1,469,531
  Total                 0      28,402   657.00   1,295.58   1,908.65    10,416,528
Random – 1,000 works
  Unrated               0      28,402   331.00   864.32     1,480.98    15,887,025
  Rated                 0      12,190   3.00     130.20     369.06      2,393,233
  Total                 0      28,402   475.00   994.52     1,539.15    18,280,258
Random – 10,000 works
  Unrated               0      28,402   163.00   486.63     955.86      31,328,971
  Rated                 0      12,190   1.00     74.04      237.01      4,766,750
  Total                 0      28,402   201.00   560.68     1,000.50    36,095,721
Total
  Unrated               0      28,402   102.00   378.18     834.94      33,920,353
  Rated                 0      12,190   1.00     62.85      206.40      5,637,097
  Total                 0      28,402   156.00   441.03     876.76      39,557,450
12. Generating Recommendations
• Collaborative filtering approach
• Unary and rated transactions
• Memory- and model-based recommenders
• Randomly split the transactions (80% train / 20% test) for performance evaluation, as sketched below
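A minimal sketch of this evaluation split, assuming transactions are (user, item, rating) tuples; the function name is illustrative:

```python
import random

def split_transactions(transactions, train_fraction=0.8, seed=42):
    """Randomly split (user, item, rating) transactions into train and test sets."""
    rng = random.Random(seed)           # fixed seed for reproducible experiments
    shuffled = transactions[:]          # copy so the input list stays untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]
```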
13. Generating Recommendations
• Neighbourhood approach (Desrosiers and Karypis, 2011), sketched below:
• directly use user-item ratings to predict ratings for ‘unseen’ items
• find the n most similar neighbours (Pearson correlation)
• use the weighted average of the ratings given by the user’s neighbours
• let neighbours ‘vote’ on unary transactions
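A minimal sketch of this predictor (the formula is restated on slide 25), assuming ratings are stored as nested dicts mapping user -> {item: rating}; all names are illustrative:

```python
from math import sqrt

def pearson(ra, rb):
    """Pearson correlation between two users' rating dicts over co-rated items."""
    common = set(ra) & set(rb)
    if len(common) < 2:
        return 0.0
    ma = sum(ra[i] for i in common) / len(common)
    mb = sum(rb[i] for i in common) / len(common)
    num = sum((ra[i] - ma) * (rb[i] - mb) for i in common)
    da = sqrt(sum((ra[i] - ma) ** 2 for i in common))
    db = sqrt(sum((rb[i] - mb) ** 2 for i in common))
    return num / (da * db) if da and db else 0.0

def predict_rating(ratings, u, i, n=20):
    """Weighted average rating of the n most similar neighbours that rated item i."""
    neighbours = [(pearson(ratings[u], ratings[v]), ratings[v][i])
                  for v in ratings if v != u and i in ratings[v]]
    neighbours.sort(key=lambda x: abs(x[0]), reverse=True)
    top = neighbours[:n]
    denom = sum(abs(w) for w, _ in top)
    return sum(w * r for w, r in top) / denom if denom else None
```

For unary transactions the same neighbourhood can simply ‘vote’: count how many of the n neighbours own the work instead of averaging ratings.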
14. Generating Recommendations
• Singular Value Decomposition (SVD) (Schafer et al., 2007), sketched below:
• reduces domain complexity by mapping the item space to k dimensions
• the remaining dimensions represent latent topics: preference classes of users, categorical classes of items
• currently considered ‘state of the art’
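A minimal sketch of the SVD idea on a toy matrix, assuming a dense user-item matrix with 0 for missing ratings (production recommenders use sparse, regularized factorization instead):

```python
import numpy as np

def svd_predict(R, k=2):
    """Project the rating matrix onto k latent dimensions and reconstruct it.

    R: users x items matrix, with missing ratings encoded as 0.
    Returns a dense matrix of estimated preferences for every cell.
    """
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    # Keep only the k strongest latent topics (preference/category classes).
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [0.0, 1.0, 5.0, 4.0]])
print(svd_predict(R, k=2))  # estimates also fill the 0 (unknown) cells
```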
16. Retrieving Works
• Setup used for INEX 2012; top-performing run
• Index consists of user-generated content
• Stopwords removed
• Krovetz stemming
• Topic titles as queries
• Language model (sketched below)
• Pseudo-relevance feedback: 50 terms from the top 10 results
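The slides do not spell out the exact model; a minimal sketch of query-likelihood scoring with Jelinek-Mercer smoothing (stopping, stemming, and feedback omitted), with all names illustrative:

```python
from math import log

def lm_score(query_terms, doc_terms, collection_tf, collection_len, lam=0.5):
    """Log query-likelihood of a document under Jelinek-Mercer smoothing.

    query_terms/doc_terms: token lists; collection_tf: term -> corpus count.
    """
    dl = len(doc_terms)
    score = 0.0
    for t in query_terms:
        p_doc = doc_terms.count(t) / dl if dl else 0.0
        p_col = collection_tf.get(t, 0) / collection_len
        p = lam * p_doc + (1 - lam) * p_col
        if p > 0:        # terms unseen in the whole corpus contribute nothing
            score += log(p)
    return score
```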
17. Combining IR and RS
• Retrieval system: ranked list with a probability score between 0 and 1 per work
• Recommendations: estimated preference of the user for a work, between 0.5 and 5.0 (rated) or 0 or 1 (unary)
• Normalise the ratings
• ‘Boost’ works with an estimated preference, using CombSUM (Fox and Shaw, 1994)
• Use the average rating when no prediction can be made
• Introduce a weight (λ) between the systems, as sketched below
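A minimal sketch of this weighted combination (the formula is restated on slide 24), assuming retrieval scores in [0, 1] and rated predictions in [0.5, 5.0]; names are illustrative:

```python
def combine(retrieval_scores, predictions, avg_ratings, lam=0.5):
    """Rank works by S(d) = (1 - lam) * P_Ret(d|q) + lam * P_CF(d)."""
    combined = {}
    for work, p_ret in retrieval_scores.items():
        # Fall back to the work's average rating when the recommender
        # cannot make a prediction, then normalise 0.5-5.0 to [0, 1].
        rating = predictions.get(work, avg_ratings.get(work, 0.0))
        p_cf = (rating - 0.5) / 4.5 if rating else 0.0
        combined[work] = (1 - lam) * p_ret + lam * p_cf
    return sorted(combined.items(), key=lambda x: x[1], reverse=True)
```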
19. Conclusions
• Collected a representative sample of user profiles
• Collaborative filtering was the obvious choice
• SVD is best at estimating rated preference
• Poor performance on unary transactions
• Successfully combined retrieval with personalized recommendations
• Rated transactions are the most useful
• Personal preference is relevance evidence that can greatly improve retrieval performance in SBS
20. Discussion and Future Work
• Popularity as relevance evidence
• Value of λ depending on IR score distribution
• Other (mixtures of) RS setups
• Scaling, cold-start problems
• Trust and transparency of the system
22. References
• R. Burke. Hybrid web recommender systems. In The Adaptive Web, pages 377–408. Springer-Verlag, 2007.
• C. Desrosiers and G. Karypis. A comprehensive survey of neighborhood-based recommendation methods. In Recommender Systems Handbook, pages 107–144. Springer, 2011.
• E. Fox and J. Shaw. Combination of multiple searches. NIST Special Publication 500-215, pages 243–252, 1994.
• J. Furner. On recommending. Journal of the American Society for Information Science and Technology, 53(9):747–763, 2002.
• M. Koolen, G. Kazai, J. Kamps, A. Doucet, and M. Landoni. Overview of the INEX 2011 books and social search track. In S. Geva, J. Kamps, and R. Schenkel, editors, Focused Retrieval of Content and Structure: 10th International Workshop of the Initiative for the Evaluation of XML Retrieval (INEX 2011), volume 7424 of LNCS. Springer, 2012.
• P. Resnick and H. Varian. Recommender systems. Communications of the ACM, 40(3):56–58, 1997.
• J. B. Schafer, D. Frankowski, J. Herlocker, and S. Sen. Collaborative filtering recommender systems. In The Adaptive Web, pages 291–324. Springer, 2007.
23. Number of Books in Catalogue
[Figure: distributions of the number of books per catalogue; (a) unrated works, (b) rated works]
24. Document scoring
$$S(d) = (1 - \lambda)\, P_{Ret}(d \mid q) + \lambda\, P_{CF}(d)$$
• $P_{Ret}(d \mid q)$: the work's score obtained through the IR system
• $P_{CF}(d)$: the estimated rating of the current user for the work, obtained through the RS
• $\lambda$: the weight between the two systems
25. Estimating preference (rated)
$$\hat{r}_{ui} = \frac{\sum_{v \in N_i(u)} w_{uv}\, r_{vi}}{\sum_{v \in N_i(u)} |w_{uv}|}$$
• $\hat{r}_{ui}$: estimated preference of user u for item i
• $w_{uv}$: preference similarity between users u and v
• $N_i(u)$: the k nearest neighbours of u that rated item i
(Desrosiers and Karypis, 2011)