Google search results and Netflix recommendations have more in common than you might think at first glance. Let’s take a deep dive into recent research and see what makes search results or recommendations good, and why. We’ll talk about how users interact with search and recommendations, and about the key metrics we can measure and improve. Make no mistake: this concerns you even if you never plan to build a search or recommendation engine.
12. Accurately interpreting clickthrough data as implicit feedback
Significant on two-tailed tests at a 95% confidence level!
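The annotation above refers to testing whether observed differences in click behavior are statistically significant. As an illustration of what such a two-tailed test can look like (the function and the example counts are my own, not taken from the paper), here is a two-proportion z-test on click-through rates:

```python
import math

def two_tailed_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided two-proportion z-test: do rankings A and B differ in CTR?"""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    # pooled click probability under the null hypothesis of no difference
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# hypothetical experiment: 120 clicks in 1000 impressions vs. 80 in 1000
z, p = two_tailed_z_test(120, 1000, 80, 1000)
significant = p < 0.05  # significant at the 95% confidence level
```

A result is "significant at a 95% confidence level" exactly when the two-sided p-value falls below 0.05.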
Thorsten Joachims, Laura Granka, Bing Pan, Helene Hembrooke, and Geri Gay. Accurately interpreting clickthrough data as implicit feedback. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’05, pages 154–161, New York, NY, USA, 2005. ACM.
15. Evaluation Metrics
● Mean Average Precision @ N
○ mean of the precision values at each relevant result within the top N
● Mean Reciprocal Rank
○ 1 / rank of the first relevant result, averaged over queries
● Normalized Discounted Cumulative Gain
○ graded relevance gains discounted by rank, normalized by the ideal ordering
● Expected Reciprocal Rank
○ expected reciprocal rank at which the user finds a satisfying result
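Two of the metrics above can be sketched in a few lines; these are minimal reference implementations using my own function names, not code from any of the cited papers:

```python
import math

def mean_reciprocal_rank(ranked_lists, relevant_sets):
    """MRR: average over queries of 1 / rank of the first relevant result."""
    total = 0.0
    for results, relevant in zip(ranked_lists, relevant_sets):
        for rank, item in enumerate(results, start=1):
            if item in relevant:
                total += 1.0 / rank
                break  # only the first relevant result counts
    return total / len(ranked_lists)

def dcg(gains):
    """Discounted cumulative gain: each gain divided by log2(rank + 1)."""
    return sum(g / math.log2(rank + 1) for rank, g in enumerate(gains, start=1))

def ndcg_at_k(gains, k):
    """NDCG@k: DCG of the ranking, normalized by the ideal (sorted) DCG."""
    ideal = dcg(sorted(gains, reverse=True)[:k])
    return dcg(gains[:k]) / ideal if ideal > 0 else 0.0
```

For example, `mean_reciprocal_rank([["a", "b", "c"], ["x", "y"]], [{"b"}, {"x"}])` averages ranks 2 and 1 into (1/2 + 1/1) / 2 = 0.75, and a ranking that already matches the ideal ordering gets an NDCG of exactly 1.0.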
16. Optimizing search engines using clickthrough data
Thorsten Joachims. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’02, pages 133–142, New York, NY, USA, 2002. ACM.
18. Query chains: learning to rank from implicit feedback
Filip Radlinski and Thorsten Joachims. Query chains: learning to rank from implicit feedback. In Proceedings of the Eleventh ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, KDD ’05, pages 239–248, New York, NY, USA, 2005. ACM.
19. On Caption Bias in Interleaving Experiments
Katja Hofmann, Fritz Behr, and Filip Radlinski. On caption bias in interleaving experiments. In Proceedings of the ACM Conference on Information and Knowledge Management, CIKM ’12, 2012. ACM.
21. Fighting Search Engine Amnesia: Reranking Repeated Results
In this paper, we observed that the same results are often shown to users multiple times within a search session. We showed that several effects are at play which can be leveraged to improve information retrieval performance: previously skipped results are much less likely to be clicked, while previously clicked results may or may not be re-clicked depending on other factors of the session.
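The skip/click effects described above lend themselves to a simple reranking heuristic. The sketch below is purely illustrative: the function name, the penalty and boost magnitudes, and the linear score adjustment are my own assumptions, not the model from the paper.

```python
def rerank_repeated(scored_results, skipped, clicked,
                    skip_penalty=0.5, click_boost=0.1):
    """Re-sort (score, doc) pairs using the session's skip/click history.
    Penalty and boost magnitudes are illustrative, not from the paper."""
    def adjusted(pair):
        score, doc = pair
        if doc in skipped:
            return score - skip_penalty  # skipped before: much less likely to be clicked
        if doc in clicked:
            return score + click_boost   # clicked before: re-click depends on context
        return score
    return sorted(scored_results, key=adjusted, reverse=True)

# a previously skipped top result drops below a fresh one
reranked = rerank_repeated([(1.0, "a"), (0.9, "b")], skipped={"a"}, clicked=set())
```

A real system would learn these adjustments from session logs rather than hard-code them, since the paper finds the re-click effect depends on other session factors.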
Milad Shokouhi, Ryen W. White, Paul Bennett, and Filip Radlinski. Fighting search engine amnesia: reranking repeated results. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’13, pages 273–282, New York, NY, USA, 2013. ACM.