
- PyMCがあれば，ベイズ推定でもう泣いたりなんかしない by Toshihiro Kamishima 31676 views
- Pythonによる機械学習実験の管理 by Toshihiro Kamishima 14511 views
- 科学技術計算関連Pythonパッケージの概要 by Toshihiro Kamishima 8990 views
- Personalized Pricing Recommender Sy... by Toshihiro Kamishima 1979 views
- OpenOpt の線形計画で圧縮センシング by Toshihiro Kamishima 19296 views
- RecSys2016勉強会 資料 by Toshihiro Kamishima 262 views

1,479 views

Published in

IEEE International Workshop on Privacy Aspects of Data Mining (PADM), in conjunction with ICDM2016

Article @ Official Site: http://doi.ieeecomputersociety.org/10.1109/ICDMW.2016.0127

Workshop Homepage: http://pddm16.eurecat.org/

Abstract:

This paper studies a new approach to enhance recommendation independence. Such approaches are useful in ensuring adherence to laws and regulations, fair treatment of content providers, and exclusion of unwanted information. For example, recommendations that match an employer with a job applicant should not be based on socially sensitive information, such as gender or race, from the perspective of social fairness. An algorithm that could exclude the influence of such sensitive information would be useful in this case. We previously gave a formal definition of recommendation independence and proposed a method adopting a regularizer that imposes such an independence constraint. As no other options than this regularization approach have been put forward, we here propose a new model-based approach, which is based on a generative model that satisfies the constraint of recommendation independence. We apply this approach to a latent class model and empirically show that the model-based approach can enhance recommendation independence. Recommendation algorithms based on generative models, such as topic models, are important, because they have a flexible functionality that enables them to incorporate a wide variety of information types. Our new model-based approach will broaden the applications of independence-enhanced recommendation by integrating the functionality of generative models.


- 1. Model-based Approaches for Independence-Enhanced Recommendation. Toshihiro Kamishima*, Shotaro Akaho*, Hideki Asoh*, and Issei Sato**. http://www.kamishima.net/ — *National Institute of Advanced Industrial Science and Technology (AIST), Japan; **The University of Tokyo, Japan. The 1st Int'l Workshop on Privacy and Discrimination in Data Mining, in conjunction with ICDM2016, Barcelona, Spain, Dec. 12, 2016.
- 2. Independence-Enhanced Recommender Systems. Providing independent information is useful in recommendation: adherence to laws and regulations, fair treatment of content providers, and exclusion of unwanted information. Absolutely independent recommendation is intrinsically infeasible, because a recommendation always depends on the preferences of a specific user. An independence-enhanced recommender system therefore makes recommendations so as to enhance independence with respect to a specific sensitive feature.
- 3. Contributions. Our previous work: we advocated the concept of independence-enhanced recommendation and developed a regularization approach to enhance recommendation independence, applied to a probabilistic matrix factorization (PMF) model. This talk: we propose another approach to enhance recommendation independence, a model-based approach, in which a sensitive feature is embedded into a graphical model for recommendation while maintaining independence between the recommendation and the sensitive information.
- 4. Outline: a concept of recommendation independence; applications of recommendation independence; two approaches to enhance recommendation independence — a regularization approach (a regularizer to constrain independence is introduced into a probabilistic matrix factorization model) and a model-based approach (a sensitive feature is embedded into a latent class model, while maintaining independence between a recommendation result and a sensitive value); experiments; related work; conclusions.
- 6. Sensitive Feature. As in standard recommendation, we use the random variables X (a user), Y (an item), and R (a rating); for recommendation independence we adopt an additional variable S, the sensitive feature. S is specified by a user depending on his or her purpose, and a recommendation result must be independent of it. A sensitive value is determined by a user and an item, and the sensitive feature is restricted to a binary type. Ex.: sensitive feature = a movie's popularity or a user's gender.
- 7. Recommendation Independence: the statistical independence between a recommendation result, R, and a sensitive feature, S — R ⫫ S, i.e., Pr[R | S] = Pr[R]. No information about the sensitive feature influences the result; the status of the sensitive feature is explicitly excluded from the inference of the recommendation result. Ratings of items are predicted under this constraint of recommendation independence [Kamishima 12, Kamishima 13].
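The definition Pr[R | S] = Pr[R] can be checked empirically by comparing the distributions of predicted ratings conditioned on the two sensitive values. A minimal sketch (the helper `independence_gap` and the synthetic data are illustrative, not from the paper):

```python
import numpy as np

def independence_gap(ratings, s, bins=10):
    """Rough empirical check of R ⫫ S for a binary sensitive feature:
    total-variation-style gap between the histograms of ratings for
    S = 0 and S = 1. Zero means the two conditionals coincide."""
    lo, hi = ratings.min(), ratings.max()
    h0, _ = np.histogram(ratings[s == 0], bins=bins, range=(lo, hi), density=True)
    h1, _ = np.histogram(ratings[s == 1], bins=bins, range=(lo, hi), density=True)
    return 0.5 * np.abs(h0 - h1).sum() * (hi - lo) / bins

rng = np.random.default_rng(0)
s = rng.integers(0, 2, 1000)
r_dep = 3.0 + 0.8 * s + rng.normal(0.0, 0.5, 1000)  # ratings depend on S
r_ind = 3.0 + rng.normal(0.0, 0.5, 1000)            # ratings independent of S
```

On this toy data the gap for `r_dep` is clearly larger than for `r_ind`, mirroring how an independence-enhanced recommender shrinks the gap between the two conditional distributions.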
- 8. Effect of Independence Enhancement (figure: per-bin histograms of predicted ratings, from dislike to like, for older and newer movies). Under standard recommendation the two distributions diverge widely; under independence-enhanced recommendation they become close. The bias that older movies were rated higher could be successfully canceled by enhancing independence.
- 10. Application: Adherence to Laws and Regulations. A recommendation service must be managed while adhering to laws and regulations, and socially discriminative treatment must be avoided. Example: suspicious placement of keyword-matching advertisements — advertisements indicating arrest records were more frequently displayed for names that are more popular among individuals of African descent than among those of European descent [Sweeney 13]. Sensitive feature = users' demographic information; legally or socially sensitive information can then be excluded from the inference process of recommendation.
- 11. Application: Fair Treatment of Content Providers. System managers should treat their content providers fairly. Fair treatment in search engines: the US FTC has investigated Google to determine whether the search engine ranks its own services higher than those of competitors [Bloomberg]. Fair treatment in recommendation: marketplace sites should not abuse their position to recommend their own items more frequently than tenants' items. Sensitive feature = the content provider of a candidate item; information about who provides a candidate item can be ignored, and providers are treated fairly.
- 12. Application: Exclusion of Unwanted Information. Filter bubble: to fit Pariser's preferences, conservative people were eliminated from his friend-recommendation list on Facebook [TED talk by Eli Pariser, http://www.filterbubble.com/]. Sensitive feature = the political conviction of a friend candidate; information about whether a candidate is conservative or progressive can be ignored in the recommendation process, so information unwanted by a user is excluded from recommendation.
- 14. Formalizing the Task. Predicting ratings: the task of predicting the rating value a user would give to an item. Random variables: user X, item Y, rating R, sensitive feature S. Standard recommendation: dataset D = {(x_i, y_i, r_i)}, prediction function r̂(x, y). Independence-enhanced recommendation: dataset D = {(x_i, y_i, r_i, s_i)}, prediction function r̂(x, y, s).
- 15. Probabilistic Matrix Factorization. The PMF model predicts the preference rating of an item y rated by a user x; it performs well and is widely used [Salakhutdinov 08, Koren 08]. Prediction function: r̂(x, y) = μ + b_x + c_y + p_x q_y^T, where μ is a global bias, b_x a user-dependent bias, c_y an item-dependent bias, and p_x q_y^T the cross effect of users and items. For a given training dataset, the model parameters are learned by minimizing the squared loss function with an L2 regularizer: Σ_D (r_i − r̂(x_i, y_i))² + λ‖Θ‖², where λ is the regularization parameter.
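The PMF prediction function and objective above can be sketched directly in NumPy (a toy illustration of the formulas only; a real implementation would learn μ, b, c, P, Q by gradient-based minimization):

```python
import numpy as np

def pmf_predict(mu, b, c, P, Q, x, y):
    """r̂(x, y) = μ + b_x + c_y + p_x · q_y
    mu: global bias; b, c: user/item bias vectors;
    P, Q: user/item latent factor matrices (rows p_x, q_y)."""
    return mu + b[x] + c[y] + P[x] @ Q[y]

def pmf_objective(data, mu, b, c, P, Q, lam):
    """Σ_D (r_i − r̂(x_i, y_i))² + λ‖Θ‖² over data = [(x, y, r), ...]."""
    loss = sum((r - pmf_predict(mu, b, c, P, Q, x, y)) ** 2
               for x, y, r in data)
    reg = lam * (np.sum(b**2) + np.sum(c**2) + np.sum(P**2) + np.sum(Q**2))
    return loss + reg
```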
- 16. Independence-Enhanced PMF. The prediction function is selected according to the sensitive value: r̂(x, y, s) = μ^(s) + b_x^(s) + c_y^(s) + p_x^(s) q_y^(s)T. Objective function: Σ_D (r_i − r̂(x_i, y_i, s_i))² − η · indep(R, S) + λ‖Θ‖². The independence term indep(R, S) is a regularizer constraining independence — a larger value indicates that ratings and sensitive values are more independent — and here it matches the means of predicted ratings for the two sensitive values. The independence parameter η controls the balance between independence and accuracy.
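As a sketch, the mean-matching independence term and the combined objective look like this (illustrative code; the slide says the term matches the means of predicted ratings for the two sensitive values, so here it is the negated squared gap between those means):

```python
import numpy as np

def indep_term(pred, s):
    """Mean-matching independence term: negated squared gap between the
    mean predicted ratings for S = 0 and S = 1 (larger = more independent)."""
    return -(pred[s == 0].mean() - pred[s == 1].mean()) ** 2

def ie_pmf_objective(pred, r, s, eta, reg):
    """Σ (r_i − r̂_i)² − η · indep(R, S) + λ‖Θ‖²
    pred: predicted ratings r̂_i; r: true ratings; s: sensitive values;
    reg: precomputed L2 penalty λ‖Θ‖²."""
    return np.sum((r - pred) ** 2) - eta * indep_term(pred, s) + reg
```

Increasing η drives the two conditional means together at some cost in squared error, which is exactly the accuracy/independence trade-off the slide describes.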
- 17. Prediction: Latent Class Model [Hofmann 99]. The latent class model is a probabilistic model for collaborative filtering: a basic topic model, pLSA, extended to deal with ratings r given by users x to items y through a latent topic variable z. A rating can be predicted by the expectation of ratings: r̂(x, y) = E_{Pr[r | x, y]}[level(r)] = Σ_r Pr[r | x, y] level(r), where level(r) is the r-th rating value. The model parameters can be learned by an EM algorithm.
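The expectation r̂(x, y) = Σ_r Pr[r | x, y] level(r) is a simple dot product once the posterior over rating levels is available (the probability vector below is a hypothetical Pr[r | x, y], not taken from the paper):

```python
import numpy as np

def expected_rating(p_r_given_xy, levels):
    """r̂(x, y) = Σ_r Pr[r | x, y] · level(r)."""
    return float(np.dot(p_r_given_xy, levels))

levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # the rating values level(r)
p = np.array([0.05, 0.10, 0.20, 0.40, 0.25])      # hypothetical Pr[r | x, y]
# expected_rating(p, levels) → 3.70
```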
- 18. Independence-Enhanced LCM: independence enhancement by a model-based approach. A sensitive variable is embedded into the original LCM so that the rating and the sensitive variable are mutually independent, while the user, the item, and the rating are conditionally independent given Z. Two variants: a Type 1 model and a Type 2 model. A Type 2 model can more strictly enhance recommendation independence, because in it Z, in addition to X and Y, depends on the sensitive variable.
- 20. Experimental Conditions. Datasets: ML1M-Year (movie preference data; the sensitive feature is whether a movie's release year is old or new), ML1M-Gender (movie preference data; the sensitive feature is the user's gender), Flixster (movie preference data; the sensitive feature is whether a movie is popular or not), and Sushi (not presented here). Evaluation measures: MAE (mean absolute error), a precision measure, smaller is better; KS (the statistic of the two-sample Kolmogorov-Smirnov test), an independence measure, smaller is better — the area between the two empirical cumulative distributions of predicted ratings for S = 0 and S = 1.
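Both measures are easy to sketch. Note that the slide defines its KS measure via the area between the two empirical CDFs (the classical two-sample KS statistic uses the maximum gap instead); the sketch below follows the slide's area description, and both helper names are ours:

```python
import numpy as np

def mae(pred, true):
    """Mean absolute error: smaller is better (precision measure)."""
    return float(np.mean(np.abs(pred - true)))

def cdf_gap_area(pred, s):
    """Independence measure after the slide's description: the area between
    the empirical CDFs of predicted ratings for S = 0 and S = 1
    (0 when the two conditional distributions coincide)."""
    grid = np.sort(np.unique(pred))
    r0 = np.sort(pred[s == 0])
    r1 = np.sort(pred[s == 1])
    cdf0 = np.searchsorted(r0, grid, side="right") / r0.size
    cdf1 = np.searchsorted(r1, grid, side="right") / r1.size
    gap = np.abs(cdf0 - cdf1)
    # trapezoidal integration of |F0 - F1| over the rating axis
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(grid)))
```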
- 21. Experimental Results. Independence indexes were improved over the original methods, though precision was slightly sacrificed. Contrary to our expectation, there were no clear differences between the Type 1 and Type 2 models, and independence seemed to be less strictly enhanced by the model-based approach.

              ML1M-Year        ML1M-Gender      Flixster
              MAE     KS       MAE     KS       MAE     KS
  PMF         0.685   0.1687   0.685   0.0389   0.655   0.1523
  PMF-r       0.697   0.0271   0.694   0.0050   0.653   0.0165
  LCM         0.729   0.1984   0.729   0.0487   0.671   0.1787
  LCM-mb1     0.717   0.0752   0.719   0.0243   0.672   0.0656
  LCM-mb2     0.720   0.1030   0.720   0.0364   0.672   0.0656

  PMF: original PMF; PMF-r: independence-enhanced PMF; LCM: original LCM; LCM-mb1: Type 1 LCM; LCM-mb2: Type 2 LCM.
- 23. Recommendation Diversity [Ziegler+ 05, Zhang+ 08, Latha+ 09, Adomavicius+ 12]. Recommendation diversity: similar items are not recommended in a single list, to a single user, to all users, or in temporally successive lists — items that are similar in a specified metric are excluded from recommendation results. Diversity thus concerns the mutual relations among results, whereas independence excludes information about a sensitive feature from the results and concerns the relations between results and sensitive values.
- 24. Independence vs Diversity (figure: short-head vs long-tail rating distributions under standard and diversified recommendation). Because a set of recommendations is diversified by abandoning short-head items, the predicted ratings themselves remain biased under diversified recommendation; by enhancing recommendation independence, the predicted ratings themselves are unbiased.
- 25. Privacy-Preserving Data Mining. When recommendation results, R, and sensitive features, S, are statistically independent, the mutual information between them is zero: I(R; S) = 0. In the context of privacy preservation, even if the information about R is disclosed, the information about S will not be exposed. In particular, the notion of t-closeness has a strong connection to recommendation independence.
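The privacy reading of I(R; S) = 0 can be illustrated with a plug-in estimate of mutual information from discrete samples (an illustrative helper, not from the paper):

```python
import math

def mutual_information(r, s):
    """Plug-in estimate of I(R; S) = Σ p(r,s) log(p(r,s) / (p(r) p(s)))
    from paired discrete samples; 0 exactly when R ⫫ S empirically."""
    n = len(r)
    joint, pr, ps = {}, {}, {}
    for ri, si in zip(r, s):
        joint[(ri, si)] = joint.get((ri, si), 0) + 1
        pr[ri] = pr.get(ri, 0) + 1
        ps[si] = ps.get(si, 0) + 1
    mi = 0.0
    for (ri, si), c in joint.items():
        # p(r,s) * log(p(r,s) * n^2 / (count(r) * count(s)))
        mi += (c / n) * math.log(c * n / (pr[ri] * ps[si]))
    return mi
```

When R is fully determined by S the estimate approaches log 2 (for binary variables); when R is unrelated to S it is (near) zero, so disclosing R leaks nothing about S.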
- 26. Conclusions. Contributions: we proposed a new model-based approach to enhance recommendation independence, implemented it in a latent class model, and showed experimentally that it successfully enhances recommendation independence. Future work: developing a regularization approach for a latent class model and comparing its performance with the model-based approach; a Bayesian extension. Acknowledgments: we would like to thank the Grouplens research lab and Dr. Mohsen Jamali for providing datasets. This work is supported by MEXT/JSPS KAKENHI Grant Numbers JP24500194, JP15K00327, and JP16H02864.
