2. Key Takeaways
Fairness in Machine Learning
Need for Fairness in ML-based Recommendation Systems
Fairness-aware Machine Learning Best Practices
Multi-sided Fairness, Platforms and Metrics
3. Why Fairness in Recommendations
Laws and policies motivating fairness: Fair Housing Act, LGBT Fairness Act, disability status protections, and the disparate impact doctrine.
Source - https://www.nytimes.com/2019/05/07/opinion/google-sundar-pichai-privacy.html
4. Ethical Artificial Intelligence – Fairness Origin
Professor Klaus Schwab, Executive Chairman of the World Economic Forum:
“We must address, individually and collectively, moral and ethical issues raised by cutting-edge research in artificial intelligence and biotechnology, which will enable significant life extension, designer babies, and memory extraction.” —Klaus Schwab
Source - https://www.statista.com/chart/18805/highest-penalties-in-privacy-enforcement-actions-worldwide/
5. Foundation of Algorithmic Justice League
Joy Buolamwini, computer scientist and digital activist based at the MIT Media Lab:
“Whether AI will help us reach our aspirations or reinforce the unjust inequalities is ultimately up to us. If we fail to make ethical and inclusive artificial intelligence, we risk losing gains made in civil rights and gender equity under the guise of machine neutrality.”
6. AI Regulation
Sundar Pichai
The head of Google and parent company Alphabet has called for artificial intelligence (AI) to be regulated.
● Fair Marketplace
● Legal obligation
● Social Responsibility
● Business Requirement / Model
Source - https://builtin.com/artificial-intelligence/ai-laws-regulations
7. Unfair Recommendation Systems from Biases
Source - Simple Demographics Often Identify People Uniquely
Types of Biases
● Population – Differences in demographics or other user characteristics.
● Behavioral – Differences across online and offline communities, platforms and contexts.
● Content Production – Lexical, syntactic, semantic, and structural differences in the contents.
● Linking – Connections, interactions, or activities obtained from networks and their attributes.
● Social/Economic – Societal norms, prejudices, economic status.
● Temporal – Seasonal, weekly, or observed at a certain time.
8. Recommendation Systems
Impact of Bias
● Biased customer reviews
● Disparate impact on minority drivers
● Unjust outcomes with low wages
9. Recommend Fairly to All Groups of Users
Best Practices for Removing Bias (fairness workflow): Product Goals, Stakeholder Identification, Analyze, Mitigate, Transparency, Monitor Performance, Escalate.
Source - https://course.ece.cmu.edu/~ece734/lectures/lecture-2018-10-08-deanonymization.pdf
11. Where and How
Google’s Responsible AI – Diversity and Inclusion
● Social science backgrounds
● Gender and sexual orientation
● Diverse identities: race, nationality, religion
12. Fairness for Individual and Groups
Pre-processing, Learning Function, Post-processing
Source - https://arxiv.org/pdf/1906.08732.pdf
Common Fairness Terms
● Envy-freeness – requires that every user prefer their own allocation to that of everyone else; it ignores users’ qualifications and considers preferences.
● Individual (metric) fairness – ignores preferences and requires that similar users be treated similarly.
● Multiple-task fairness – requires that individual fairness is satisfied separately and simultaneously for all categories.
● Inter-category envy-freeness – allows users to specify a set of categories that they “care” about, and guarantees that they receive at least as much of what they care about as any other individual.
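Envy-freeness is straightforward to operationalize once utilities are written down. A minimal sketch (the utility matrix is hypothetical): `U[i][j]` holds the utility user `i` would derive from user `j`'s allocation, and the check compares each row against its own diagonal entry.

```python
import numpy as np

def is_envy_free(utility):
    """utility[i][j] = utility user i would get from user j's allocation.
    Envy-free means every user weakly prefers their own allocation
    (the diagonal) to everyone else's."""
    own = np.diag(utility)
    return bool(np.all(own[:, None] >= utility))

# Hypothetical 3-user example: user 1 envies user 0's allocation.
U = np.array([[5.0, 3.0, 2.0],
              [6.0, 4.0, 1.0],   # U[1][0] = 6 > own utility 4 -> envy
              [2.0, 2.0, 3.0]])
print(is_envy_free(U))  # False
```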
13. Fairness Constraints with 2 latent variables:
ProtectedItemRating(i) ⇒ UnProtectedItemRating(i)
UnProtectedItemRating(i) ⇒ ProtectedItemRating(i)
Protected(u) ∧ Rating(u, i) ∧ ItemGroup(i, g) ⇒ ProtectedItemGroupRating(g)
¬Protected(u) ∧ Rating(u, i) ∧ ItemGroup(i, g) ⇒ UnProtectedItemGroupRating(g)
Recommendation System Fairness Metrics (between advantaged and disadvantaged groups)
1. Statistical independence between recommendation results and the sensitive attribute
2. Value unfairness (estimation error across user groups)
3. Absolute unfairness
4. Underestimation unfairness
5. Overestimation unfairness
6. Non-parity fairness
Source - https://arxiv.org/pdf/1809.09030.pdf
Notation: p_i – d-dimensional vector representing the i-th user; q_j – d-dimensional vector representing the j-th item; u_i and v_j – the i-th user and j-th item, respectively; X – observed ratings.
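Metrics 2 and 3 can be written down directly in this notation. A minimal numpy sketch (the toy matrices are hypothetical, and masking of unobserved entries of X is omitted for brevity): value unfairness compares signed per-item estimation errors across the two groups, while absolute unfairness compares only their magnitudes, so consistent over- or under-estimation in both groups cancels out.

```python
import numpy as np

def value_unfairness(pred, true, disadvantaged):
    """Mean over items of the gap in signed estimation error between
    the disadvantaged user group and the rest."""
    err = pred - true
    e_dis = err[disadvantaged].mean(axis=0)   # per-item mean error, disadvantaged users
    e_adv = err[~disadvantaged].mean(axis=0)  # per-item mean error, advantaged users
    return float(np.mean(np.abs(e_dis - e_adv)))

def absolute_unfairness(pred, true, disadvantaged):
    """Same comparison, but on the magnitude of each group's error."""
    err = pred - true
    e_dis = err[disadvantaged].mean(axis=0)
    e_adv = err[~disadvantaged].mean(axis=0)
    return float(np.mean(np.abs(np.abs(e_dis) - np.abs(e_adv))))

# Toy example: one group over-estimated by 1 star, the other under-estimated.
true = np.zeros((4, 3))
pred = np.array([[1.0] * 3, [1.0] * 3, [-1.0] * 3, [-1.0] * 3])
mask = np.array([True, True, False, False])
print(value_unfairness(pred, true, mask))     # 2.0
print(absolute_unfairness(pred, true, mask))  # 0.0
```

Note how the second metric is 0 here: both groups are mis-estimated by the same magnitude, just in opposite directions, which only value unfairness penalizes.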
14. Recommendation Diversity and User Fairness
Approaches over the CF output: ranking algorithms; top-l CF recommendations; sampling K items uniformly from the K nearest neighbors; a greedy algorithm that adds new recommendations above a threshold.
● Individual diversity – diverse recommendations to each user.
● Aggregate diversity – improve item diversity by recommending each item at least once across all users.
● Limitations – no fairness notion such as differential treatment of two users or two items.
● Impact – disparity among users increases with aggregate diversity.
Source - https://www.researchgate.net/publication/324640535_User_Fairness_in_Recommender_Systems
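Aggregate diversity, as used above, reduces to catalog coverage. A minimal sketch (user and item ids are hypothetical):

```python
def aggregate_diversity(recs, n_items):
    """Fraction of the item catalog recommended to at least one user.
    recs: dict mapping user id -> list of recommended item ids."""
    covered = set().union(*recs.values()) if recs else set()
    return len(covered) / n_items

# Three users, catalog of 10 items; only items 0, 1, 2 ever appear.
recs = {"u1": [0, 1], "u2": [1, 2], "u3": [1]}
print(aggregate_diversity(recs, 10))  # 0.3
```

As the slide notes, pushing this number up says nothing about how evenly users share the burden of receiving the long-tail items.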
15. Different Types of multi-sided Fairness
● Multi-stakeholder Recommender Systems with multiple
goals (e.g. LinkedIn, Etsy)
● Reciprocal Recommendations – bilateral and acceptable to both parties (e.g. job, mentor, business partner)
○ Peer-to-peer recommendation – sharing economy, online advertising and scientific collaboration
● Objective – Maximize system utility
Multi-sided Fairness in Recommendation Systems
Source - http://proceedings.mlr.press/v81/burke18a/burke18a.pdf
16. Regularization based Sparse Linear Method (SLIM) with Neighborhood balance
● Provider Fairness – market diversity; avoid monopoly by recommending minority-owned businesses.
● Consumer Fairness – personalization; disparate impact of recommendations on protected classes.
Multi-sided Fairness – User Based Neighborhoods
Source -http://proceedings.mlr.press/v81/burke18a/burke18a.pdf
Figure: unbalanced vs. balanced neighborhoods (fairness comparison).
● Regression coefficient per <user, item> pair
● Minimize a regularized loss function
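The bullets above can be made concrete. As a sketch only (variable names and hyperparameters are illustrative; this assumes the balanced-neighborhood idea of adding a term that penalizes unequal coefficient mass across the two item groups), a SLIM-style regularized loss with a neighborhood-balance term:

```python
import numpy as np

def balanced_slim_loss(A, W, protected, l2=0.5, l1=0.1, l_bal=1.0):
    """Loss for a balanced-neighborhood SLIM-style model.
    A: user-item rating matrix; W: learned item-item coefficients;
    protected: +1/-1 per item (protected vs. unprotected group).
    The balance term pushes each item's neighborhood toward equal
    coefficient mass on protected and unprotected neighbors."""
    recon = 0.5 * np.linalg.norm(A - A @ W, "fro") ** 2        # reconstruction error
    reg = 0.5 * l2 * np.linalg.norm(W, "fro") ** 2 + l1 * np.abs(W).sum()
    balance = 0.5 * l_bal * np.sum((protected @ W) ** 2)       # penalize p^T W != 0
    return recon + reg + balance

A = np.array([[5.0, 0.0, 3.0], [4.0, 2.0, 0.0]])  # toy ratings
W = np.zeros((3, 3))                              # trivial model: no neighbors used
p = np.array([1.0, -1.0, 1.0])                    # item group labels
print(balanced_slim_loss(A, W, p))                # ~ 0.5 * ||A||_F^2
```

With W = 0 only the reconstruction term survives; training would minimize this loss over W (typically with a zero-diagonal constraint on W, omitted here).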
17. Multi-sided Fairness - Item Based Neighborhoods
Source - https://medium.com/@cfpinela/recommender-systems-user-based-and-item-based-collaborative-filtering-5d5f375a127f
● Use case – exposure to loans from different geographic regions
● Items in the protected group sit in neighborhoods with balanced membership of items from the unprotected group
● Balance between protected and non-protected neighbors for each user
Balanced Neighborhood SLIM – Balance Personalization with Fairness
18. Two-sided Fairness in Two-sided Platforms
● Fair recommendation -> Fair Allocation
○ Maximin Share (MMS) of exposure for Producers
and Envy-Free up to One Good (EF1) fairness for
every customer
● Cardinality constrained fair allocation
○ All items are grouped into disjoint categories and no
agent receives more than a pre-specified number of
items from same category
○ Exactly k items are allocated to each customer
Datasets – GL-CUSTOM (relevance scoring); GL-FACT and LAST.FM (latent factorization)
Source - https://people.mpi-sws.org/~achakrab/papers/patro_FairRec_WWW20.pdf
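The EF1 guarantee builds on a classic fair-division result: greedy round-robin picking with additive valuations is envy-free up to one good. A minimal sketch of that scheme (customer names and scores are hypothetical; this is the textbook round-robin step, not the full FairRec algorithm):

```python
def round_robin(valuations, k):
    """Allocate items by round robin: in each pass every customer, in a
    fixed order, picks their highest-valued remaining item, until each
    holds k items or the pool is empty. With additive valuations this
    scheme is EF1: any envy disappears after removing one item from the
    envied bundle."""
    remaining = set().union(*(v.keys() for v in valuations.values()))
    alloc = {c: [] for c in valuations}
    while remaining and any(len(b) < k for b in alloc.values()):
        for c in alloc:
            if remaining and len(alloc[c]) < k:
                pick = max(remaining, key=lambda i: valuations[c].get(i, 0))
                alloc[c].append(pick)
                remaining.remove(pick)
    return alloc

# Hypothetical relevance scores for two customers over four items.
vals = {"alice": {"a": 4, "b": 3, "c": 2, "d": 1},
        "bob":   {"a": 4, "b": 3, "c": 2, "d": 1}}
print(round_robin(vals, 2))  # {'alice': ['a', 'c'], 'bob': ['b', 'd']}
```

Even though both customers want item "a" most, the alternating picks leave neither envying the other by more than one item.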
19. Fair Recommendation Systems – Advantages and Metrics
Source - https://people.mpi-sws.org/~achakrab/papers/patro_FairRec_WWW20.pdf
Metrics:
1. Fraction of Satisfied Producers
2. Inequality of Product Exposures
3. Exposure Loss on Producers
4. Mean-Average Envy
5. Loss and Disparity in Customer Utilities
● Advantages
○ Economic
○ Social
○ Judicial and
○ Long-term sustainability
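Inequality of product exposures (metric 2) is commonly summarized with a Gini coefficient over per-producer exposure counts. A minimal sketch (the exposure values are hypothetical):

```python
def gini(exposures):
    """Gini coefficient of producer exposures: 0 = perfectly equal,
    values near 1 = exposure concentrated on a few producers."""
    x = sorted(exposures)
    n = len(x)
    total = sum(x)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * v for i, v in enumerate(x))  # rank-weighted sum
    return (2 * cum) / (n * total) - (n + 1) / n

print(gini([10, 10, 10]))  # 0.0  -> exposure is perfectly equal
print(gini([0, 0, 10]))    # ~0.667 -> one producer takes all exposure
```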
20. Evaluate Fair Recommender Systems – Pairwise Comparisons (Original vs. Pairwise Regularization)
● Pairwise regularization to optimize for inter-group pairwise fairness.
● Corresponds to ranking performance.
● Pairwise Fairness – the likelihood of a clicked item being ranked above another relevant unclicked item is the same across both groups (same/opposite), conditioned on the items having the same level of engagement.
● Real-world, scalable, product-grade systems.
Source - http://alexbeutel.com/papers/kdd2019_pairwise_fairness.pdf
Intra-Group and Inter-Group Pairwise Fairness
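The pairwise fairness notion above can be computed directly from logged impressions. A minimal sketch (the record format and toy data are hypothetical, and the engagement-level conditioning is omitted): accuracy is the fraction of clicked-vs-relevant-unclicked pairs that are ordered correctly, split by the clicked item's group.

```python
def pairwise_accuracy_by_group(records):
    """records: (rank, clicked, relevant, group) tuples for one query,
    rank 1 = top of the list. For every clicked item, each relevant
    unclicked item forms a comparison pair; the pair is correct when
    the clicked item was ranked higher. Pairwise fairness asks that
    this accuracy be equal across the clicked item's group."""
    correct, total = {}, {}
    for rank_c, clicked, _, g in records:
        if not clicked:
            continue
        for rank_o, clicked_o, relevant_o, _ in records:
            if clicked_o or not relevant_o:
                continue
            total[g] = total.get(g, 0) + 1
            if rank_c < rank_o:
                correct[g] = correct.get(g, 0) + 1
    return {g: correct.get(g, 0) / total[g] for g in total}

recs = [(1, True, True, "same"), (2, False, True, "same"),
        (3, True, True, "opposite"), (4, False, True, "opposite")]
print(pairwise_accuracy_by_group(recs))  # {'same': 1.0, 'opposite': 0.5}
```

Here the gap of 0.5 between the two groups is exactly what the pairwise regularizer would be asked to shrink.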
22. Conclusion
● Handle sparse data
● Bias from advertisers
● Formulate the right personalization and key outcomes
● Techniques to mitigate harms and misbehaviors
● Computational enhancements for multi-stakeholder fairness optimization
● Network structures that define relationships between providers and users
● Sequential notions of fairness in recommender systems with additional time-dependent constraints
● Equal treatment vs. equal outcome
Source: https://www.researchgate.net/publication/314971193_Optimal_Performance_vs_Fairness_Tradeoff_for_Resource_Allocation_in_Wireless_Systems