Explainable AI
Wagston Staehler & Luciano Alves
March, 2020
The Presenters
Wagston Staehler
Computer Engineer
MSc Computer Science
Researcher at HP
Luciano Alves
BSc Computer Science
MSc Computer Science
ML Engineer at HP
Machine Learning
Dataset → Train model
• Fine tuning
• Optimization
• …
→ Accuracy
→ Precision/Recall
→ Error
→ Deploy!
→ Real world!
→ Data has meaning!
Interpretability and Explainability
• Interpretability: to establish a cause and effect relationship
• Explainability: to explain the internal mechanics of ML or DL in
human terms
• Most of the time, the terms are used interchangeably
• Interpretability, explainability, explainable AI, XAI
Agenda
1. Awkward real-world situations…
2. Why interpretability?
3. Some techniques and examples
4. Final considerations
Awkward real-world situations...
1
Some awkward situations…
https://www.nature.com/articles/d41586-018-05707-8
https://www.technologyreview.com/f/612502/ai-has-a-culturally-biased-worldview-that-google-has-a-plan-to-change/
Some awkward situations…
https://ai.googleblog.com/2018/09/introducing-inclusive-images-competition.html
Some awkward situations…
S. Kaufman, S. Rosset, and C. Perlich. Leakage in data mining: Formulation, detection, and avoidance.
In Knowledge Discovery and Data Mining (KDD), 2011
Cancer detection from mammography data is highly
correlated with…PATIENT ID!
Patient ID | Symptoms | # Days in Hospital | Mammography Feat. 1 | Mammography Feat. 2 | Cancer Diagnosis
1          | abc      |                    |                     |                     | True
2          | def      |                    |                     |                     | True
…
499        |          |                    |                     |                     | True
500        |          |                    |                     |                     | False
501        |          |                    |                     |                     | False
502        |          |                    |                     |                     | False

No source obfuscation ⇒ Data leakage!
if (patient ID < 500) then cancer = True
One more awkward situation…
• Cost-effective Health Care (CEHC) built models to predict probability
of death for patients [Cooper et al. 97]
- Interpretable Machine Learning, Been Kim from Google Brain, Tutorial ICML2017
- Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission,
Caruana et al. ’15
Has asthma? ⇒ Lower risk for pneumonia
• Why? Doctors consider asthma patients high risk ⇒ aggressive treatment ⇒ better observed outcomes in the training data
Why interpretability?
2
Motivation
• Building trust
• Informing human decision-making
• “Insights can be more valuable than predictions”
• Legal/Ethics
• Prove that the system is legally compliant and that it does not discriminate against any
particular group
• General Data Protection Regulation (GDPR) requires explanations on automated
decision-making and profiling (https://gdpr-info.eu)
• Brazilian LGPD, Lei Geral de Proteção de Dados Pessoais
Motivation
• Safety
• Make sure the system is taking safe decisions
• Debugging
• Understand why a system doesn’t work
• The model has good accuracy on the validation set but does not perform well in the wild
• Science
• Understand how the system reached its conclusion and learn from it
Motivation
• Informing feature engineering
• Create new features using transformations of your raw data or other features to
improve model accuracy
• Directing future data collection
• Understand the value of the features you already have and, therefore, decide
which new data will be most helpful
Interpretability
• Helps a lot with
• Privacy
• Accountability
• Trust
• Causality
• May not be required when
• No significant consequences
• Well known field
Types of Interpretable Methods
• Pre-modelling explainability
• Understand/describe data used to develop models
• Explainable modelling
• Develop inherently more explainable models
• Post-modelling explainability
• Extract explanations to describe pre-developed models
https://towardsdatascience.com/the-how-of-explainable-ai-post-modelling-explainability-8b4cbc7adf5f
Some practical techniques
and examples
3
Data Visualization
• By understanding the data well, we know what to expect from the trained model
Feature Importance
pickup_longitude pickup_latitude dropoff_longitude dropoff_latitude passenger_count fare_amount
-73.982738 40.761270 -73.991242 40.750562 2 5.7
-73.987130 40.733143 -73.991567 40.758092 1 7.7
-73.968095 40.768008 -73.956655 40.783762 1 5.3
-73.980002 40.751662 -73.973802 40.764842 1 7.5
-73.951300 40.774138 -73.990095 40.751048 1 16.5
https://www.kaggle.com/dansbecker/permutation-importance
(from eli5.sklearn import PermutationImportance)
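Permutation importance can also be sketched end-to-end with scikit-learn's `permutation_importance` (a stand-in for the eli5 `PermutationImportance` mentioned above). The synthetic data and feature names below are illustrative, not the actual taxi-fare dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic stand-in: fare depends on trip displacement, not passenger count.
rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(500, 3))  # [dlon, dlat, passenger_count]
y = 10 * np.abs(X[:, 0]) + 10 * np.abs(X[:, 1]) + rng.normal(0, 0.1, 500)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Shuffle each column in turn and measure how much the model's score drops.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for name, imp in zip(["dlon", "dlat", "passengers"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Permuting a feature the model relies on (here the displacement columns) destroys the score; permuting an irrelevant one (passenger count) barely changes it.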
Partial Dependence Plots
• Partial dependence plots show how a feature affects predictions
• Useful to answer questions like:
• How would similarly sized houses be priced in different areas?
• Are predicted health differences between two groups due to differences in
their diets, or due to some other factor?
Partial Dependence Plots (Example)
https://christophm.github.io/interpretable-ml-book/pdp.html
from matplotlib import pyplot as plt
from pdpbox import pdp

# Compute the partial dependence of the model on a single feature
partial_plot = pdp.pdp_isolate(model=model, dataset=dataset,
                               model_features=model_features, feature=feature)
pdp.pdp_plot(partial_plot, 'Partial Plot')
plt.show()
LIME (Local Interpretable Model-Agnostic Explanation)
• Do you trust this algorithm to work well in the real world?
• Classification: Wolf or Husky?
https://arxiv.org/abs/1602.04938
"Why Should I Trust You?”, Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
LIME (Local Interpretable Model-Agnostic Explanation)
• Do you trust this algorithm to work well in the real world?
• How do you think the algorithm is able to distinguish between
these photos of wolves and huskies?
LIME (Local Interpretable Model-Agnostic Explanation)
Snow detector!!!
LIME
1. Select the instance of interest.
2. Generate a perturbed dataset of small
variations of that instance.
3. Get predictions for the perturbed dataset
from the original model.
4. Weight the sampled instances by their
proximity to the instance of interest.
5. Train a weighted, interpretable model
on the perturbed dataset.
6. Explain the prediction by interpreting
the local model.
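The six steps can be sketched by hand in a few lines. The black-box model, kernel width, and data below are hypothetical stand-ins chosen only to make the sketch runnable:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A stand-in black-box model on two features.
rng = np.random.RandomState(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] ** 2 > 0.5).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = np.array([1.0, 0.0])                       # 1. instance of interest
Z = x0 + rng.normal(scale=0.5, size=(200, 2))   # 2. perturbed samples
p = black_box.predict_proba(Z)[:, 1]            # 3. black-box predictions
d = np.linalg.norm(Z - x0, axis=1)
w = np.exp(-d ** 2 / 0.5)                       # 4. proximity weights (RBF kernel)
local = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)  # 5. weighted linear surrogate
print(local.coef_)                              # 6. local feature effects
```

The surrogate's coefficients are the explanation: they describe how the black box behaves in the neighborhood of `x0`, not globally.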
LIME Example: Tabular Data
A. Random forest predictions given features x1 and x2
B. Instance of interest (yellow dot) and data sampled from a normal distribution (black dots)
C. Assign higher weight to points near the instance of interest
D. Colors and signs of the grid show the classifications of the locally learned model from the weighted samples (− grey, + blue)
LIME Example: Text
• Text differs from tabular data
• Model: a spam classifier
• Perturb by randomly removing words

ID  | CONTENT                                 | CLASS
267 | PSY is a good guy                       | 0
173 | For Christmas Song visit my channel! ;) | 1

Instance of interest and sampled data instances (1 = word kept, 0 = word removed):

For | Christmas | Song | visit | my | channel | ;) | prob | weight
1   | 0         | 1    | 1     | 0  | 0       | 1  | 0.09 | 0.57
0   | 1         | 1    | 1     | 1  | 0       | 1  | 0.09 | 0.71
1   | 0         | 0    | 1     | 1  | 1       | 1  | 0.99 | 0.71
1   | 0         | 1    | 1     | 1  | 1       | 1  | 0.99 | 0.86
0   | 1         | 1    | 1     | 0  | 0       | 1  | 0.09 | 0.57

• Weight = fraction of words kept, e.g. 5/7 ≈ 0.71 for a sample with two of the seven words removed
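Sampling those rows amounts to drawing random keep/drop masks over the words, with the weight computed as the fraction of words kept (5/7 ≈ 0.71 when two of the seven words are removed). A small sketch, with all names hypothetical:

```python
import random

tokens = "For Christmas Song visit my channel ;)".split()

def sample_mask(tokens, rng):
    """Randomly keep (1) or drop (0) each word; weight = fraction kept."""
    mask = [rng.randint(0, 1) for _ in tokens]
    if sum(mask) == 0:                      # avoid the empty sentence
        mask[rng.randrange(len(tokens))] = 1
    text = " ".join(t for t, m in zip(tokens, mask) if m)
    weight = sum(mask) / len(mask)
    return mask, text, weight

rng = random.Random(0)
mask, text, weight = sample_mask(tokens, rng)
print(mask, repr(text), round(weight, 2))
```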
LIME Example: Text

Case | Label Prob | Feature  | Feature Weight
1    | 0.0872151  | good     | 0.000000
1    | 0.0872151  | a        | 0.000000
1    | 0.0872151  | PSY      | 0.000000
2    | 0.9939759  | channel! | 6.908755
2    | 0.9939759  | visit    | 0.000000
2    | 0.9939759  | Song     | 0.000000

• The word "channel!" indicates a high probability of spam
• For the non-spam comment, all features receive an estimated weight of zero
LIME Example: Image
• Image differs from text and tabular data
• Segment the image into interpretable components (superpixels of contiguous pixels grouped by color similarity)
• Randomly turn components ON or OFF to create perturbed images
(Original Image → Interpretable Components)
LIME Example: Image
• Original image: P(tree frog) = 0.54
• Perturbed instances: P(tree frog) = 0.85, 0.00001, 0.52
• Explanation: locally weighted regression on the perturbed instances
Shapley Values
• 1953 - A group of differently skilled participants are all cooperating
with each other for a collective reward. How should the reward be
fairly divided amongst the group?
• Definition: A prediction can be explained by assuming that each
feature of the instance is a player in a game where the prediction is
the payout. The Shapley value – a method from coalitional game
theory – tells us how to fairly distribute the payout among the
features.
Shapley, Lloyd S. “A value for n-person games.” Contributions to the Theory of Games 2.28 (1953): 307-317.
Shapley Values
• Alice, Bob, Celine write papers as members of a research group
v(c) = 80, if c = {A}
       56, if c = {B}
       70, if c = {C}
       80, if c = {A, B}
       85, if c = {A, C}
       72, if c = {B, C}
       90, if c = {A, B, C}

φᵢ(G) = (1/n!) Σ_{π ∈ Π(n)} Δ_π^G(i)
Shapley Values

v(c) = 80 if c = {A}; 56 if c = {B}; 70 if c = {C}; 80 if c = {A, B}; 85 if c = {A, C}; 72 if c = {B, C}; 90 if c = {A, B, C}

π         | Δ_π^G
(A, B, C) | (80, 0, 10)
(A, C, B) | (80, 5, 5)
(B, A, C) | (24, 56, 10)
(B, C, A) | (18, 56, 16)
(C, A, B) | (15, 5, 70)
(C, B, A) | (18, 2, 70)
φ         | (39.17, 20.67, 30.17)

φᵢ(G) = (1/n!) Σ_{π ∈ Π(n)} Δ_π^G(i)
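The table above can be reproduced by averaging each author's marginal contribution over all 3! orderings, which is exactly the Shapley formula. A direct sketch of the definition:

```python
from itertools import permutations

# Coalition values from the slides: papers by Alice (A), Bob (B), Celine (C).
v = {frozenset(): 0,
     frozenset("A"): 80, frozenset("B"): 56, frozenset("C"): 70,
     frozenset("AB"): 80, frozenset("AC"): 85, frozenset("BC"): 72,
     frozenset("ABC"): 90}

def shapley(players):
    """Average each player's marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition = coalition | {p}
    return {p: val / len(perms) for p, val in phi.items()}

print(shapley("ABC"))  # ≈ {'A': 39.17, 'B': 20.67, 'C': 30.17}
```

Note that the values sum to v({A, B, C}) = 90: the payout is fully distributed, one of the fairness axioms that uniquely characterize the Shapley value.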
Other Techniques
• DeepLIFT
• Layerwise Relevance Propagation
• QII, Quantitative Input Influence
• Integrated Gradients
• Grad-CAM
Final Considerations
4
Final Thoughts
Sharayu Rane (2018)
Final Thoughts
• Model complexity is increasing at an insane pace!
• That requires tools to help interpret models
• Interpretability can assure model quality
• High generalization, compliance with law and ethics
• Increase model acceptance
• Without trust, we may lose the benefits of ML!
References
• Kaggle Machine Learning Explainability
• https://www.kaggle.com/learn/machine-learning-explainability
• Interpretable Machine Learning, Been Kim
• https://people.csail.mit.edu/beenkim/papers/BeenK_FinaleDV_ICML2017_tutorial.pdf
• How to improve interpretability of machine learning systems
• https://hub.packtpub.com/improve-interpretability-machine-learning-systems
• Why should I trust you? (LIME paper)
• https://arxiv.org/abs/1602.04938
• Distill Feature Visualization
• https://distill.pub/2017/feature-visualization
• Book: Interpretable Machine Learning – A Guide for Making Black Box Models Explainable, Christoph Molnar
• https://christophm.github.io/interpretable-ml-book
References
• GitHub repositories
• https://github.com/slundberg/shap
• https://github.com/marcotcr/lime
Thank you!!
Interpretability and Adversarial Examples
Protecting Voice Controlled Systems Using Sound Source Identification Based on Acoustic Cues,
Yuan Gong, Christian Poellabauer
Interpretability and Adversarial Examples
Does Interpretability of Neural Networks Imply Adversarial Robustness?
Adam Noack, Isaac Ahern, Dejing Dou, Boyang Li
https://arxiv.org/abs/1912.03430
• This study indicates that
improving interpretability
also helps with adversarial
robustness
Automated ML
[Pipeline diagram: historical data is split into training and test sets; model training/building yields feature importance; testing the model produces predictions with explanations; in production, the deployed model runs feature extraction on streaming data, predicts, and results are evaluated.]
Good Interpretability
[Chart: techniques rated from good to bad interpretability.]
 
Probability Grade 10 Third Quarter Lessons
Probability Grade 10 Third Quarter LessonsProbability Grade 10 Third Quarter Lessons
Probability Grade 10 Third Quarter Lessons
 
Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...
Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...
Mg Road Call Girls Service: 🍓 7737669865 🍓 High Profile Model Escorts | Banga...
 
BabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptxBabyOno dropshipping via API with DroFx.pptx
BabyOno dropshipping via API with DroFx.pptx
 
Midocean dropshipping via API with DroFx
Midocean dropshipping via API with DroFxMidocean dropshipping via API with DroFx
Midocean dropshipping via API with DroFx
 
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
Digital Advertising Lecture for Advanced Digital & Social Media Strategy at U...
 
CebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptxCebaBaby dropshipping via API with DroFX.pptx
CebaBaby dropshipping via API with DroFX.pptx
 

Explainable AI

  • 1. Explainable AI Wagston Staehler & Luciano Alves March, 2020
  • 2. The Presenters Wagston Staehler Computer Engineer MSc Computer Science Researcher at HP Luciano Alves BSc Computer Science MSc Computer Science ML Engineer at HP
  • 3. Machine Learning Dataset → Train Model (Fine tuning, Optimization, …) → Accuracy, Precision/Recall, Error → Deploy! → Real-world! → Data has meaning!
  • 4. Interpretability and Explainability • Interpretability: to establish a cause and effect relationship • Explainability: to explain the internal mechanics of ML or DL in human terms • Most of the time, the terms are interchangeable • Interpretability, explainability, explainable AI, XAI
  • 5. Agenda 1. Awkward real-world situations… 2. Why interpretability? 3. Some techniques and examples 4. Final considerations
  • 9. Some awkward situations… S. Kaufman, S. Rosset, and C. Perlich. Leakage in data mining: Formulation, detection, and avoidance. In Knowledge Discovery and Data Mining (KDD), 2011 • Cancer detection from mammography data is highly correlated with… PATIENT ID! • Table columns: Patient ID, Symptoms, # Days in Hospital, Mammography Feat. 1, Mammography Feat. 2, Cancer Diagnosis — IDs 1, 2, …, 499 are diagnosed True; IDs 500, 501, 502, … are False • No source obfuscation ⇒ Data leakage! if (patient ID < 500) then cancer=True
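The leakage on this slide can be reproduced in a few lines (synthetic data standing in for the slide's table): a depth-1 decision tree given only the patient ID separates the classes perfectly, which is exactly the `if (patient ID < 500)` rule.

```python
# Toy illustration of the patient-ID leakage: the dataset was written out
# with cancer cases first, so the patient ID alone predicts the label.
from sklearn.tree import DecisionTreeClassifier

# Synthetic data mimicking the slide: IDs below 500 are cancer cases.
patient_ids = [[i] for i in range(1, 1001)]
cancer = [pid[0] < 500 for pid in patient_ids]

# A depth-1 tree (a single split) learns "ID < 500 => True" exactly.
stump = DecisionTreeClassifier(max_depth=1).fit(patient_ids, cancer)
print(stump.score(patient_ids, cancer))  # perfect "accuracy" from leakage alone
```

Any model with access to this column would look excellent in validation and fail completely on new patients.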
  • 10. One more awkward situation… • Cost-effective Health Care (CEHC) built models to predict the probability of death for pneumonia patients [Cooper et al. ’97] - Interpretable Machine Learning, Been Kim from Google Brain, Tutorial ICML2017 - Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, Caruana et al. ’15 • Has asthma? ⇒ Lower risk for pneumonia: doctors consider asthmatics high risk and give them aggressive treatment, which lowers their observed mortality
  • 12. Motivation • Building trust • Informing human decision-making • “Insights can be more valuable than predictions” • Legal/Ethics • Prove that the system is legally compliant and does not discriminate against any particular group • General Data Protection Regulation (GDPR) requires explanations of automated decision-making and profiling (https://gdpr-info.eu) • Brazilian LGPD, Lei Geral de Proteção de Dados Pessoais
  • 13. Motivation • Safety • Make sure the system is making safe decisions • Debugging • Understand why a system doesn’t work • The model has good accuracy on the validation set but does not perform well in the wild • Science • Understand how the system reached a conclusion and learn from it
  • 14. Motivation • Informing feature engineering • Create new features using transformations of your raw data or other features to improve model accuracy • Directing future data collection • Understand the value of the features that you already have and, therefore, decide what new values will be more helpful
  • 15. Interpretability • Helps a lot with • Privacy • Accountability • Trust • Causality • May not be required when • There are no significant consequences • The field is well known
  • 16. Types of Interpretable Methods • Pre-modelling explainability • Understand/describe data used to develop models • Explainable modelling • Develop inherently more explainable models • Post-modelling explainability • Extract explanations to describe pre-developed models https://towardsdatascience.com/the-how-of-explainable-ai-post-modelling-explainability-8b4cbc7adf5f
  • 18. Data Visualization • By understanding well the data, we know what to expect from the trained model
  • 19. Feature Importance — taxi-fare dataset with columns (pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, fare_amount): (-73.982738, 40.761270, -73.991242, 40.750562, 2, 5.7); (-73.987130, 40.733143, -73.991567, 40.758092, 1, 7.7); (-73.968095, 40.768008, -73.956655, 40.783762, 1, 5.3); (-73.980002, 40.751662, -73.973802, 40.764842, 1, 7.5); (-73.951300, 40.774138, -73.990095, 40.751048, 1, 16.5) https://www.kaggle.com/dansbecker/permutation-importance (from eli5.sklearn import PermutationImportance)
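The slide points at eli5's PermutationImportance; an equivalent minimal sketch using scikit-learn's built-in `permutation_importance` on synthetic data (the feature meanings are illustrative, not the actual taxi dataset):

```python
# Permutation importance: shuffle one column at a time and measure how much
# the validation score drops. Features whose shuffling hurts most matter most.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # e.g. distance, hour of day, passenger count
y = 5.0 * X[:, 0] + rng.normal(scale=0.1, size=500)  # only feature 0 matters

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
print(result.importances_mean)  # feature 0 dominates
```

eli5's `PermutationImportance` wraps the same idea with a convenient `show_weights` display for notebooks.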
  • 20. Partial Dependence Plots • Partial dependence plots show how a feature affects predictions • Useful to answer questions like: • How would similarly sized houses be priced in different areas? • Are predicted health differences between two groups due to differences in their diets, or due to some other factor?
  • 21. Partial Dependence Plots (Example) https://christophm.github.io/interpretable-ml-book/pdp.html
from matplotlib import pyplot as plt
from pdpbox import pdp
partial_plot = pdp.pdp_isolate(model, dataset, model_features, feature)
pdp.pdp_plot(partial_plot, 'Partial Plot')
plt.show()
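Under the hood, a one-dimensional partial dependence curve is just an average over counterfactual predictions; a minimal by-hand sketch on synthetic data (the helper name is made up, not a pdpbox internal):

```python
# Partial dependence by hand: fix the feature of interest at each grid value
# for every row, then average the model's predictions over the dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))
y = 2.0 * X[:, 0] + rng.normal(scale=0.05, size=300)  # target rises with x0

model = GradientBoostingRegressor(random_state=0).fit(X, y)

def partial_dependence_1d(model, X, feature, grid):
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value          # force the feature to this value
        averages.append(model.predict(X_mod).mean())
    return averages

grid = np.linspace(-1, 1, 10)
pd_curve = partial_dependence_1d(model, X, feature=0, grid=grid)
print(pd_curve)  # roughly linear, increasing with x0
```

pdpbox and scikit-learn's `PartialDependenceDisplay` compute essentially this curve and add the plotting.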
  • 22. LIME (Local Interpretable Model-Agnostic Explanation) • Do you trust this algorithm to work well in the real world? • Classification: Wolf or Husky? https://arxiv.org/abs/1602.04938 "Why Should I Trust You?”, Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
  • 23. LIME (Local Interpretable Model-Agnostic Explanation) • Do you trust this algorithm to work well in the real world? • How do you think the algorithm is able to distinguish between these photos of wolves and huskies?
  • 24. • Do you trust this algorithm to work well in the real world? • How do you think the algorithm is able to distinguish between these photos of wolves and huskies? LIME (Local Interpretable Model-Agnostic Explanation) Snow detector!!!
  • 25. LIME 1. Select the instance of interest. 2. Perturb the dataset with small variations of the instance. 3. Get predictions for these new samples from the original model. 4. Weight the sampled instances by proximity to the instance of interest. 5. Train a weighted, interpretable model on the perturbed dataset. 6. Explain the prediction by interpreting the local model.
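The six steps can be sketched in a few lines for tabular data (the black-box model and proximity kernel here are toy stand-ins, not the official lime package):

```python
# Minimal LIME-style explanation: perturb around the instance, weight samples
# by proximity, and fit a weighted linear surrogate to the black box.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: probability driven only by feature 0.
    return 1.0 / (1.0 + np.exp(-4.0 * X[:, 0]))

instance = np.array([0.5, -0.2])                           # 1. instance of interest
samples = instance + rng.normal(scale=0.5, size=(500, 2))  # 2. perturbations
preds = black_box(samples)                                 # 3. black-box predictions
dist = np.linalg.norm(samples - instance, axis=1)
weights = np.exp(-(dist ** 2))                             # 4. proximity kernel
local = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)  # 5. surrogate
print(local.coef_)                                         # 6. feature 0 explains it
```

The surrogate's coefficients are the explanation: here feature 0 gets nearly all the weight, matching how the black box actually behaves near the instance.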
  • 26. LIME Example: Tabular Data A. Random forest predictions given features x1 and x2
  • 27. LIME Example: Tabular Data A. Random forest predictions given features x1 and x2 B. Instance of interest (yellow dot) and data sampled from a normal distribution (black dots)
  • 28. LIME Example: Tabular Data A. Random forest predictions given features x1 and x2 B. Instance of interest (yellow dot) and data sampled from a normal distribution (black dots) C. Assign higher weight to points near the instance of interest
  • 29. LIME Example: Tabular Data A. Random forest predictions given features x1 and x2 B. Instance of interest (yellow dot) and data sampled from a normal distribution (black dots) C. Assign higher weight to points near the instance of interest D. Colors and signs of the grid show the classifications of the locally learned model from the weighted samples
  • 30. LIME Example: Text • Text differs from tabular data • Model Spam classifier • Randomly removed words ID CONTENT CLASS 267 PSY is a good guy 0 173 For Christmas Song visit my channel! ;) 1 For Christmas Song visit my channel ;) prob weight 1 0 1 1 0 0 1 0.09 0.57 0 1 1 1 1 0 1 0.09 0.71 1 0 0 1 1 1 1 0.99 0.71 1 0 1 1 1 1 1 0.99 0.86 0 1 1 1 0 0 1 0.09 0.57 Instance of Interest Sampled Data Instances
  • 31. LIME Example: Text • Text differs from tabular data • Model: spam classifier • Randomly remove words ID CONTENT CLASS 267 PSY is a good guy 0 173 For Christmas Song visit my channel! ;) 1 For Christmas Song visit my channel ;) prob weight 1 0 1 1 0 0 1 0.09 0.57 0 1 1 1 1 0 1 0.09 0.71 1 0 0 1 1 1 1 0.99 0.71 1 0 1 1 1 1 1 0.99 0.86 0 1 1 1 0 0 1 0.09 0.57 Instance of Interest Sampled Data Instances weight = 1 − 2/7 = 5/7 ≈ 0.71 (two of the seven words removed)
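The proximity weight in the table's last column can be computed directly: one minus the proportion of words removed, i.e. the fraction of the original comment's words that were kept (a minimal sketch; the helper name is made up):

```python
# LIME proximity weight for a perturbed text sample: the fraction of the
# original comment's 7 words that were kept (1 minus the proportion removed).
original = "For Christmas Song visit my channel! ;)".split()  # 7 tokens

def lime_text_weight(mask):
    # mask[i] == 1 keeps word i of the original comment, 0 removes it
    assert len(mask) == len(original)
    return sum(mask) / len(mask)

# Second sampled row of the table: 2 of 7 words removed
print(round(lime_text_weight([0, 1, 1, 1, 1, 0, 1]), 2))  # 0.71
```

The first row of the table, with three words removed, gets weight 4/7 ≈ 0.57 the same way.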
  • 32. LIME Example: Text — surrogate model weights: case 1 (prob 0.0872151): good 0.000000, a 0.000000, PSY 0.000000; case 2 (prob 0.9939759): channel! 6.908755, visit 0.000000, Song 0.000000 • The word “channel” indicates a high probability of spam • For the non-spam comment, every word gets an estimated weight of zero
  • 33. LIME Example: Image • Image differs from text and tabular data • Image segmentation: group pixels into interpretable components (superpixels) by color similarity • Randomly turn superpixels ON or OFF to create perturbed samples Original Image Interpretable Components
  • 34. LIME Example: Image Original Image P(tree frog) = 0.54 Perturbed Instances P(tree frog) 0.85 0.00001 0.52 Explanation Locally Weighted regression
  • 35. Shapley Values • 1953 - A group of differently skilled participants are all cooperating with each other for a collective reward. How should the reward be fairly divided amongst the group? • Definition: A prediction can be explained by assuming that each feature of the instance is a player in a game where the prediction is the payout. The Shapley value – a method from coalitional game theory – tells us how to fairly distribute the payout among the features. Shapley, Lloyd S. “A value for n-person games.” Contributions to the Theory of Games 2.28 (1953): 307-317.
  • 36. Shapley Values • Alice, Bob, Celine write papers as members of a research group • v(c) = 80 if c = {A}; 56 if c = {B}; 70 if c = {C}; 80 if c = {A, B}; 85 if c = {A, C}; 72 if c = {B, C}; 90 if c = {A, B, C} • φ_i(G) = (1/n!) Σ_{π ∈ Π_n} Δ_π^G(i)
  • 37.–43. Shapley Values • Marginal contribution vectors δ_π^G = (Δ for A, Δ for B, Δ for C) for each ordering π: (A, B, C) → (80, 0, 10); (A, C, B) → (80, 5, 5); (B, A, C) → (24, 56, 10); (B, C, A) → (18, 56, 16); (C, A, B) → (15, 5, 70); (C, B, A) → (18, 2, 70) • Averaging over all orderings: φ = (39.17, 20.67, 30.17)
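The averages on these slides can be verified by enumerating all 3! orderings directly, using the slides' characteristic function v (a minimal sketch; variable and function names are ours):

```python
# Shapley values for the Alice/Bob/Celine example: average each member's
# marginal contribution over all orderings in which they could join.
from itertools import permutations

v = {frozenset(): 0, frozenset("A"): 80, frozenset("B"): 56, frozenset("C"): 70,
     frozenset("AB"): 80, frozenset("AC"): 85, frozenset("BC"): 72,
     frozenset("ABC"): 90}

def shapley(players, v):
    phi = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]  # marginal contribution
            coalition = coalition | {p}
    return {p: phi[p] / len(orderings) for p in players}

phi = shapley("ABC", v)
print(phi)  # approximately A: 39.17, B: 20.67, C: 30.17
```

For ML explanations, the "players" are features and v is the model's expected prediction given a subset of feature values; the SHAP library approximates exactly this computation.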
  • 44. Other Techniques • DeepLIFT • Layerwise Relevance Propagation • QII, Quantitative Input Influence • Integrated Gradients • Grad-CAM
  • 47. Final Thoughts • Model complexity is increasing at an insane pace! • This calls for tools that help interpret models • Interpretability can assure model quality • High generalization, compliance with law and ethics • Increased model acceptance • Without trust, we may lose the benefits of ML!
  • 48. References • Kaggle Machine Learning Explainability • https://www.kaggle.com/learn/machine-learning-explainability • Interpretable Machine Learning, Been Kim • https://people.csail.mit.edu/beenkim/papers/BeenK_FinaleDV_ICML2017_tutorial.pdf • How to improve interpretability of machine learning systems • https://hub.packtpub.com/improve-interpretability-machine-learning-systems • Why should I trust you? (LIME paper) • https://arxiv.org/abs/1602.04938 • Distill Feature Visualization • https://distill.pub/2017/feature-visualization • Book: Interpretable Machine Learning – A Guide for Making Black Box Models Explainable, Christoph Molnar • https://christophm.github.io/interpretable-ml-book
  • 51. Interpretability and Adversarial Examples Protecting Voice Controlled Systems Using Sound Source Identification Based on Acoustic Cues, Yuan Gong, Christian Poellabauer
  • 52. Interpretability and Adversarial Examples Does Interpretability of Neural Networks Imply Adversarial Robustness? Adam Noack, Isaac Ahern, Dejing Dou, Boyang Li https://arxiv.org/abs/1912.03430 • This study indicates that improving interpretability also helps with adversarial robustness
  • 53. Automated ML — pipeline diagram: Creation (Historical Data → Feature Extraction → Training Set / Test Set → Model Training/Building → Feature Importance → Test Model → Predictions → Explaining Predictions → Evaluate Results, gated on good interpretability → Deployed Model) and Production (Streaming Data → Feature Extraction → Deployed Model → Predict)