Challenges in Deploying
Explainable Machine Learning
Umang Bhatt
PhD Student, University of Cambridge
Research Fellow, Partnership on AI
Student Fellow, Leverhulme Centre for the Future of Intelligence
@umangsbhatt
This Talk
1. How are existing approaches to explainability
used in practice?

2. Can existing explainability tools be used to ensure
model fairness?

3. How can we create explainability tools for external
stakeholders?
Explainable Machine Learning
in Deployment
Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly,
Yuhuan Jia, Joydeep Ghosh, Ruchir Puri, José Moura, and Peter Eckersley
Appeared at the ACM Conference on Fairness, Accountability, and Transparency 2020

https://arxiv.org/abs/1909.06342
Growth of Transparency Literature
Many algorithms have been proposed to “explain”
machine learning model output

We study how organizations use these algorithms, if at all
Our Approach
30-minute to 2-hour semi-structured interviews

50 individuals from 30 organizations interviewed
Shared Language
• Transparency: Providing stakeholders with relevant information about
how the model works; this includes documentation of the training
procedure, analysis of the training data distribution, code releases,
feature-level explanations, etc.

• Explainability: Providing insights into a model’s behavior for specific
datapoint(s)
Our Questions
• What type of explanations have you used (e.g., feature-based, sample-
based, counterfactual, or natural language)?

• Who is the audience for the model explanation (e.g., research scientists,
product managers, domain experts, or users)?

• In what context have you deployed the explanations (e.g., informing the
development process, informing human decision makers about the model,
or informing the end user on how actions were taken based on the
model’s output)?
Types of Explanations
Feature Importance · Sample Importance · Counterfactuals
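To make these three types concrete, here is a minimal sketch of each for a differentiable PyTorch classifier. This is my illustration, not code from the talk; the names model, x_train, and target, the gradient-saliency choice for feature importance, and the nearest-neighbor stand-in for sample importance are all assumptions.

import torch
import torch.nn.functional as F

def feature_importance(model, x):
    # Gradient saliency: sensitivity of the top class score to each input feature.
    x = x.clone().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    return x.grad  # one importance vector per row of x

def sample_importance(x_train, x, k=5):
    # Crude stand-in for influence-style methods: the k nearest training points.
    dists = torch.cdist(x, x_train)  # (n_test, n_train) Euclidean distances
    return dists.argsort(dim=1)[:, :k]

def counterfactual(model, x, target, steps=100, lr=0.1):
    # Gradient descent on the input until the model prefers the target class.
    cf = x.clone().requires_grad_(True)
    opt = torch.optim.Adam([cf], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(model(cf), target)
        opt.zero_grad(); loss.backward(); opt.step()
    return cf.detach()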
Stakeholders
Executives · Engineers · End Users · Regulators
Findings
1. Explainability is used for debugging internally

2. Goals of explainability are not clearly defined
within organizations

3. Technical limitations make explainability hard
to deploy in real-time
Use Cases
Establishing Explainability Goals
1. Identify stakeholders: Who will consume the explanation?

2. Engage stakeholders: What purpose will the explanation serve?

3. Devise workflow: How will the explanation be used in practice?
Limitations
• Spurious correlations exposed by feature-level
explanations

• No causal underpinnings to the models themselves
Limitations (cont.)
• Sample importance is computationally infeasible to
deploy at scale

• Privacy concerns of model inversion exist
This Talk
1. How are existing approaches to explainability
used in practice? Only used by developers

2. Can existing explainability tools be used to ensure
model fairness?

3. How can we create explainability tools for external
stakeholders?
You Shouldn’t Trust Me: Learning
Models Which Conceal Unfairness
From Multiple Explanation Methods
Botty Dimanov, Umang Bhatt, Mateja Jamnik, and Adrian Weller
To appear at the European Conference on Artificial Intelligence 2020

http://ecai2020.eu/papers/72_paper.pdf
Takeaway
Feature importance reveals nothing reliable about model fairness.
Why do we care?
(Figure: feature-importance plots, importance vs. feature, for Model A, labeled Unfair, and Model B, labeled Fair.)
Can we manipulate explanations?
Modify explanations via adversarial perturbations of inputs
• Ghorbani, Abid, and Zou. AAAI 2019
• Dombrowski et al. NeurIPS 2019
• Slack et al. AIES 2020
Control visual explanations via adversarial perturbations of parameters
• Heo, Joo, and Moon. NeurIPS 2019
This work: downgrade explanations via adversarial
perturbations of parameters to hide unfairness
Our Setup
Classifier: f : X ↦ Y, with explanation g(f, x)j for a target feature j
Our Goal: perturb the trained parameters, fθ → fθ+δ
Desirable Properties
• Model Similarity: ∀i, fθ+δ(x(i)) ≈ fθ(x(i))
• Low Target Feature Attribution: ∀i, |g(fθ+δ, x(i))j| ≪ |g(fθ, x(i))j|
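As a concrete check of these two properties, one could run something like the sketch below after an attack. The input-gradient stand-in for g, the names model_orig and model_adv, and the tolerances are my illustrative assumptions.

import torch

def gradient_attr(model, x, j):
    # Input-gradient stand-in for g(f, x)j: d(top class score) / d(feature j).
    x = x.clone().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    return x.grad[:, j]

def check_attack(model_orig, model_adv, x, j):
    with torch.no_grad():  # model similarity: outputs barely move
        similar = torch.allclose(model_adv(x), model_orig(x), atol=1e-2)
    # low target feature attribution: attribution on j shrinks sharply
    shrunk = (gradient_attr(model_adv, x, j).abs().mean()
              < 0.1 * gradient_attr(model_orig, x, j).abs().mean())
    return similar, bool(shrunk)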
Our Method
Adversarial Explanation Attack:
argminδ L′ = L(fθ+δ, x, y) + (α/n) ‖∇X:,j L(fθ+δ, x, y)‖p
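A minimal PyTorch rendering of this objective, assuming a cross-entropy task loss L; this is a sketch of the attack step, not the authors' released code.

import torch
import torch.nn.functional as F

def attack_step(model, opt, x, y, j, alpha=1.0, p=1):
    # One optimization step of L' = L(fθ+δ, x, y) + (α/n)‖∇X:,j L(fθ+δ, x, y)‖p.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Gradient of the loss w.r.t. the inputs, kept in the graph
    # (create_graph=True) so the penalty can be backpropagated into θ+δ.
    grad_x, = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad_x[:, j].norm(p) / x.shape[0]  # p-norm over target column j
    total = loss + alpha * penalty
    opt.zero_grad(); total.backward(); opt.step()
    return total.item()

Fine-tuning the trained model with steps of this loss yields fθ+δ: α trades off prediction fidelity against how strongly the loss gradient on feature j, and hence its attribution, is suppressed.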
Results (Importance Ranking)
(Figure: importance ranking of the target feature under fθ vs. fθ+δ.)
Our adversarial explanation attack:
1. Significantly decreases relative importance.
2. Generalizes to test points.
3. Transfers across explanation methods.
Findings
• Little change in accuracy, but the difference in outputs is detectable

• Low attribution achieved with respect to multiple explanation methods

• High unfairness across multiple fairness metrics (compared to holding the feature constant)
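For the last bullet, one common group-fairness metric is the demographic parity gap, sketched below under the assumption of a binary classifier and a binary sensitive attribute s (my illustration; the paper compares several metrics).

import torch

def demographic_parity_gap(model, x, s):
    # |P(yhat = 1 | s = 0) - P(yhat = 1 | s = 1)|; a larger gap is less fair.
    with torch.no_grad():
        preds = model(x).argmax(dim=1).float()
    return (preds[s == 0].mean() - preds[s == 1].mean()).abs().item()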
This Talk
1. How are existing approaches to explainability
used in practice? Only used by developers 

2. Can existing explainability tools be used to ensure
model fairness? Not in their current form

3. How can we create explainability tools for external
stakeholders?
Machine Learning Explainability
for External Stakeholders
Umang Bhatt, McKane Andrus, Adrian Weller, and Alice Xiang
Convening hosted by CFI, PAI, and IBM

Appeared at the ICML 2020 Workshop on Extending Explainable AI: Beyond Deep Models and Classifiers

https://arxiv.org/abs/2007.05408
Overview
• 33 participants from 5 countries

• 15 ML experts, 3 designers, 6 legal experts, 9 policymakers

• Domain expertise: Finance, Healthcare, Media, and Social Services

• Goal: facilitate an inter-stakeholder conversation around explainable
machine learning
Two Takeaways
1. There is a need for community engagement in the development of
explainable machine learning.

2. There are nuances in the deployment of explainable machine learning.
Community Engagement
1. In which context will this explanation be used? Does the context change
the properties of the explanations we expose?
2. How should the explanation be evaluated? Both quantitatively and
qualitatively…
3. Can we prevent data misuse and preferential treatment by involving
affected groups in the development process?
4. Can we educate external stakeholders (and data scientists) regarding the
functionalities and limitations of explainable machine learning?
Deploying Explainability
1. How does uncertainty in the model and introduced by the (potentially
approximate) explanation technique affect the resulting explanations?
2. How can stakeholders interact with the resulting explanations? Can
explanations be a conduit for interacting with the model?
3. How, if at all, will stakeholder behavior change as a result of the
explanation shown?
4. Over time, how will the explanation technique adapt to changes in
stakeholder behavior?
This Talk
1. How are existing approaches to explainability
used in practice? Only used by developers

2. Can existing explainability tools be used to ensure
model fairness? Not in their current form

3. How can we create explainability tools for external
stakeholders? Community engagement and
thoughtful deployment
Challenges in Deploying
Explainable Machine Learning
Umang Bhatt
usb20@cam.ac.uk
@umangsbhatt
Thanks for listening!
