Rsqrd AI: A Survey of The Current Ecosystem of Explainability Techniques

In this talk, Coco Sack surveys current explainability techniques in the field.

Presented on 08/21/2019

**These slides are from a talk given at Rsqrd AI. Learn more at rsqrdai.org**

  1. DECIDING IN THE DARK: A Survey of The Current Ecosystem of Explainability Techniques. By Coco Sack
  2. WHO AM I? ◂ AI2 Incubator Intern ◂ Yale freshman (Computer Science & Psychology major) ◂ Child of the AI Boom ◂ As of late, an “Explainability Expert”
  3. (image-only slide)
  4. WHO CARES ABOUT EXPLAINABILITY? DATA SCIENTISTS, to debug... POLITICIANS (GDPR: “a right to explanation”), to protect the public... EXECUTIVES (76% said lack of transparency was seriously impeding adoption), to sell... CONSUMERS, to trust...
  5. EXPLAINABILITY: A HOT TOPIC
  6. ??? (image-only slide)
  7. (image-only slide)
  8. (image-only slide)
  9. DATA ANALYSIS ◂ Data must be complete, unique, credible, accurate, consistent, and unbiased ◂ Errors or duplicates can undermine the model’s performance ◂ Outdated bias in the training data can re-entrench discrimination. “You are saying, ‘Here’s the data, figure out what the behavior is.’ That is an inherently fuzzy and statistical approach. The real challenge of deep learning is that it’s not modeling, necessarily, the world around it. It’s modeling the data it’s getting. And that modeling often includes bias and problematic correlations.” -- Sheldon Fernandez, CEO of DarwinAI (a minimal data-quality check is sketched after this transcript)
  10. INTRINSICALLY INTERPRETABLE MODELS: 1. Regression 2. Additive 3. Tree Graphs 4. Decision Rules (a decision-tree example follows after this transcript)
  11. OUTPUT ANALYSIS TECHNIQUES
  12. WHAT KIND OF MODELS DO THEY WORK ON?
  13. WHAT KIND OF MODELS DO THEY WORK ON?
  14. WHAT DOES IT OUTPUT?
  15. WHAT DOES IT OUTPUT?
  16. LOCAL VS. GLOBAL?
  17. OTHER EXPLAINABILITY RESOURCES. General surveys of techniques: A Survey of Methods For Explaining Black Box Models (2018); Visual Analytics in Deep Learning (2018); Explaining Explanations (2019); Peeking Inside the Black Box (2018). Books and presentations: Interpretable Machine Learning (by Christoph Molnar, 2019); XAI (by Dave Gunning, 2017); Explaining Explanations (2019). Unique technical papers: Generative Synthesis (Wong et al.); Golden Eye++ (Hendricks et al.); Grad-CAM (Selvaraju et al.); DeepLIFT (Shrikumar et al.); T-CAV (Kim et al.). Ethical/political reports: “Computer Says No” (by Ian Sample); “Why We Need to Open the Black Box” (by AJ Abdallat); “The Importance of Interpretable Machine Learning” (by DJ Sarkar)
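
Slide 9's checklist (complete, unique, consistent, unbiased) maps onto a few one-line checks in pandas. A minimal sketch, assuming a hypothetical training_data.csv with hypothetical group and label columns:

```python
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical file name

# Completeness: share of missing values per column
print(df.isna().mean())

# Uniqueness: duplicated rows that would over-weight some examples
print(df.duplicated().sum(), "duplicate rows")

# Bias check: outcome rate per group for a hypothetical sensitive attribute
print(df.groupby("group")["label"].mean())
```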
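
Slide 10's tree graphs and decision rules are the classic case of intrinsic interpretability: the fitted model is its own explanation. A minimal sketch with scikit-learn, whose export_text prints a learned tree as human-readable if/else rules:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow tree stays small enough for a human to read end to end
data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text renders the learned decision rules directly; no explainer needed
print(export_text(tree, feature_names=data.feature_names))
```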

Editor's Notes

  • INTRINSICALLY INTERPRETABLE: the most straightforward way to achieve interpretability is to design an algorithm or model that is intrinsically interpretable, meaning it is naturally human-interpretable thanks to its simple and intuitive structure.

    DEEP-EXPLANATION: altering deep learning models to create secondary systems that are trained to generate explanations of the first (a loose global-surrogate illustration of this idea is sketched below).

    OUTPUT ANALYSIS: finally, output-focused methods analyze an opaque machine learning model after it is trained, based on its outputs. These techniques are “post hoc,” meaning they are applied after training a black-box model, which makes them more general, flexible, and widely applicable. Some of these techniques are designed for specific kinds of models, while others are truly agnostic and can be applied to any machine learning model (a model-agnostic example, permutation importance, is sketched below).
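
As a loose illustration of the deep-explanation idea above (a secondary system trained to explain the first), here is a minimal global-surrogate sketch: an interpretable tree is fit to the black box's predictions rather than to the true labels. The dataset and model choices are illustrative, not taken from the talk.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "black box": an opaque neural network trained on the real labels
black_box = MLPClassifier(max_iter=1000, random_state=0).fit(X, y)

# The "secondary system": an interpretable tree trained to mimic the
# black box's predictions, so its rules describe the black box's behavior
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=list(data.feature_names)))
```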
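
And a minimal sketch of post-hoc, model-agnostic output analysis using scikit-learn's permutation importance. Again, the dataset and model are only placeholders; the technique applies to any fitted estimator.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data; a large accuracy drop means the
# model leans heavily on that feature
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

names = load_breast_cancer().feature_names
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{names[i]}: {result.importances_mean[i]:.3f}")
```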