
Rsqrd AI: Recent Advances in Explainable Machine Learning Research


  1. Recent advances in explainable machine learning research. Bernease Herman, Research Scientist, University of Washington eScience Institute and Paul G. Allen School of Computer Science & Engineering. June 6, 2019, Allen Institute for Artificial Intelligence.
  2. Recent advances in explainable machine learning research: speed run edition.
  3. eScience Institute, University of Washington.
  4. Personal research interests in ML: data curation, evaluation, and interpretation. (Modified graphic; original from Alice Zheng.)
  5. Data curation: how do we choose and collect the dataset?
  6. Evaluation: how do we check the performance of ML?
  7. Model interpretation: what’s happening on the inside?
  8. Interpretable vs. explainable.
  9. Defining model explainability. There is no single definition within the community: “component of interpretable modeling process informing how model works in understandable form” (me); “process of giving explanations [of ML] to humans” (Kim & Doshi-Velez 2017); and a formal definition of interpretability that extends beyond humans (Dhurandhar et al. 2017).
  10. Explanations come in many forms: linear regression models (with a certain number of parameters); decision trees or similar (with a certain depth/number of parameters); text explanations; visualizations (e.g., saliency maps); and more.
  11. Explanations come in many forms: “explanation vehicles” (Herman 2017).
  12. Three levels of transparency (from “The Mythos of Model Interpretability”, Lipton 2016): (1) simulatability, comprehending the entire model at once (model complexity); (2) decomposability, comprehending the individual components/parameters (intelligibility); (3) algorithmic transparency, comprehending the algorithm’s behavior (loss surface and randomness).
  13. Interpretability methods (with a focus on explanations).
  14. Survey of interpretable methods, heavily borrowed from the Kim & Doshi-Velez ICML 2017 tutorial: (1) fitting new models that are intrinsically interpretable; (2) post-hoc analysis of an existing model; (3) interpretable analysis of raw data (or model architecture).
  15. Inherently interpretable models (fitting new models that are intrinsically interpretable): decision trees, rule lists, and rule sets; generalized linear models (and feature manipulation); case-based methods; sparsity-based methods; monotonicity-based methods; conceptual and hierarchical models.
  16. Inherently interpretable models: decision trees, rule lists, and rule sets; generalized linear models (and feature manipulation). (Table from Gehrke et al. 2012.)
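The rule lists mentioned on this slide lend themselves to a minimal sketch. Everything below (feature names, thresholds, labels) is illustrative and not from the talk: a rule list is an ordered sequence of if/then rules, read top to bottom until one fires, so every prediction comes with a human-readable reason.

```python
# Minimal rule-list classifier sketch: an ordered list of (condition, label)
# rules, evaluated top to bottom; the first matching rule decides the output.
# Feature names and thresholds are hypothetical, chosen only for illustration.

def predict(rule_list, default, x):
    """Return the label of the first rule whose condition holds for x."""
    for condition, label in rule_list:
        if condition(x):
            return label
    return default

# Each rule is human-readable: it can be printed as an if/then sentence.
rules = [
    (lambda x: x["age"] < 25, "high_risk"),
    (lambda x: x["income"] > 80_000, "low_risk"),
]

print(predict(rules, "medium_risk", {"age": 22, "income": 10_000}))  # high_risk
print(predict(rules, "medium_risk", {"age": 40, "income": 90_000}))  # low_risk
print(predict(rules, "medium_risk", {"age": 40, "income": 30_000}))  # medium_risk
```

The interpretability comes from the model form itself: the decision path for any input is a single short sentence.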
  17. Inherently interpretable models: monotonicity constraints; conceptual and hierarchical models. (Figure from Gupta et al. 2016.)
  18. Post-hoc analysis of an existing model: sensitivity analysis; surrogate models; gradient-based methods; hidden-layer investigations.
  19. Post-hoc methods for existing models: sensitivity analysis (Patrick Hall et al. 2017, O’Reilly blog).
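The sensitivity-analysis idea can be sketched as: perturb one input at a time and watch how much the output moves. The toy black-box model and perturbation size below are assumptions for illustration only.

```python
# Sensitivity analysis sketch: nudge each feature of a single input by a
# small delta and measure the change in the model's output. The "model" here
# is a toy stand-in for any trained black-box predictor.

def model(x):
    # Toy black box: feature 0 matters much more than feature 1.
    return 3.0 * x[0] + 0.1 * x[1]

def sensitivities(model, x, delta=1e-4):
    base = model(x)
    scores = []
    for i in range(len(x)):
        x_pert = list(x)
        x_pert[i] += delta
        # Normalized output change: large score = sensitive feature.
        scores.append(abs(model(x_pert) - base) / delta)
    return scores

print(sensitivities(model, [1.0, 1.0]))  # feature 0 dominates
```

For a linear model these scores recover the coefficient magnitudes; for a nonlinear one they describe local behavior around the chosen input.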
  20. Post-hoc methods for existing models: surrogate models. (Figures from Ribeiro et al. 2016, LIME.)
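A rough LIME-style sketch (not the actual LIME library; the black-box function, sampling scale, and sample count are assumptions): sample points near one instance, query the black box, and fit a local linear surrogate whose coefficients serve as the explanation. Real LIME additionally weights samples by a proximity kernel; this sketch simply samples locally.

```python
import random

# Local surrogate sketch in the spirit of LIME: explain one prediction of a
# black-box model with a linear model fit on perturbations around the input.

def black_box(x):
    return x[0] * x[0] + 2.0 * x[1]  # nonlinear in x[0], linear in x[1]

def solve(a, b):
    """Solve the linear system a @ x = b by Gauss-Jordan elimination."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def local_surrogate(f, x, n_samples=500, sigma=0.1, seed=0):
    """Fit y ~ b0 + b1*dx0 + b2*dx1 on Gaussian perturbations around x."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n_samples):
        dx = [rng.gauss(0.0, sigma) for _ in x]
        xs.append([1.0] + dx)  # intercept column + feature offsets
        ys.append(f([xi + di for xi, di in zip(x, dx)]))
    k = len(xs[0])
    # Ordinary least squares via the normal equations: (X^T X) beta = X^T y.
    xtx = [[sum(r[i] * r[j] for r in xs) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * y for r, y in zip(xs, ys)) for i in range(k)]
    return solve(xtx, xty)

coefs = local_surrogate(black_box, [3.0, 1.0])
print(coefs)  # local slope for x[0] is near 2*3 = 6, for x[1] near 2
```

The surrogate is only locally faithful: at a different instance, the fitted slope for `x[0]` would change, which is exactly the point of a local explanation.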
  21. Post-hoc methods for existing models: surrogate models (Lundberg & Lee 2017, SHAP).
  22. Post-hoc methods for existing models: surrogate models, continued (Lundberg et al. 2019).
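SHAP's attributions approximate Shapley values, which for a tiny model can be computed exactly by brute force. The model and single-point baseline below are illustrative assumptions (SHAP itself averages over a background dataset rather than one baseline):

```python
from itertools import combinations
from math import factorial

# Exact Shapley-value sketch for a toy model, illustrating what SHAP
# approximates efficiently. Features absent from a coalition are replaced by
# a baseline value (a simplification of SHAP's background-data averaging).

def model(x):
    return 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[0] * x[1]

def shapley_values(f, x, baseline):
    n = len(x)

    def value(coalition):
        # Evaluate f with only the coalition's features set to x's values.
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Classic Shapley weight: |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (value(set(subset) | {i}) - value(set(subset)))
        phis.append(phi)
    return phis

phi = shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
print(phi)       # per-feature attributions: [2.25, 1.25]
print(sum(phi))  # efficiency: sums to f(x) - f(baseline) = 3.5
```

The interaction term `0.5 * x[0] * x[1]` is split evenly between the two features, which is the fairness property that makes Shapley values attractive as explanations.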
  23. Post-hoc methods for existing models: gradient-based methods (Selvaraju et al. 2017, Grad-CAM).
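Gradient-based saliency can be sketched with finite differences; Grad-CAM proper backpropagates through a convolutional network, so the toy function here is only a stand-in to show the underlying idea of using |gradient| as a per-feature saliency score.

```python
# Gradient-based saliency sketch: approximate d(output)/d(input_i) with a
# central finite difference and treat the gradient magnitude as saliency.
# The toy model stands in for a differentiable network.

def model(x):
    return x[0] ** 2 + 0.01 * x[1]

def saliency(f, x, eps=1e-5):
    grads = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        grads.append(abs(f(hi) - f(lo)) / (2 * eps))  # central difference
    return grads

print(saliency(model, [2.0, 2.0]))  # |grad| close to [4.0, 0.01]
```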
  24. Interpretable analysis of raw data: visualization; variable importance; partial dependence plots; correlation analysis.
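Partial dependence plots average the model's prediction over the data while sweeping one feature across a grid. A minimal sketch, with a toy model and dataset as assumptions:

```python
# Partial dependence sketch: for each grid value of one feature, force that
# feature to the value in every row of the dataset and average the model's
# predictions. The model and data are toy stand-ins.

def model(x):
    return 2.0 * x[0] + x[1]

def partial_dependence(f, data, feature, grid):
    pd = []
    for v in grid:
        preds = []
        for row in data:
            row2 = list(row)
            row2[feature] = v  # force the feature to the grid value
            preds.append(f(row2))
        pd.append(sum(preds) / len(preds))
    return pd

data = [[0.0, 1.0], [1.0, 3.0], [2.0, 5.0]]
print(partial_dependence(model, data, feature=0, grid=[0.0, 1.0, 2.0]))
# [3.0, 5.0, 7.0]: the PD curve's slope recovers feature 0's coefficient 2.0
```

Plotting the returned values against the grid gives the familiar PD curve; a flat curve suggests the feature has little marginal effect on the average prediction.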
  25. Visualizations for AutoML (Wang et al. 2019).
  26. Do interpretability and explainability methods always work?
  27. Explanations can be persuasive (Lipton 2016; Herman 2017). When tailoring our model explanations to human preferences and judgments, our models may learn to prioritize persuasive explanations over introspective ones.
  28. Potentially unfaithful explanations (Hendricks et al. 2016).
  29. Visual saliency explanation methods (Adebayo et al. 2018).
  30. Even random labels don’t stop them (Adebayo et al. 2018).
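The Adebayo et al. sanity check can be mimicked in miniature (toy linear "model", finite-difference saliency, illustrative weights): randomize the model's weights and verify the saliency map actually changes. A method whose map survives weight randomization is not explaining the learned model.

```python
import random

# Miniature model-randomization sanity check in the spirit of Adebayo et al.
# 2018: compare saliency for "trained" weights against a weight-randomized
# copy. A faithful saliency method should produce a visibly different map.

def saliency(weights, x, eps=1e-5):
    def f(z):
        return sum(w * zi for w, zi in zip(weights, z))
    grads = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        grads.append(abs(f(hi) - f(lo)) / (2 * eps))
    return grads

rng = random.Random(0)
trained = [10.0, 0.1, 0.1]                        # "trained" weights (toy)
randomized = [rng.gauss(0, 1) for _ in trained]   # weight-randomized copy

s_trained = saliency(trained, [1.0, 1.0, 1.0])
s_random = saliency(randomized, [1.0, 1.0, 1.0])
changed = max(abs(a - b) for a, b in zip(s_trained, s_random)) > 0.5
print(changed)  # True with this seed: the saliency map responds to randomization
```

For this simple saliency method the check passes by construction; Adebayo et al.'s point is that several popular saliency methods fail the analogous test on deep networks.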
  31. Background reading: “How AI detectives are cracking open the black box of deep learning”, Science Magazine, July 2017.
  32. Background reading: “Ideas on interpreting machine learning”, O’Reilly Ideas, March 2017.
  33. Thank you! Let’s discuss this more. Bernease Herman, bernease@uw.edu, @bernease on Twitter, GitHub, MSDSE Slack, everything.
  34. Splitting model form from simplicity (Herman 2017). The model is simultaneously coerced into a suitable form (e.g., a decision tree) and reduced in complexity (e.g., model size), which is difficult to evaluate across complexity preferences.
  35. Splitting model form from simplicity (Herman 2017). Keeping the model form and the reduction of complexity separate improves evaluation and adaptability.
