Predictive models are often used to identify individuals who are likely to experience escalating health severity and to deliver appropriate interventions accordingly. For clinicians and care managers, however, these models often act as a black box at the individual level: they typically combine complex algorithms, which makes it hard to explain the reasoning behind a model score for any one person. This talk focuses on model- and feature-agnostic methodologies and techniques that help uncover the drivers behind an individual prediction in a healthcare setting.
2. Pipeline
DATA
• Demographic
• Medical Claims
• Pharmacy Claims
• Lab & Test Results
• Welcome Calls
• Health Programs
• Consumer Data

PREDICTOR CATEGORIES
• Demographics: e.g., age, sex, disability
• Clinical: e.g., chronic conditions, BH, hospitalization, screenings
• Behavioral: e.g., lifestyle, health programs
• Medication: e.g., asthma, diabetes, heart failure, BH

MODELING CHOICES
1. Data Extraction
2. Feature Engineering and Transformation: over 4,000 potential features were used for development
3. Model Development: selection of the most important predictors; different modeling techniques and combinations were explored, including Branching, Linear Regression, Decision Trees, Neural Networks, Least Angle Regression, and Ensembles

Output: the BH Severity Predictive Model
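As a rough illustration of the model-development step, candidate techniques can be compared with cross-validation. This is a minimal sketch on synthetic data using scikit-learn stand-ins (`Lars` for least angle regression, gradient boosting for the ensemble); the data, models, and settings are illustrative assumptions and do not reflect Humana's actual pipeline.

```python
# Sketch: comparing candidate modeling techniques via cross-validation.
# Synthetic data stands in for the real feature matrix and severity target.
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression, Lars
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

candidates = {
    "linear_regression": LinearRegression(),
    "least_angle_regression": Lars(),
    "decision_tree": DecisionTreeRegressor(max_depth=5, random_state=0),
    "ensemble": GradientBoostingRegressor(random_state=0),
}

# Mean R^2 over 5 folds for each candidate
scores = {name: cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          for name, model in candidates.items()}
for name, r2 in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:24s} R^2 = {r2:.3f}")
```

In practice the comparison would use the engineered claims features and a clinically meaningful target, with the winning technique (or a combination) promoted to the severity model.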
3. Application
The BH severity score can be paired with a
medical score (Charlson Comorbidity Index)
to create an analytic framework called the
Behavioral Health Quadrants, allowing:
• Better understanding of the relative
severity of medical & behavioral health
in an individual
• Greater ability to direct individuals to
the right level of care
28% of Medicare Advantage participants
had sufficient BH utilization to be assigned
to a BH quadrant.
Medicare Advantage BH Quadrants*
[Figure: 2x2 quadrant chart; horizontal axis: Medical Health Risk/Complexity (Less to More); vertical axis: Behavioral Health Risk/Complexity (Less to More)]
• Severe BH + Severe Med
• Severe BH + Mild-Moderate Med
• Severe Med + BH At Risk
• BH At Risk + Mild-Moderate Med
*Adapted from the Four Quadrant Clinical Integration Model (National Council for Behavioral Health). Values as of Jan 2016. Based on membership as of Dec 2015 and claims Jan 2015 – Dec 2015.
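A minimal sketch of how the two scores could map a member to one of the four quadrants. The cutoff values, function name, and example scores below are hypothetical placeholders for illustration, not Humana's actual thresholds.

```python
# Sketch: assigning a member to a Behavioral Health Quadrant from two scores.
# MED_CUTOFF and BH_CUTOFF are illustrative, not actual clinical thresholds.
MED_CUTOFF = 3    # Charlson Comorbidity Index at or above this = "Severe Med"
BH_CUTOFF = 0.5   # BH severity model score at or above this = "Severe BH"

def bh_quadrant(charlson_index: float, bh_score: float) -> str:
    """Map a medical score and a BH severity score to a quadrant label."""
    severe_med = charlson_index >= MED_CUTOFF
    severe_bh = bh_score >= BH_CUTOFF
    if severe_bh and severe_med:
        return "Severe BH + Severe Med"
    if severe_bh:
        return "Severe BH + Mild-Moderate Med"
    if severe_med:
        return "Severe Med + BH At Risk"
    return "BH At Risk + Mild-Moderate Med"

print(bh_quadrant(charlson_index=5, bh_score=0.8))  # Severe BH + Severe Med
```

The quadrant label can then drive routing decisions, e.g., which care program or level of clinical outreach a member is directed to.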
4. Agenda
01 | An Example
02 | Why are interpretable models necessary?
03 | Taxonomy of interpretable models
04 | LIME Intuition
05 | LIME Recipe
06 | Pros and Cons
07 | Humana Use Cases
5. An Example: Husky vs Wolf Classifier
[Figure: a photo the classifier must label as either "Husky" or "Wolf"]
• References to the above picture are at the end of the presentation
• https://arxiv.org/pdf/1602.04938.pdf
6. Why are interpretable models necessary?
• Human Curiosity: satisfy the natural human curiosity to find explanations for unexpected events
• Trust: build enough trust in the model that it is accepted and used in the real world
• Detecting Bias: uncover unethical discrimination by models against certain factors
• Identify Leaks: high accuracy can sometimes be due to leakage from the target
• Prescriptive Analytics: model explanations drive/prescribe follow-up action items
• Audit & Improve: explanations serve as a debugging tool to continuously audit and improve the model
https://christophm.github.io/interpretable-ml-book/lime.html
https://arxiv.org/pdf/1602.04938v1.pdf
7. Taxonomy of interpretable models
• Intrinsic vs Post-hoc: interpretability built into the model itself (e.g., a shallow decision tree) vs methods applied to a model after training
• Model-Specific vs Model-Agnostic: explanations tied to one model class vs techniques applicable to any model
• Local vs Global: explaining a single prediction vs explaining the model's overall behavior
https://christophm.github.io/interpretable-ml-book/lime.html
https://arxiv.org/pdf/1602.04938v1.pdf
8. LIME : Intuition
The key intuition behind LIME is that it is much easier to approximate a black-box model with a simple
model locally (in the neighborhood of the prediction we want to explain) than to approximate
the model globally.
https://christophm.github.io/interpretable-ml-book/lime.html
https://arxiv.org/pdf/1602.04938v1.pdf
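This intuition can be seen numerically: a straight line fits a nonlinear function far better over a small neighborhood than over its whole range. A minimal sketch, using `sin` as a stand-in for a black-box model's response:

```python
# Sketch: local vs global linear approximation of a nonlinear function.
import numpy as np

f = np.sin   # stand-in for a black-box model's response surface
x0 = 1.0     # the point (prediction) we want to explain

# Best linear fit in a small neighborhood of x0
x_local = np.linspace(x0 - 0.1, x0 + 0.1, 100)
local_fit = np.polyfit(x_local, f(x_local), deg=1)
local_err = np.abs(np.polyval(local_fit, x_local) - f(x_local)).max()

# Best linear fit over a wide range
x_global = np.linspace(-3, 3, 100)
global_fit = np.polyfit(x_global, f(x_global), deg=1)
global_err = np.abs(np.polyval(global_fit, x_global) - f(x_global)).max()

print(f"local max error:  {local_err:.4f}")
print(f"global max error: {global_err:.4f}")
```

The local fit's error is orders of magnitude smaller, which is exactly why LIME trades a single global explanation for many faithful local ones.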
9. LIME Recipe
1. Select the instance of interest for which you want an explanation of its black-box prediction
2. Perturb your dataset
3. Get the black-box predictions for these new points
4. Weight the new samples according to their proximity to the instance of interest
5. Train a weighted, interpretable model on the dataset with the variations
6. Explain the prediction by interpreting the local model
[Diagram: a black-box model M maps X to Y; a proximity-weighted (W) interpretable surrogate, e.g., a simple linear model or decision tree, surfaces the TOP DRIVERS]
https://christophm.github.io/interpretable-ml-book/lime.html
https://arxiv.org/pdf/1602.04938v1.pdf
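The six-step recipe can be sketched from scratch with NumPy and scikit-learn. Here a random forest stands in for an arbitrary black-box model; the synthetic data, Gaussian kernel width, and choice of a ridge surrogate are all illustrative assumptions rather than the canonical LIME implementation.

```python
# Sketch: the six-step LIME recipe, implemented from scratch.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# 1. Select the instance of interest
x0 = X[0]

# 2. Perturb the dataset around the instance
Z = x0 + rng.normal(scale=X.std(axis=0), size=(1000, X.shape[1]))

# 3. Get the black-box predictions for the perturbed points
pz = black_box.predict_proba(Z)[:, 1]

# 4. Weight the samples by proximity to the instance (Gaussian kernel)
dist = np.linalg.norm(Z - x0, axis=1)
weights = np.exp(-(dist ** 2) / (2 * dist.std() ** 2))

# 5. Train a weighted, interpretable (linear) surrogate
surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=weights)

# 6. Interpret: largest-magnitude coefficients are the local top drivers
order = np.argsort(-np.abs(surrogate.coef_))
for i in order[:3]:
    print(f"feature {i}: local weight {surrogate.coef_[i]:+.3f}")
```

In practice one would use the `lime` package (referenced above) rather than a hand-rolled version, but the mechanics are the same: perturb, predict, weight, fit, interpret.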
11. Humana Use Cases
Goal: produce the 5 most clinically relevant health drivers per member and display them for the clinician at time of referral.
• Smoothen Outreach: Humana clinicians are equipped with talking points covering the top drivers of our members' health, smoothing an initial outreach
• Identify Top Barriers: interpretable models help identify the top barriers that impede a person's Disease Specific Best Practices
• Focused Interventions: identifying the barriers helps clinicians focus on addressing important and relevant health needs
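Selecting and displaying a member's top drivers might look like the following sketch. The feature names, local weights, and the `top_drivers` helper are entirely hypothetical, for illustration only.

```python
# Sketch: turning local explanation weights into clinician talking points.
# Feature names and weights are hypothetical placeholders.
def top_drivers(weights: dict, k: int = 5) -> list:
    """Return the k features with the largest absolute local weight."""
    ranked = sorted(weights.items(), key=lambda kv: -abs(kv[1]))
    return [f"{name} ({w:+.2f})" for name, w in ranked[:k]]

member_weights = {
    "recent_inpatient_stay": 0.42,
    "antidepressant_adherence": -0.31,
    "chronic_pain_dx": 0.27,
    "missed_pcp_visits": 0.19,
    "age": 0.05,
    "flu_shot": -0.02,
}
for line in top_drivers(member_weights):
    print(line)
```

A display like this, surfaced at time of referral, is what equips the clinician with concrete, member-specific talking points.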