10. Where To Start?
Understand The Problem: 3 Important Questions
› What is the problem we are trying to solve?
› Are there business constraints?
› What does a successful solution look like?
11. Let The Fun Begin Step 0
Define The Problem Spaces
› People can’t find relevant coursework and therefore they are not starting courses
› People find relevant courses, but the content of each course does not meet their expectations
› People can’t understand how to use our website, and that’s why they are not starting courses
12. Great, But What’s Next?
Peel the onion:
› Is this an ML problem?
› What is the project value?
› What is the expectation?
13. Let The Fun Begin Vol. 1
Define The Hypotheses
Secret #1: Don’t Do It In A Dark Room On Your Own
Involve Subject Matter Experts
(Business, Engineering, Data Science, Product)
Define hypotheses that can solve the problem
ML is different from traditional software - it is inherently experimental
15. Let The Fun Begin Vol. 3
Define Your Metrics
Secret #2: Don’t do it in a dark room on your own
› Click through rate
› First course completion rate
› Sponsored content click through rate
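As a sketch, metrics like these can be computed straight from an event log. The event names and schema below (`impression`, `click`, `course_completed`) are illustrative assumptions, not the deck's actual data model:

```python
# Hypothetical event log: one dict per user interaction.
events = [
    {"user_id": 1, "event": "impression"},
    {"user_id": 1, "event": "click"},
    {"user_id": 2, "event": "impression"},
    {"user_id": 3, "event": "impression"},
    {"user_id": 3, "event": "click"},
    {"user_id": 3, "event": "course_completed"},
]

def rate(events, numerator_event, denominator_event):
    """Ratio of numerator events to denominator events across the log."""
    num = sum(1 for e in events if e["event"] == numerator_event)
    den = sum(1 for e in events if e["event"] == denominator_event)
    return num / den if den else 0.0

ctr = rate(events, "click", "impression")               # click through rate
completion = rate(events, "course_completed", "click")  # first course completion rate
```

Defining each metric as an explicit numerator/denominator pair like this makes it much easier to agree on with the business stakeholders from Secret #2.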
17. Your First Model Will Fail
Before model experimentation starts, there is uncertainty around how effectively the business
problem can be solved
› Should we expect a 2% lift in click through rate or 20%?
Uncertainty around which data will be predictive
› Example hypothesis: student’s geographic location will help predict relevant courses
› Many hypotheses won’t pan out!
Model experimentation process should be optimized for rapid iteration
› Input data + modeling approach = success?
› Don’t over polish a solution that isn’t guaranteed to succeed
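The rapid-iteration idea above can be sketched as a cheap loop over feature hypotheses before any real modeling investment. Everything below (the student fields, the synthetic labels, the majority-vote scorer) is made up for illustration:

```python
import random
from collections import defaultdict

random.seed(0)

# Synthetic students: two hypothesized predictors of clicking a recommended course.
students = [
    {"region": random.choice(["NA", "EU"]), "prior_courses": random.randint(0, 5)}
    for _ in range(200)
]
# In this toy world, prior course count drives clicks and region does not.
for s in students:
    s["clicked"] = 1 if s["prior_courses"] >= 3 else 0

def accuracy_of_single_feature(rows, feature):
    """Score a hypothesis cheaply: predict the majority label per feature value."""
    counts = defaultdict(lambda: [0, 0])
    for r in rows:
        counts[r[feature]][r["clicked"]] += 1
    majority = {v: (1 if c[1] >= c[0] else 0) for v, c in counts.items()}
    hits = sum(1 for r in rows if majority[r[feature]] == r["clicked"])
    return hits / len(rows)

# Rank the hypotheses before investing in real feature pipelines.
results = {f: accuracy_of_single_feature(students, f) for f in ["region", "prior_courses"]}
```

A hypothesis that cannot beat a trivial scorer like this probably does not deserve a polished pipeline yet, which is exactly the "don't over polish" point.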
18. How Will I Know If My Model Works?
In A Perfect World, Get Model Predictions In Front Of Real Customers & Measure Business Impact
A/B Testing: split customers between treatment (ML powered) and control (status quo)
› Large engineering investment (a reliable production-ready solution is required)
› Risk to business (can we afford a significant drop in business metrics in our test population?)
› Slow and difficult to get conclusive results (if you have a small user base)
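One common way to do the treatment/control split is deterministic hashing of user IDs, so assignment is stable across sessions and independent between experiments. The function and experiment names below are illustrative assumptions, not a prescribed platform:

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a user to treatment or control.

    Hashing user_id together with the experiment name keeps the split
    stable per user and uncorrelated across experiments. (Sketch only;
    real platforms add exposure logging, holdouts, and ramp-up.)
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

assignments = [assign_bucket(str(uid), "ml_recs_v1") for uid in range(10_000)]
share = assignments.count("treatment") / len(assignments)
```

The same user always lands in the same bucket, which is what makes the eventual metric comparison between the ML-powered and status-quo populations valid.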
Is there a lower cost and lower risk alternative for rapid model evaluation?
Yes! Evaluate model performance on historical data
› Commonly referred to as offline testing or backtesting
19. Estimating Performance On Historical Data
Can you directly measure the business impact metric with different model
approaches on past data?
› If yes, do this!
What if it is infeasible to measure business metric on historical data?
› Are there proxy metrics that are well correlated with the critical business impact metrics?
• Example: Click through rate -> how high did the model rank items that were clicked
Remember to compare performance against a reasonable heuristic!
› Does my model generate more clicks than content sorted in order of general popularity?
Don’t over-optimize “offline” performance. Get in front of real users!
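The proxy metric above ("how high did the model rank items that were clicked") can be sketched as mean reciprocal rank on historical sessions, compared against the popularity heuristic. All course IDs, model scores, and popularity counts below are made up for illustration:

```python
def mean_reciprocal_rank(sessions, ranker):
    """Average of 1/rank of the clicked course under a given ranking.

    Each session is (candidate course ids, clicked course id). A ranker
    that puts clicked items first scores near 1.0.
    """
    total = 0.0
    for candidates, clicked in sessions:
        ranking = ranker(candidates)
        total += 1.0 / (ranking.index(clicked) + 1)
    return total / len(sessions)

# Toy historical sessions.
sessions = [
    (["intro_py", "stats101", "ml_basics"], "ml_basics"),
    (["intro_py", "ml_basics", "sql"], "ml_basics"),
    (["stats101", "sql", "intro_py"], "intro_py"),
]

# The reasonable heuristic: rank by general popularity.
popularity = {"intro_py": 3, "stats101": 2, "sql": 2, "ml_basics": 1}
def popularity_ranker(candidates):
    return sorted(candidates, key=lambda c: -popularity[c])

# Stand-in for a trained model's scores (assumed, for illustration).
model_scores = {"ml_basics": 0.9, "intro_py": 0.7, "stats101": 0.3, "sql": 0.2}
def model_ranker(candidates):
    return sorted(candidates, key=lambda c: -model_scores[c])

baseline = mean_reciprocal_rank(sessions, popularity_ranker)
model = mean_reciprocal_rank(sessions, model_ranker)
```

If `model` does not clearly beat `baseline` on historical sessions, the slide's warning applies: there is no point shipping it to real users yet.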
20. What Does It Take To Build An ML Model?
Clean, representative, and predictive input (training) data
› Clean: outliers and erroneous data need to be handled with care
› Representative: are the business scenarios where the model will add value represented in your training set?
› Predictive: talk with SMEs about what data should be sourced
The right model for the right problem. Rules of thumb:
› Tabular data: ensemble decision trees (XGBoost, LightGBM)
› Text/computer vision/audio: open-source pre-trained models
Model training is less than 10% of the total project effort!
21. Is My Model Ready For The Big Leagues?
Am I reasonably confident that my model will add value & do no harm?
› Outperforms a heuristic according to a business metric or reasonable proxy metric
Was my methodology of evaluation sound?
› Always peer review
Is my model able to be reasonably supported in a production environment?
› Overly complex models can be difficult to productionize and expensive to maintain
If so, SHIP IT!
23. OK, I Have A Model, Now What?
You Need To Figure Out The Best Way To Get It Into Production
› How “fresh” does the model need to be?
› What’s the simplest way to get it into production?
› How often do you need to create predictions?
› Who do you need to create predictions for?
If you haven’t been talking to the engineers who will use the results, start now
24. Start With Batch Predictions
Starting With Batch Predictions Allows You To Avoid All This:
› Real-time feature store (features = data used by the model to generate predictions)
› Model serving code
› Use source control
› Have others review changes
› Be consistent – use naming conventions
› Parameterize for different environments/versions
› Provide a defined interface for your model predictions
› Reduce manual input in pipelines – aim for CI/CD
Software Engineering Teams (Should!) Do This, Copy Them
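A "defined interface for your model predictions" might look like the sketch below. The class, field, and version names are assumptions for illustration, not a prescribed API:

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class Prediction:
    user_id: str
    course_id: str
    score: float
    model_version: str  # lets consumers trace every score back to a model

class CourseRecommender(Protocol):
    """The contract downstream engineers code against; any model that
    satisfies it can be swapped in without touching consumers."""
    def predict(self, user_id: str, candidates: list[str]) -> list[Prediction]: ...

class PopularityRecommender:
    """A baseline that satisfies the interface (all names are illustrative)."""
    model_version = "popularity-v1"

    def __init__(self, popularity: dict[str, int]):
        self.popularity = popularity

    def predict(self, user_id: str, candidates: list[str]) -> list[Prediction]:
        scored = (
            Prediction(user_id, c, float(self.popularity.get(c, 0)), self.model_version)
            for c in candidates
        )
        return sorted(scored, key=lambda p: -p.score)
```

Because the heuristic baseline and any future ML model implement the same interface, swapping between them for an A/B test does not require consumer changes.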
26. What Could This Look Like?
› Data scientist model work (generally a Jupyter notebook)*
› Scripts for model data generation, training, batch predictions, and storing results
› Standardized naming: model artifact, folder by environment, training date
› Run scripts on a regular time schedule or some automated trigger
› Prediction results include a model version column
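A minimal sketch of versioned batch-prediction output, assuming a CSV target; the `MODEL_VERSION` naming convention (date + model + revision) is invented for illustration:

```python
import csv
import datetime
import io

MODEL_VERSION = "2024-05-01_xgb_v3"  # illustrative: training date + model + revision

def write_predictions(rows, out):
    """Write batch predictions with a model_version column so results
    from different model versions can coexist in one table."""
    writer = csv.writer(out)
    writer.writerow(["user_id", "course_id", "score", "model_version", "predicted_at"])
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    for user_id, course_id, score in rows:
        writer.writerow([user_id, course_id, f"{score:.4f}", MODEL_VERSION, now])

buf = io.StringIO()
write_predictions([("u1", "ml_basics", 0.91), ("u2", "intro_py", 0.42)], buf)
```

Carrying the version in every row is what makes later offline comparisons ("did v3 beat v2 on clicks?") possible without guessing which model produced which score.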
27. How Can I Save Time?
› Avoid scope creep (stick to your defined problem)
› Keep it simple (don’t try to build real-time out of the gate)
› Use managed services (don’t build everything from scratch)
› Get dedicated staff involved for end-to-end support (the project should be fully staffed)
› Follow best practices (use standard model development practices)
› Use what you already have (data engineering, service tools)
› Avoid analysis paralysis (beats baseline = start test; no research-only data science)
28. What Should I Avoid Cutting Corners On?
› Get other teams involved early (upstream and downstream)
› Don’t skip on automation
› Reviews (both internal for design/code and stakeholders for progress)
› Provide an API interface
› Monitoring (did training fail? Did my predictions job run?)
29. Collaboration works wonders → business <> engineering <> data science <> product
Utilizing a structured approach to problem definition increases the likelihood of success
Even simple models can create impact that gets the business raving