This document summarizes a presentation given by Michael McCourt of SigOpt at the 2nd Annual Workshop on Offline and Online Evaluation of Interactive Systems at KDD 2019. The presentation discusses using Bayesian optimization to efficiently explore the tradeoffs between competing offline metrics. It poses the problem as a constrained multi-objective optimization in order to explore the Pareto efficient frontier, and it allows users to interactively update the constraints as the search progresses to account for changes in their goals. Various strategies for exploring the efficient frontier are discussed, including randomly or systematically varying the constraints. Applications of Bayesian optimization are highlighted, including materials and model design. Future work directions are proposed, such as better handling of black-box constraints.
Interactive Tradeoffs Between Competing Metrics with Bayesian Optimization
1. SigOpt. Confidential.
Interactive Tradeoffs Between
Competing Offline Metrics with
Bayesian Optimization
KDD 2019
2nd Annual Workshop
Online and Offline Evaluation of Interactive Systems
Michael McCourt, Research Engineer, SigOpt
About me
● Research engineer at SigOpt
● Focus on applied Bayesian optimization
● PhD from Cornell
● Avid Cleveland Cavaliers fan
About SigOpt
● Leading software solution for parameter optimization and model experimentation
● Customers in finance, trading, media, technology, consulting, energy, industry
● Free version of our solution for academia available at sigopt.com/edu
Abstract for KDD 2019
2nd Annual Workshop on Offline and Online Evaluation of Interactive Systems
Many real world applications (ML models, simulators, etc.) have multiple competing
metrics that define performance; these require practitioners to carefully consider
potential tradeoffs. However, assessing and ranking this tradeoff is nontrivial,
especially when the number of metrics is more than two. Oftentimes, practitioners
scalarize the metrics into a single objective, e.g., using a weighted sum.
In this talk, we pose this problem as a constrained multi-objective optimization
problem. By setting and updating the constraints, we can efficiently explore only the
region of the Pareto efficient frontier of the model/system of most interest. We
motivate this problem with the application of an experimental design setting, where
we are trying to fabricate high-performance glass substrates for solar cell panels.
Most Metrics are Impacted by Free Parameters
How can these free parameters be chosen?
Generally, these are chosen to yield good future performance.
• This discussion only covers offline metrics.
• Some of the elements apply in an online setting as well.
Given a computable metric defining future performance, a search can be conducted for the free parameters
yielding acceptable/optimal performance.
• In many circumstances, evaluating this performance metric is costly.
• Example: Train a classification model and evaluate a validation accuracy.
• Example: Use financial data from the past year for a trading strategy and evaluate its profit on last
month’s data.
Searching for Free Parameters Requires Efficiency
Intelligently searching a fixed domain
Many searches benefit from efficiently (actively) learning about the circumstances of the search.
• Active learning -- “Active learning is closely related to experimental design … is most often adaptive …
employs an oracle for data labelling … is usually used to learn a model for classification.” -- [Brochu et al, 2010]
Two adjacent fields of research have evolved.
• Bayesian optimization -- “Bayesian optimization is a sequential model-based approach to [optimizing a
function].” -- [Shahriari et al, 2016]
• Active search -- “Active search is an active learning setting with the goal of identifying as many
members of a given class as possible under a labeling budget.” -- [Jiang et al, 2017]
How we conduct this active learning will greatly impact efficiency of the search.
Bayesian Optimization
A graphical depiction of the iterative process
[Diagram: build a statistical model → choose a next point, repeated iteratively.]
Bayesian Optimization
Efficiently Optimize a Scalar Function
To quote [Frazier 2018]: Bayesian optimization (BayesOpt/BO) is a class of machine-learning-based
optimization methods focused on [maximizing/minimizing a function with] the following properties ...
• Typically the dimension d is less than 20.
• The objective function f is continuous, as is the domain (which is likely a d-dimensional rectangle).
• f is expensive to evaluate: e.g., time, money, access
• f is black-box: it lacks known special structure like concavity or linearity.
• When we evaluate f, we observe only f(x); that is, the optimization is gradient-free.
• f is often observed in the presence of noise.
• Our focus is on finding a global rather than local optimum.
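As an illustration of this sequential model-based loop (build a statistical model, then choose a next point), here is a minimal sketch using a Gaussian-process surrogate and the expected-improvement acquisition on a toy 1-D objective. The kernel, lengthscale, candidate grid, and toy function are illustrative assumptions, not SigOpt's implementation.

```python
import numpy as np
from math import erf

def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential kernel between 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale**2)

def gp_posterior(X, y, candidates, jitter=1e-8):
    """Posterior mean/stddev of a zero-mean GP at candidate points."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(X, candidates)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = 1.0 - np.sum(Ks * v, axis=0)  # prior variance is 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    """EI for minimization: E[max(best - f(x), 0)] under the GP posterior."""
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return (best - mu) * cdf + sigma * pdf

def f(x):
    """Stand-in for an expensive, black-box, gradient-free objective."""
    return np.sin(3.0 * x) + x**2

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 3)            # small initial design
y = f(X)
candidates = np.linspace(-1.0, 1.0, 201)
for _ in range(10):                      # the iterative BO loop
    mu, sigma = gp_posterior(X, y, candidates)  # build a statistical model
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))  # choose a next point
```

Each iteration spends one expensive evaluation where the model expects the most improvement, which is what makes the search sample-efficient.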
Many Metrics may Contribute to Success
How do we execute under these ambiguous circumstances?
Defining/measuring future performance is imprecise.
• Generally, several metrics will contribute to a sense of future performance.
• Not all metrics are equally important -- some may only need to reach a threshold.
• The feasible performance and preferred interaction between metrics may not be known a priori.
Popular multiobjective optimization strategies are often population-based (and rarely sample-efficient).
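To make the weighted-sum scalarization mentioned in the abstract concrete, here is a hedged sketch with made-up metric values (the metric names are hypothetical): collapsing two metrics into one score, then sweeping the weight to surface different tradeoff points.

```python
import numpy as np

rng = np.random.default_rng(1)
accuracy = rng.uniform(0.6, 0.95, 100)    # hypothetical metric 1 (maximize)
throughput = rng.uniform(0.1, 1.0, 100)   # hypothetical metric 2 (maximize)

def weighted_sum_pick(w):
    """Scalarize the two metrics with weight w and return the best index."""
    return int(np.argmax(w * accuracy + (1.0 - w) * throughput))

# Sweeping the weight trades one metric off against the other.
picks = {weighted_sum_pick(w) for w in np.linspace(0.0, 1.0, 11)}
```

Every pick is Pareto-efficient, but a weighted sum can only ever return points on the convex hull of the frontier, which is one reason constrained formulations can be preferable.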
Multiobjective Bayesian Optimization
Balancing competing metrics to find the Pareto frontier
Adaptations to BO to search for the efficient frontier:
• Change the problem to an active search problem [Jiang et al, 2018].
• Search for diverse points near the efficient frontier.
• Scalarize the problem with linear combinations of the metrics [Knowles, 2006].
• Define a hypervolume-based acquisition function [Hernandez-Lobato et al, 2016; Emmerich et al, 2016].
• Scalarization through prior beliefs [Astudillo, 2017].
Guiding points
• Users wanted to interactively update the search process.
• Users felt uncomfortable stating a priori preferences.
Multiobjective Bayesian Optimization
Our strategy
We apply a strategy similar to what was discussed in [Letham et al, 2019].
1. Model all metrics independently.
• Requires no prior beliefs on how metrics interact.
• Missing data is removed on a per-metric basis if a metric goes unrecorded.
2. Expose the efficient frontier through constrained scalar optimization.
• Enforce user constraints when given.
• Iterate through sub-constraints to better resolve the efficient frontier, if desired.
• Consider different regions of the frontier when parallelism is possible.
3. Allow users to change constraints as the search progresses.
• Allow the problems/goals to evolve as the user’s understanding evolves.
[Diagram annotation: the constrained scalar optimization uses a variation on expected improvement.]
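One simple way to realize step 2 can be sketched as an epsilon-constraint pass over already-observed metric values (a hedged illustration with random stand-in data, not SigOpt's code): maximize one metric subject to a threshold on the other, and sweep the threshold to trace out points on the efficient frontier.

```python
import numpy as np

rng = np.random.default_rng(2)
m1 = rng.uniform(0.0, 1.0, 200)   # observed values of metric 1 (maximize)
m2 = rng.uniform(0.0, 1.0, 200)   # observed values of metric 2 (maximize)

def epsilon_constraint_winners(m1, m2, thresholds):
    """For each threshold t, maximize m1 subject to m2 >= t."""
    winners = set()
    for t in thresholds:
        feasible = np.flatnonzero(m2 >= t)
        if feasible.size:
            winners.add(int(feasible[np.argmax(m1[feasible])]))
    return sorted(winners)

frontier_idx = epsilon_constraint_winners(m1, m2, np.linspace(0.0, 1.0, 50))
```

Each winner is Pareto-efficient, and tightening or loosening a user's constraint only re-runs the inner maximization, which is part of what makes interactive updates cheap.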
Multiobjective Bayesian Optimization
Our strategy, continued

• One strategy can be to randomly apply constraints.
• Another strategy can be to “walk” up and down the constraint domain.
• It can help to alternate which metric the constraint is imposed on.
• Users can enforce their own bounds to focus on the desired outcome.
• Users can also update their own bounds as the experiment goes on.

Note: There are GIFs illustrating each of these strategies that do not show up in this version of the presentation. For a copy that includes them, please email contact@sigopt.com
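The constraint schedules described above — random, walking, and alternating — amount to different ways of generating a threshold (and a constrained metric) per iteration. A hedged sketch of that bookkeeping, with made-up bounds:

```python
import numpy as np

rng = np.random.default_rng(3)
lo, hi, n_iter = 0.2, 0.9, 12   # hypothetical constraint domain for a metric

# Strategy 1: randomly apply constraints each iteration.
random_schedule = rng.uniform(lo, hi, n_iter)

# Strategy 2: "walk" the threshold up and down the constraint domain.
half = np.linspace(lo, hi, n_iter // 2)
walk_schedule = np.concatenate([half, half[::-1]])

# Strategy 3: alternate which metric the constraint is imposed on.
alternating_metric = np.arange(n_iter) % 2   # metric index 0, 1, 0, 1, ...
```

Each iteration would then run a constrained scalar optimization with that iteration's threshold; user-supplied bounds would simply override the generated schedule from that point on.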
Awesome Applications of Bayesian Optimization
Who is using, and can use, BO?
● ML/DL hyperparameter tuning [Snoek et al, 2012; Feurer et al, 2015; Kandasamy et al, 2018]
● Engineering system design [Mockus, 1989; Jones et al, 1998; Forrester et al, 2008]
● Drug design [Negoescu et al, 2011; Frazier and Wang, 2016]
● Material design [Packwood, 2017; Haghanifar et al, 2019]
● Model calibration [Shoemaker et al, 2007; Shi et al, 2013; Letham et al, 2019]
● Reinforcement learning [Lizotte, 2008; Brochu et al, 2010; Martinez-Cantin et al, 2018]
There are so many others!
A Joint Collaboration with University of Pittsburgh
[Haghanifar et al, 2019]
Metrics
• Light transmission
• Clarity (low haze)
• Water resistance
Constraints updated on all
metrics during the search.
Note: There is a video that does not show up in this version of the presentation. For a copy that includes it, please email contact@sigopt.com
Future Work
How can we improve this process?
When black-box constraints exist, how can we encourage our search to respect them?
• Hallucinate bad function values at points which violate the constraints.
• Attenuate the expected improvement by the probability of failure [Gelbart, 2015].
• Model the constraints and average out the noisy behavior [Letham et al, 2019].
• Model the Lagrangian [Picheny et al, 2016].
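The idea of attenuating expected improvement by the probability of failure, in the spirit of [Gelbart, 2015], can be sketched as follows. This is a hedged illustration: the Gaussian posteriors are supplied as plain arrays, not produced by a full constrained-BO implementation.

```python
import numpy as np
from math import erf

def normal_cdf(z):
    """Standard normal CDF, vectorized over arrays."""
    return 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))

def constrained_ei(mu_f, sigma_f, best, mu_c, sigma_c):
    """EI for minimizing f, attenuated by P(c(x) <= 0) under Gaussian posteriors."""
    z = (best - mu_f) / sigma_f
    ei = (best - mu_f) * normal_cdf(z) + sigma_f * np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    p_feasible = normal_cdf(-mu_c / sigma_c)   # probability the constraint holds
    return ei * p_feasible

# Two equally promising points: one likely feasible, one likely infeasible.
mu_f = np.array([0.0, 0.0]); sigma_f = np.array([1.0, 1.0])
mu_c = np.array([-2.0, 2.0]); sigma_c = np.array([1.0, 1.0])
acq = constrained_ei(mu_f, sigma_f, best=0.5, mu_c=mu_c, sigma_c=sigma_c)
```

The likely-feasible point receives nearly the full EI value, while the likely-infeasible one is suppressed, steering the search toward the feasible region without hard-coding the constraint boundary.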
Question: Exactly how black-box/expensive are these constraints (or the objective)?
• We can adapt to expensive constraints but a cheap objective [Gramacy et al, 2016].
Question: Can we help focus on the important region using preferences?
• Joint work extending [Astudillo, 2017] with Raul and Peter.