Prior-guided Bayesian Optimization

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Artur Souza
  • Luigi Nardi
  • Leonardo B. Oliveira
  • Kunle Olukotun
  • Marius Lindauer
  • Frank Hutter

External organizations

  • Universidade Federal de Minas Gerais
  • Lund University
  • Stanford University
  • Albert-Ludwigs-Universität Freiburg
  • Robert Bosch GmbH

Details

Original language: English
Title of host publication: Machine Learning and Knowledge Discovery in Databases. Research Track
Subtitle: European Conference, ECML PKDD 2021
Publication status: Electronically published (e-pub) - 2021

Abstract

While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on commonly known bad regions of design choices, e.g., hyperparameters of a machine learning algorithm. To address this issue, we introduce Prior-guided Bayesian Optimization (PrBO). PrBO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO's standard priors over functions, which are much less intuitive for users. PrBO then combines these priors with BO's standard probabilistic model to yield a posterior. We show that PrBO is more sample efficient than state-of-the-art methods without user priors and 10,000× faster than random search on a common suite of benchmarks and a real-world hardware design application. We also show that PrBO converges faster even if the user priors are not entirely accurate and that it robustly recovers from misleading priors.
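The mechanism the abstract describes — a user-supplied prior over the input space steering a model-based acquisition function, with the prior's influence fading as observations accumulate — can be sketched in a toy form. Everything below is an illustrative assumption, not the paper's actual algorithm: the 1-D objective, the Gaussian user prior, the 1/t decay schedule, and the minimal GP surrogate are all made up for demonstration (see the arXiv preprint for the real method).

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def objective(x):
    # toy black-box function (assumption); true minimum at x = 0.3
    return (x - 0.3) ** 2

def user_prior(x):
    # hypothetical user belief: good inputs lie near x = 0.25
    return np.exp(-0.5 * ((x - 0.25) / 0.1) ** 2)

def gp_posterior(X, y, Xs, ls=0.2, jitter=1e-6):
    # minimal GP regression with an RBF kernel (unit signal variance)
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)
    K = k(X, X) + jitter * np.eye(len(X))
    Ks = k(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    # standard EI for minimization under a Gaussian predictive distribution
    z = (best - mu) / sd
    cdf = 0.5 * (1.0 + np.array([erf(v / sqrt(2)) for v in z]))
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best - mu) * cdf + sd * pdf

# three random initial evaluations
X = rng.uniform(0.0, 1.0, 3)
y = objective(X)

for t in range(1, 16):
    cand = rng.uniform(0.0, 1.0, 256)          # random candidate inputs
    mu, sd = gp_posterior(X, y, cand)
    acq = expected_improvement(mu, sd, y.min())
    # weight the acquisition by the user prior; the exponent 1/t makes
    # the prior dominate early and fade as observations accumulate
    acq *= user_prior(cand) ** (1.0 / t)
    x_next = cand[np.argmax(acq)]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))

x_best = X[np.argmin(y)]
```

Note how the prior here is deliberately slightly wrong (centered at 0.25 while the optimum is at 0.3): early samples cluster near the user's belief, but as the surrogate gathers evidence the decayed prior stops vetoing the true optimum, mirroring the robustness-to-inaccurate-priors claim in the abstract.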

Cite

Prior-guided Bayesian Optimization. / Souza, Artur; Nardi, Luigi; Oliveira, Leonardo B. et al.
Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021. 2021.


Souza, A, Nardi, L, Oliveira, LB, Olukotun, K, Lindauer, M & Hutter, F 2021, Prior-guided Bayesian Optimization. in Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021. <https://arxiv.org/pdf/2006.14608>
Souza, A., Nardi, L., Oliveira, L. B., Olukotun, K., Lindauer, M., & Hutter, F. (2021). Prior-guided Bayesian Optimization. In Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021. Advance online publication. https://arxiv.org/pdf/2006.14608
Souza A, Nardi L, Oliveira LB, Olukotun K, Lindauer M, Hutter F. Prior-guided Bayesian Optimization. In: Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021. 2021. Epub 2021.
Souza, Artur ; Nardi, Luigi ; Oliveira, Leonardo B. et al. / Prior-guided Bayesian Optimization. Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021. 2021.
Download (BibTeX)
@inproceedings{c7fc284895324b0ab9b3093c248b643b,
title = "Prior-guided Bayesian Optimization",
abstract = " While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on commonly known bad regions of design choices, e.g., hyperparameters of a machine learning algorithm. To address this issue, we introduce Prior-guided Bayesian Optimization (PrBO). PrBO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO's standard priors over functions which are much less intuitive for users. PrBO then combines these priors with BO's standard probabilistic model to yield a posterior. We show that PrBO is more sample efficient than state-of-the-art methods without user priors and 10,000\(\times\) faster than random search, on a common suite of benchmarks and a real-world hardware design application. We also show that PrBO converges faster even if the user priors are not entirely accurate and that it robustly recovers from misleading priors. ",
keywords = "cs.LG, stat.ML",
author = "Artur Souza and Luigi Nardi and Oliveira, {Leonardo B.} and Kunle Olukotun and Marius Lindauer and Frank Hutter",
year = "2021",
language = "English",
booktitle = "Machine Learning and Knowledge Discovery in Databases. Research Track",

}

Download (RIS)

TY - GEN

T1 - Prior-guided Bayesian Optimization

AU - Souza, Artur

AU - Nardi, Luigi

AU - Oliveira, Leonardo B.

AU - Olukotun, Kunle

AU - Lindauer, Marius

AU - Hutter, Frank

PY - 2021

Y1 - 2021

N2 - While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on commonly known bad regions of design choices, e.g., hyperparameters of a machine learning algorithm. To address this issue, we introduce Prior-guided Bayesian Optimization (PrBO). PrBO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO's standard priors over functions which are much less intuitive for users. PrBO then combines these priors with BO's standard probabilistic model to yield a posterior. We show that PrBO is more sample efficient than state-of-the-art methods without user priors and 10,000\(\times\) faster than random search, on a common suite of benchmarks and a real-world hardware design application. We also show that PrBO converges faster even if the user priors are not entirely accurate and that it robustly recovers from misleading priors.

AB - While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on commonly known bad regions of design choices, e.g., hyperparameters of a machine learning algorithm. To address this issue, we introduce Prior-guided Bayesian Optimization (PrBO). PrBO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO's standard priors over functions which are much less intuitive for users. PrBO then combines these priors with BO's standard probabilistic model to yield a posterior. We show that PrBO is more sample efficient than state-of-the-art methods without user priors and 10,000\(\times\) faster than random search, on a common suite of benchmarks and a real-world hardware design application. We also show that PrBO converges faster even if the user priors are not entirely accurate and that it robustly recovers from misleading priors.

KW - cs.LG

KW - stat.ML

M3 - Conference contribution

BT - Machine Learning and Knowledge Discovery in Databases. Research Track

ER -
