Bayesian Optimization with a Prior for the Optimum

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Artur Souza
  • Luigi Nardi
  • Leonardo B. Oliveira
  • Kunle Olukotun
  • Marius Lindauer
  • Frank Hutter

External Research Organisations

  • Universidade Federal de Minas Gerais
  • Lund University
  • Stanford University
  • University of Freiburg
  • Bosch Center for Artificial Intelligence (BCAI)

Details

Original language: English
Title of host publication: Machine Learning and Knowledge Discovery in Databases. Research Track
Subtitle of host publication: European Conference, ECML PKDD 2021, Proceedings
Editors: Nuria Oliver, Fernando Pérez-Cruz, Stefan Kramer, Jesse Read, Jose A. Lozano
Place of publication: Cham
Publisher: Springer Nature Switzerland AG
Pages: 265-296
Number of pages: 32
Volume: 3
ISBN (electronic): 978-3-030-86523-8
ISBN (print): 978-3-030-86522-1
Publication status: Published - 2021
Event: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021 - Bilbao, Spain
Duration: 13 Sept 2021 - 17 Sept 2021

Publication series

Name: Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)
Volume: 12977
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO’s standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO’s standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67 × faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.
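The pseudo-posterior idea in the abstract can be illustrated with a small sketch: a user-supplied prior over the input space is multiplied with TPE-style "good"/"bad" model densities, with the model's weight growing as evaluations accumulate, and the next point maximizes the resulting ratio. This is an illustrative toy, not the paper's implementation; the function names and the `beta` decay parameter are hypothetical, and it assumes the prior is expressed as a probability in (0, 1) that a point is good.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """1-D Gaussian density, a stand-in for both the user prior
    and the probabilistic model's densities in this toy."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def select_next_point(candidates, prior, good_density, bad_density, t, beta=10.0):
    """Pick the candidate maximizing a TPE-style pseudo-posterior ratio.

    prior(x) is the user's probability that x is good; the model densities
    are exponentiated by t / beta, so the data gradually overrides the
    prior as the number of evaluations t grows.
    """
    def score(x):
        good = prior(x) * good_density(x) ** (t / beta)        # pseudo-posterior: good
        bad = (1.0 - prior(x)) * bad_density(x) ** (t / beta)  # pseudo-posterior: bad
        return good / (bad + 1e-12)                            # acquisition ~ good/bad ratio
    return max(candidates, key=score)

# Toy usage: both the prior and the model favor the region around x = 2,
# so the selected point should land near 2.
prior = lambda x: gaussian_pdf(x, 2.0, 1.0) / gaussian_pdf(2.0, 2.0, 1.0)  # scaled into (0, 1]
good = lambda x: gaussian_pdf(x, 1.5, 1.0)
bad = lambda x: gaussian_pdf(x, 4.0, 1.0)
candidates = [i * 0.1 for i in range(51)]  # grid over [0, 5]
next_x = select_next_point(candidates, prior, good, bad, t=5)
```

Because the model densities are raised to `t / beta`, early iterations lean on the user's prior while later iterations are dominated by the observed data, which matches the robustness behavior claimed in the abstract.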

Cite this

Bayesian Optimization with a Prior for the Optimum. / Souza, Artur; Nardi, Luigi; Oliveira, Leonardo B. et al.
Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings. ed. / Nuria Oliver; Fernando Pérez-Cruz; Stefan Kramer; Jesse Read; Jose A. Lozano. Vol. 3 Cham: Springer Nature Switzerland AG, 2021. p. 265-296 (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science); Vol. 12977).

Souza, A, Nardi, L, Oliveira, LB, Olukotun, K, Lindauer, M & Hutter, F 2021, Bayesian Optimization with a Prior for the Optimum. in N Oliver, F Pérez-Cruz, S Kramer, J Read & JA Lozano (eds), Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings. vol. 3, Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science), vol. 12977, Springer Nature Switzerland AG, Cham, pp. 265-296, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021, Bilbao, Spain, 13 Sept 2021. https://doi.org/10.1007/978-3-030-86523-8_17
Souza, A., Nardi, L., Oliveira, L. B., Olukotun, K., Lindauer, M., & Hutter, F. (2021). Bayesian Optimization with a Prior for the Optimum. In N. Oliver, F. Pérez-Cruz, S. Kramer, J. Read, & J. A. Lozano (Eds.), Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings (Vol. 3, pp. 265-296). (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science); Vol. 12977). Springer Nature Switzerland AG. https://doi.org/10.1007/978-3-030-86523-8_17
Souza A, Nardi L, Oliveira LB, Olukotun K, Lindauer M, Hutter F. Bayesian Optimization with a Prior for the Optimum. In Oliver N, Pérez-Cruz F, Kramer S, Read J, Lozano JA, editors, Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings. Vol. 3. Cham: Springer Nature Switzerland AG. 2021. p. 265-296. (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)). Epub 2021 Sept 11. doi: 10.1007/978-3-030-86523-8_17
Souza, Artur ; Nardi, Luigi ; Oliveira, Leonardo B. et al. / Bayesian Optimization with a Prior for the Optimum. Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings. editor / Nuria Oliver ; Fernando Pérez-Cruz ; Stefan Kramer ; Jesse Read ; Jose A. Lozano. Vol. 3 Cham : Springer Nature Switzerland AG, 2021. pp. 265-296 (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)).
BibTeX
@inproceedings{ad4aa5a5fab14cbda53f5f52c69f11ed,
title = "Bayesian Optimization with a Prior for the Optimum",
abstract = "While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO{\textquoteright}s standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO{\textquoteright}s standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67 × faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.",
author = "Artur Souza and Luigi Nardi and Oliveira, {Leonardo B.} and Kunle Olukotun and Marius Lindauer and Frank Hutter",
note = "Funding Information: and Kunle Olukotun were supported in part by affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware. Luigi Nardi was also partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Artur Souza and Leonardo B. Oliveira were supported by CAPES, CNPq, and FAPEMIG. Frank Hutter acknowledges support by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme through grant no. 716721. The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC partially funded by the Swedish Research Council through grant agreement no. 2018-05973.; European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021 ; Conference date: 13-09-2021 Through 17-09-2021",
year = "2021",
doi = "10.1007/978-3-030-86523-8_17",
language = "English",
isbn = "9783030865221",
volume = "3",
series = "Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)",
publisher = "Springer Nature Switzerland AG",
pages = "265--296",
editor = "Nuria Oliver and Fernando P{\'e}rez-Cruz and Stefan Kramer and Jesse Read and Lozano, {Jose A.}",
booktitle = "Machine Learning and Knowledge Discovery in Databases. Research Track",
address = "Switzerland",
}

RIS

TY - GEN
T1 - Bayesian Optimization with a Prior for the Optimum
AU - Souza, Artur
AU - Nardi, Luigi
AU - Oliveira, Leonardo B.
AU - Olukotun, Kunle
AU - Lindauer, Marius
AU - Hutter, Frank
N1 - Funding Information: and Kunle Olukotun were supported in part by affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware. Luigi Nardi was also partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Artur Souza and Leonardo B. Oliveira were supported by CAPES, CNPq, and FAPEMIG. Frank Hutter acknowledges support by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme through grant no. 716721. The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
PY - 2021
Y1 - 2021
N2 - While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO’s standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO’s standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67 × faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.
AB - While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO’s standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO’s standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67 × faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.
UR - http://www.scopus.com/inward/record.url?scp=85115712403&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-86523-8_17
DO - 10.1007/978-3-030-86523-8_17
M3 - Conference contribution
AN - SCOPUS:85115712403
SN - 9783030865221
VL - 3
T3 - Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)
SP - 265
EP - 296
BT - Machine Learning and Knowledge Discovery in Databases. Research Track
A2 - Oliver, Nuria
A2 - Pérez-Cruz, Fernando
A2 - Kramer, Stefan
A2 - Read, Jesse
A2 - Lozano, Jose A.
PB - Springer Nature Switzerland AG
CY - Cham
T2 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021
Y2 - 13 September 2021 through 17 September 2021
ER -
