Bayesian Optimization with a Prior for the Optimum

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Artur Souza
  • Luigi Nardi
  • Leonardo B. Oliveira
  • Kunle Olukotun
  • Marius Lindauer
  • Frank Hutter

External organizations

  • Universidade Federal de Minas Gerais
  • Lund University
  • Stanford University
  • Albert-Ludwigs-Universität Freiburg
  • Bosch Center for Artificial Intelligence (BCAI)

Details

Original language: English
Title of host publication: Machine Learning and Knowledge Discovery in Databases. Research Track
Subtitle: European Conference, ECML PKDD 2021, Proceedings
Editors: Nuria Oliver, Fernando Pérez-Cruz, Stefan Kramer, Jesse Read, Jose A. Lozano
Place of publication: Cham
Publisher: Springer Nature Switzerland AG
Pages: 265-296
Number of pages: 32
Volume: 3
ISBN (electronic): 978-3-030-86523-8
ISBN (print): 9783030865221
Publication status: Published - 2021
Event: European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021 - Bilbao, Spain
Duration: 13 Sep 2021 to 17 Sep 2021

Publication series

Name: Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)
Volume: 12977
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO’s standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO’s standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67 × faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.
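The key idea in the abstract, combining a user prior over the optimum's location with BO's probabilistic model into a pseudo-posterior that selects the next point, can be illustrated with a minimal 1-D sketch. This is an illustrative toy, not the paper's implementation: the Gaussian prior, the distance-based stand-in for the model, and the decay constant `beta` are all assumptions chosen for demonstration.

```python
import numpy as np

def user_prior(x, mean=0.3, std=0.1):
    """User's (here: misleading) belief about where the optimum lies,
    as an unnormalized Gaussian density over the input space."""
    return np.exp(-0.5 * ((x - mean) / std) ** 2)

def model_good_probability(x, observed_x, observed_y, gamma=0.3):
    """Crude stand-in for a probabilistic model's estimate that x is a
    'good' configuration: proximity to the best points observed so far."""
    n_good = max(1, int(gamma * len(observed_y)))
    good_x = observed_x[np.argsort(observed_y)[:n_good]]  # minimization
    d = np.min(np.abs(x[:, None] - good_x[None, :]), axis=1)
    return np.exp(-d / 0.1)

def pseudo_posterior(x, observed_x, observed_y, t, beta=10.0):
    """Prior times model, with the model's exponent growing with the
    iteration t, so the data eventually overrides the prior."""
    model = model_good_probability(x, observed_x, observed_y)
    return user_prior(x) * model ** (t / beta)

# Toy objective with its optimum at x = 0.7, far from the prior's mean.
f = lambda x: (x - 0.7) ** 2
obs_x = np.array([0.1, 0.25, 0.5, 0.64, 0.9])  # fixed initial design
obs_y = f(obs_x)

grid = np.linspace(0.0, 1.0, 1001)
for t in range(1, 30):
    # Select the next evaluation by maximizing the pseudo-posterior.
    x_next = grid[np.argmax(pseudo_posterior(grid, obs_x, obs_y, t))]
    obs_x = np.append(obs_x, x_next)
    obs_y = np.append(obs_y, f(x_next))

best = obs_x[np.argmin(obs_y)]
```

Early on the (misleading) prior pulls evaluations toward 0.3; as `t / beta` grows, the model term dominates and the search recovers toward the true optimum near 0.7, mirroring the robustness claim in the abstract at a cartoon level.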

ASJC Scopus subject areas

Cite

Bayesian Optimization with a Prior for the Optimum. / Souza, Artur; Nardi, Luigi; Oliveira, Leonardo B. et al.
Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings. Ed. / Nuria Oliver; Fernando Pérez-Cruz; Stefan Kramer; Jesse Read; Jose A. Lozano. Vol. 3 Cham: Springer Nature Switzerland AG, 2021. pp. 265-296 (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science); Vol. 12977).


Souza, A, Nardi, L, Oliveira, LB, Olukotun, K, Lindauer, M & Hutter, F 2021, Bayesian Optimization with a Prior for the Optimum. in N Oliver, F Pérez-Cruz, S Kramer, J Read & JA Lozano (eds), Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings. vol. 3, Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science), vol. 12977, Springer Nature Switzerland AG, Cham, pp. 265-296, European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021, Bilbao, Spain, 13 Sep 2021. https://doi.org/10.1007/978-3-030-86523-8_17
Souza, A., Nardi, L., Oliveira, L. B., Olukotun, K., Lindauer, M., & Hutter, F. (2021). Bayesian Optimization with a Prior for the Optimum. In N. Oliver, F. Pérez-Cruz, S. Kramer, J. Read, & J. A. Lozano (Eds.), Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings (Vol. 3, pp. 265-296). (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science); Vol. 12977). Springer Nature Switzerland AG. https://doi.org/10.1007/978-3-030-86523-8_17
Souza A, Nardi L, Oliveira LB, Olukotun K, Lindauer M, Hutter F. Bayesian Optimization with a Prior for the Optimum. In Oliver N, Pérez-Cruz F, Kramer S, Read J, Lozano JA, editors, Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings. Vol. 3. Cham: Springer Nature Switzerland AG. 2021. p. 265-296. (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)). Epub 2021 Sep 11. doi: 10.1007/978-3-030-86523-8_17
Souza, Artur ; Nardi, Luigi ; Oliveira, Leonardo B. et al. / Bayesian Optimization with a Prior for the Optimum. Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Proceedings. Ed. / Nuria Oliver ; Fernando Pérez-Cruz ; Stefan Kramer ; Jesse Read ; Jose A. Lozano. Vol. 3 Cham : Springer Nature Switzerland AG, 2021. pp. 265-296 (Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)).
Download (BibTeX)
@inproceedings{ad4aa5a5fab14cbda53f5f52c69f11ed,
title = "Bayesian Optimization with a Prior for the Optimum",
abstract = "While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO{\textquoteright}s standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO{\textquoteright}s standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67 × faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.",
author = "Artur Souza and Luigi Nardi and Oliveira, {Leonardo B.} and Kunle Olukotun and Marius Lindauer and Frank Hutter",
note = "Funding Information: and Kunle Olukotun were supported in part by affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware. Luigi Nardi was also partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Artur Souza and Leonardo B. Oliveira were supported by CAPES, CNPq, and FAPEMIG. Frank Hutter acknowledges support by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme through grant no. 716721. The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC partially funded by the Swedish Research Council through grant agreement no. 2018-05973.; European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021 ; Conference date: 13-09-2021 Through 17-09-2021",
year = "2021",
doi = "10.1007/978-3-030-86523-8_17",
language = "English",
isbn = "9783030865221",
volume = "3",
series = "Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)",
publisher = "Springer Nature Switzerland AG",
pages = "265--296",
editor = "Nuria Oliver and Fernando P{\'e}rez-Cruz and Stefan Kramer and Jesse Read and Lozano, {Jose A.}",
booktitle = "Machine Learning and Knowledge Discovery in Databases. Research Track",
address = "Switzerland",

}

Download (RIS)

TY - GEN

T1 - Bayesian Optimization with a Prior for the Optimum

AU - Souza, Artur

AU - Nardi, Luigi

AU - Oliveira, Leonardo B.

AU - Olukotun, Kunle

AU - Lindauer, Marius

AU - Hutter, Frank

N1 - Funding Information: and Kunle Olukotun were supported in part by affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Intel, Microsoft, NEC, SAP, Teradata, and VMware. Luigi Nardi was also partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Artur Souza and Leonardo B. Oliveira were supported by CAPES, CNPq, and FAPEMIG. Frank Hutter acknowledges support by the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme through grant no. 716721. The computations were also enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at LUNARC partially funded by the Swedish Research Council through grant agreement no. 2018-05973.

PY - 2021

Y1 - 2021

N2 - While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO’s standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO’s standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67 × faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.

AB - While Bayesian Optimization (BO) is a very popular method for optimizing expensive black-box functions, it fails to leverage the experience of domain experts. This causes BO to waste function evaluations on bad design choices (e.g., machine learning hyperparameters) that the expert already knows to work poorly. To address this issue, we introduce Bayesian Optimization with a Prior for the Optimum (BOPrO). BOPrO allows users to inject their knowledge into the optimization process in the form of priors about which parts of the input space will yield the best performance, rather than BO’s standard priors over functions, which are much less intuitive for users. BOPrO then combines these priors with BO’s standard probabilistic model to form a pseudo-posterior used to select which points to evaluate next. We show that BOPrO is around 6.67 × faster than state-of-the-art methods on a common suite of benchmarks, and achieves a new state-of-the-art performance on a real-world hardware design application. We also show that BOPrO converges faster even if the priors for the optimum are not entirely accurate and that it robustly recovers from misleading priors.

UR - http://www.scopus.com/inward/record.url?scp=85115712403&partnerID=8YFLogxK

U2 - 10.1007/978-3-030-86523-8_17

DO - 10.1007/978-3-030-86523-8_17

M3 - Conference contribution

AN - SCOPUS:85115712403

SN - 9783030865221

VL - 3

T3 - Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science)

SP - 265

EP - 296

BT - Machine Learning and Knowledge Discovery in Databases. Research Track

A2 - Oliver, Nuria

A2 - Pérez-Cruz, Fernando

A2 - Kramer, Stefan

A2 - Read, Jesse

A2 - Lozano, Jose A.

PB - Springer Nature Switzerland AG

CY - Cham

T2 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, ECML PKDD 2021

Y2 - 13 September 2021 through 17 September 2021

ER -
