Self-Adjusting Weighted Expected Improvement for Bayesian Optimization

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Carolin Benjamins
  • Elena Raponi
  • Anja Jankovic
  • Carola Doerr
  • Marius Lindauer

External Research Organisations

  • Technical University of Munich (TUM)
  • Computer Lab of Paris 6 (Lip6)
  • Centre national de la recherche scientifique (CNRS)
  • Sorbonne Université

Details

Original language: English
Title of host publication: AutoML Conference 2023
Publication status: Accepted/In press - 2023

Abstract

Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets.
The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF).
Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited.
In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions.
We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO.
On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure.
The suitability of our method also transfers to HPOBench.
With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.
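
For intuition, the sketch below gives a minimal NumPy/SciPy rendering of the Weighted Expected Improvement (WEI) acquisition that SAWEI adjusts, together with a simplified, hypothetical adjustment step driven by an upper-bound-regret (UBR) convergence signal. The function names, the plateau test, and the step size are illustrative assumptions for exposition, not the authors' implementation.

import numpy as np
from scipy.stats import norm


def weighted_ei(mu, sigma, f_min, alpha):
    """Weighted EI for minimization, given surrogate mean `mu` and std `sigma`.

    `alpha` weights the exploitation term and `1 - alpha` the exploration term;
    alpha = 0.5 recovers standard EI up to a constant factor.
    """
    sigma = np.maximum(sigma, 1e-12)      # guard against zero predictive variance
    z = (f_min - mu) / sigma
    exploit = (f_min - mu) * norm.cdf(z)  # expected improvement of the predicted mean
    explore = sigma * norm.pdf(z)         # uncertainty-driven term
    return alpha * exploit + (1.0 - alpha) * explore


def adjust_alpha(alpha, ubr_history, direction=1.0, step=0.1, window=5, tol=1e-2):
    """Hypothetical stand-in for SAWEI's self-adjustment: once the upper bound
    regret (UBR) has roughly plateaued over the last `window` iterations, switch
    the search behavior by moving alpha in the opposite direction. The rule in
    the paper derives the direction from the recent search behavior instead."""
    if len(ubr_history) >= window:
        old, new = ubr_history[-window], ubr_history[-1]
        if abs(old - new) <= tol * max(abs(old), 1e-12):  # convergence detected
            direction = -direction
            alpha = float(np.clip(alpha + direction * step, 0.0, 1.0))
    return alpha, direction

In a BO loop, alpha would be updated after each function evaluation and the updated value passed to the acquisition maximizer in place of a fixed EI weighting.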

Keywords

    Bayesian Optimization, Acquisition Function, Dynamic Algorithm Configuration, Weighted Expected Improvement, Upper Bound Regret

Cite this

Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. / Benjamins, Carolin; Raponi, Elena; Jankovic, Anja et al.
AutoML Conference 2023. 2023.


Benjamins, C, Raponi, E, Jankovic, A, Doerr, C & Lindauer, M 2023, Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. in AutoML Conference 2023.
Benjamins, C., Raponi, E., Jankovic, A., Doerr, C., & Lindauer, M. (Accepted/in press). Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. In AutoML Conference 2023
Benjamins C, Raponi E, Jankovic A, Doerr C, Lindauer M. Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. In AutoML Conference 2023. 2023
Benjamins, Carolin ; Raponi, Elena ; Jankovic, Anja et al. / Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. AutoML Conference 2023. 2023.
BibTeX
@inproceedings{7074dc2996724ffaa54e41537c4d2687,
title = "Self-Adjusting Weighted Expected Improvement for Bayesian Optimization",
abstract = "Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.",
keywords = "Bayesian Optimization, Acquisition Function, Dynamic Algorithm Configuration, Weighted Expected Improvement, Upper Bound Regret",
author = "Carolin Benjamins and Elena Raponi and Anja Jankovic and Carola Doerr and Marius Lindauer",
year = "2023",
language = "English",
booktitle = "AutoML Conference 2023",

}

RIS

TY - GEN

T1 - Self-Adjusting Weighted Expected Improvement for Bayesian Optimization

AU - Benjamins, Carolin

AU - Raponi, Elena

AU - Jankovic, Anja

AU - Doerr, Carola

AU - Lindauer, Marius

PY - 2023

Y1 - 2023

N2 - Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.

AB - Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.

KW - Bayesian Optimization

KW - Acquisition Function

KW - Dynamic Algorithm Configuration

KW - Weighted Expected Improvement

KW - Upper Bound Regret

M3 - Conference contribution

BT - AutoML Conference 2023

ER -
