Details
| Original language | English |
| --- | --- |
| Title of host publication | AutoML Conference 2023 |
| Publication status | Accepted/In press - 2023 |
Abstract
Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets.
The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF).
Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited.
In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions.
We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO.
On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure.
The suitability of our method also transfers to HPOBench.
With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.
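For context on the acquisition function that SAWEI adapts, the sketch below shows the standard Weighted Expected Improvement formulation for minimization, in which a weight w in [0, 1] shifts emphasis between the mean-improvement (exploitation) term and the uncertainty (exploration) term. The function name and interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def weighted_expected_improvement(mu, sigma, f_best, w=0.5):
    """Weighted Expected Improvement (WEI) for a minimization problem.

    mu, sigma : posterior mean and standard deviation of the surrogate
                at the candidate points (arrays of equal shape).
    f_best    : best (lowest) objective value observed so far.
    w         : weight in [0, 1]; larger w emphasizes exploitation
                (expected reduction of the mean), smaller w emphasizes
                exploration (predictive uncertainty). w = 0.5 recovers
                standard EI up to a constant factor.
    """
    sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
    z = (f_best - mu) / sigma                 # standardized improvement
    exploit = (f_best - mu) * norm.cdf(z)     # mean-improvement term
    explore = sigma * norm.pdf(z)             # uncertainty term
    return w * exploit + (1.0 - w) * explore
```

In SAWEI, the weight is not fixed but self-adjusts during the optimization run based on a convergence criterion (the Upper Bound Regret listed in the keywords); the exact adjustment schedule is described in the paper and not reproduced here.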
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Benjamins, C., Raponi, E., Jankovic, A., Doerr, C., & Lindauer, M. (2023). Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. In AutoML Conference 2023.
Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer review
TY - GEN
T1 - Self-Adjusting Weighted Expected Improvement for Bayesian Optimization
AU - Benjamins, Carolin
AU - Raponi, Elena
AU - Jankovic, Anja
AU - Doerr, Carola
AU - Lindauer, Marius
PY - 2023
Y1 - 2023
N2 - Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.
AB - Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.
KW - Bayesian Optimization
KW - Acquisition Function
KW - Dynamic Algorithm Configuration
KW - Weighted Expected Improvement
KW - Upper Bound Regret
M3 - Conference contribution
BT - AutoML Conference 2023
ER -