PI is back! Switching Acquisition Functions in Bayesian Optimization

Research output: Working paper/Preprint


Details

Original language: English
Publication status: E-pub ahead of print - 2 Nov 2022

Abstract

Bayesian Optimization (BO) is a powerful, sample-efficient technique for optimizing expensive-to-evaluate functions. Each BO component, such as the surrogate model, the acquisition function (AF), or the initial design, is subject to a wide range of design choices. Selecting the right components for a given optimization task is challenging and can have a significant impact on the quality of the obtained results. In this work, we initiate the analysis of which AF to favor for which optimization scenarios. To this end, we benchmark SMAC3 using Expected Improvement (EI) and Probability of Improvement (PI) as acquisition functions on the 24 BBOB functions of the COCO environment. We compare their results with those of schedules that switch between AFs. One schedule aims to exploit EI's explorative behavior in the early optimization steps and then switches to PI for better exploitation in the final steps. We also compare this to a random schedule and to round-robin selection of EI and PI. We observe that dynamic schedules often outperform any single static one. Our results suggest that a schedule allocating the first 25% of the optimization budget to EI and the last 75% to PI is a reliable default. However, we also observe considerable performance differences across the 24 functions, suggesting that a per-instance allocation, possibly learned on the fly, could offer significant improvements over state-of-the-art BO designs.
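The EI-then-PI schedule described in the abstract can be sketched in a few lines. This is a minimal illustration under a Gaussian surrogate posterior (posterior mean `mu`, standard deviation `sigma`, incumbent value `f_best`, minimization), not the SMAC3 implementation; the function names and the `switch_at` parameter are our own.

```python
import math


def _phi(z: float) -> float:
    """Standard normal probability density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)


def _Phi(z: float) -> float:
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def pi(mu: float, sigma: float, f_best: float) -> float:
    """Probability of Improvement over the incumbent (minimization)."""
    if sigma == 0.0:
        return 0.0  # no predictive uncertainty, no expected improvement signal
    return _Phi((f_best - mu) / sigma)


def ei(mu: float, sigma: float, f_best: float) -> float:
    """Expected Improvement over the incumbent (minimization)."""
    if sigma == 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    return (f_best - mu) * _Phi(z) + sigma * _phi(z)


def acquisition_for_step(step: int, budget: int, switch_at: float = 0.25):
    """Return EI for the first `switch_at` fraction of the budget, PI after."""
    return ei if step < switch_at * budget else pi
```

At each BO iteration one would call `acquisition_for_step(step, budget)` and maximize the returned function over candidate points; with the default `switch_at=0.25` this reproduces the 25% EI / 75% PI split the paper proposes as a default.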

Keywords

    cs.LG

Cite this

PI is back! Switching Acquisition Functions in Bayesian Optimization. / Benjamins, Carolin; Raponi, Elena; Jankovic, Anja et al.
2022.


Benjamins, C., Raponi, E., Jankovic, A., Blom, K. V. D., Santoni, M. L., Lindauer, M., & Doerr, C. (2022). PI is back! Switching Acquisition Functions in Bayesian Optimization. Advance online publication. https://arxiv.org/abs/2211.01455
Benjamins C, Raponi E, Jankovic A, Blom KVD, Santoni ML, Lindauer M et al. PI is back! Switching Acquisition Functions in Bayesian Optimization. 2022 Nov 2. Epub 2022 Nov 2.
@techreport{df0cfce1cd294f468b72382e15c988a3,
title = "PI is back! Switching Acquisition Functions in Bayesian Optimization",
keywords = "cs.LG",
author = "Carolin Benjamins and Elena Raponi and Anja Jankovic and Blom, {Koen van der} and Santoni, {Maria Laura} and Marius Lindauer and Carola Doerr",
note = "2022 NeurIPS Workshop on Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems",
year = "2022",
month = nov,
day = "2",
language = "English",
type = "WorkingPaper",

}

