Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO

Publication: Conference contribution › Paper › Research › Peer-reviewed

Authors

Jasmin Brandt, Marcel Wever, Viktor Bengs, Eyke Hüllermeier

External organisations

  • Universität Paderborn
  • Ludwig-Maximilians-Universität München (LMU)
  • Munich Center for Machine Learning (MCML)

Details

Original language: English
Pages: 3742-3750
Publication status: Published - Aug. 2024
Published externally: Yes
Event: Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24 - Jeju, South Korea
Duration: 3 Aug. 2024 - 9 Aug. 2024

Conference

Conference: Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24
Abbreviated title: IJCAI-24
Country/Territory: South Korea
City: Jeju
Period: 3 Aug. 2024 - 9 Aug. 2024

Abstract

Hyperparameter optimization (HPO) is indispensable for achieving optimal performance in machine learning tasks. A popular class of methods in this regard is based on Successive Halving (SHA), which casts HPO into a pure-exploration multi-armed bandit problem under finite sampling budget constraints, treating hyperparameter configurations as arms and negative validation losses as rewards. While SHA enjoys theoretical guarantees and works well in practice, it comes with several hyperparameters of its own, one of which is the maximum budget that can be allocated to evaluating a single arm (hyperparameter configuration). Although solutions to this meta-hyperparameter optimization problem already exist, such as the doubling trick or asynchronous extensions of SHA, they are either practically inefficient or lack theoretical guarantees. In this paper, we propose incremental SHA (iSHA), a synchronous extension of SHA that allows the maximum budget to be increased a posteriori while still enjoying theoretical guarantees. Our empirical analysis of HPO problems corroborates our theoretical findings and shows that iSHA is more resource-efficient than existing SHA-based approaches.
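To make the abstract's bandit framing concrete, the following is a minimal sketch of the plain Successive Halving loop it builds on, assuming a user-supplied evaluate(config, budget) callback that returns a validation loss. All names and defaults are illustrative assumptions, not the authors' implementation, and the iSHA extension itself is not shown.

def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Plain synchronous Successive Halving (illustrative sketch only).

    configs    : candidate hyperparameter configurations (the "arms")
    evaluate   : callback evaluate(config, budget) -> validation loss
    min_budget : per-arm budget in the first round (e.g. training epochs)
    eta        : elimination rate; roughly 1/eta of the arms survive a round
    """
    survivors = list(configs)
    budget = min_budget
    while len(survivors) > 1:
        # Reward is the negative validation loss, so the best arms are
        # the ones with the smallest losses at the current budget.
        ranked = sorted(survivors, key=lambda cfg: evaluate(cfg, budget))
        survivors = ranked[: max(1, len(survivors) // eta)]
        budget *= eta  # survivors are re-evaluated with eta times more budget
    return survivors[0]

With n initial arms the loop runs roughly log_eta(n) halving rounds, so plain SHA's maximum per-arm budget is fixed the moment it starts. The doubling trick escapes that cap only by restarting from scratch, which is the practical inefficiency the abstract mentions; iSHA instead extends a synchronous SHA run so that the cap can be raised a posteriori.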

Cite

Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO. / Brandt, Jasmin; Wever, Marcel; Bengs, Viktor et al.
2024. pp. 3742-3750. Paper presented at Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, Jeju, South Korea.


Brandt, J, Wever, M, Bengs, V & Hüllermeier, E 2024, 'Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO', Paper presented at Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, Jeju, South Korea, 3 Aug. 2024 - 9 Aug. 2024, pp. 3742-3750. https://doi.org/10.24963/ijcai.2024/414
Brandt, J., Wever, M., Bengs, V., & Hüllermeier, E. (2024). Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO. 3742-3750. Paper presented at Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, Jeju, South Korea. https://doi.org/10.24963/ijcai.2024/414
Brandt J, Wever M, Bengs V, Hüllermeier E. Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO. 2024. Paper presented at Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, Jeju, South Korea. doi: 10.24963/ijcai.2024/414
Brandt, Jasmin ; Wever, Marcel ; Bengs, Viktor et al. / Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO. Paper presented at Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, Jeju, South Korea.
BibTeX
@conference{eddab4ba39df452b979bdef017bacd0d,
title = "Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO",
abstract = "Hyperparameter optimization (HPO) is indispensable for achieving optimal performance in machine learning tasks. A popular class of methods in this regard is based on Successive Halving (SHA), which casts HPO into a pure-exploration multi-armed bandit problem under finite sampling budget constraints, treating hyperparameter configurations as arms and negative validation losses as rewards. While SHA enjoys theoretical guarantees and works well in practice, it comes with several hyperparameters of its own, one of which is the maximum budget that can be allocated to evaluating a single arm (hyperparameter configuration). Although solutions to this meta-hyperparameter optimization problem already exist, such as the doubling trick or asynchronous extensions of SHA, they are either practically inefficient or lack theoretical guarantees. In this paper, we propose incremental SHA (iSHA), a synchronous extension of SHA that allows the maximum budget to be increased a posteriori while still enjoying theoretical guarantees. Our empirical analysis of HPO problems corroborates our theoretical findings and shows that iSHA is more resource-efficient than existing SHA-based approaches.",
author = "Jasmin Brandt and Marcel Wever and Viktor Bengs and Eyke H{\"u}llermeier",
year = "2024",
month = aug,
doi = "10.24963/ijcai.2024/414",
language = "English",
pages = "3742--3750",
note = "Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24 ; Conference date: 03-08-2024 Through 09-08-2024",

}

RIS

TY - CONF

T1 - Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO

AU - Brandt, Jasmin

AU - Wever, Marcel

AU - Bengs, Viktor

AU - Hüllermeier, Eyke

PY - 2024/8

Y1 - 2024/8

N2 - Hyperparameter optimization (HPO) is indispensable for achieving optimal performance in machine learning tasks. A popular class of methods in this regard is based on Successive Halving (SHA), which casts HPO into a pure-exploration multi-armed bandit problem under finite sampling budget constraints, treating hyperparameter configurations as arms and negative validation losses as rewards. While SHA enjoys theoretical guarantees and works well in practice, it comes with several hyperparameters of its own, one of which is the maximum budget that can be allocated to evaluating a single arm (hyperparameter configuration). Although solutions to this meta-hyperparameter optimization problem already exist, such as the doubling trick or asynchronous extensions of SHA, they are either practically inefficient or lack theoretical guarantees. In this paper, we propose incremental SHA (iSHA), a synchronous extension of SHA that allows the maximum budget to be increased a posteriori while still enjoying theoretical guarantees. Our empirical analysis of HPO problems corroborates our theoretical findings and shows that iSHA is more resource-efficient than existing SHA-based approaches.

AB - Hyperparameter optimization (HPO) is indispensable for achieving optimal performance in machine learning tasks. A popular class of methods in this regard is based on Successive Halving (SHA), which casts HPO into a pure-exploration multi-armed bandit problem under finite sampling budget constraints, treating hyperparameter configurations as arms and negative validation losses as rewards. While SHA enjoys theoretical guarantees and works well in practice, it comes with several hyperparameters of its own, one of which is the maximum budget that can be allocated to evaluating a single arm (hyperparameter configuration). Although solutions to this meta-hyperparameter optimization problem already exist, such as the doubling trick or asynchronous extensions of SHA, they are either practically inefficient or lack theoretical guarantees. In this paper, we propose incremental SHA (iSHA), a synchronous extension of SHA that allows the maximum budget to be increased a posteriori while still enjoying theoretical guarantees. Our empirical analysis of HPO problems corroborates our theoretical findings and shows that iSHA is more resource-efficient than existing SHA-based approaches.

U2 - 10.24963/ijcai.2024/414

DO - 10.24963/ijcai.2024/414

M3 - Paper

SP - 3742

EP - 3750

T2 - Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24

Y2 - 3 August 2024 through 9 August 2024

ER -
