Details
Original language | English |
---|---|
Pages | 3742-3750 |
Number of pages | 9 |
Publication status | Published - Aug 2024 |
Externally published | Yes |
Event | Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24 - Jeju, Korea, Republic of. Duration: 3 Aug 2024 → 9 Aug 2024 |
Conference
Conference | Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24 |
---|---|
Abbreviated title | IJCAI-24 |
Country/Territory | Korea, Republic of |
City | Jeju |
Period | 3 Aug 2024 → 9 Aug 2024 |
Abstract
Hyperparameter optimization (HPO) is indispensable for achieving optimal performance in machine learning tasks. A popular class of methods in this regard is based on Successive Halving (SHA), which casts HPO into a pure-exploration multi-armed bandit problem under finite sampling budget constraints. This is accomplished by considering hyperparameter configurations as arms and the negative validation losses as rewards. While enjoying theoretical guarantees and working well in practice, SHA comes with several hyperparameters itself, one of which is the maximum budget that can be allocated to evaluate a single arm (hyperparameter configuration). Although there are already solutions to this meta hyperparameter optimization problem, such as the doubling trick or asynchronous extensions of SHA, these are either practically inefficient or lack theoretical guarantees. In this paper, we propose incremental SHA (iSHA), a synchronous extension of SHA that allows increasing the maximum budget a posteriori while still enjoying theoretical guarantees. Our empirical analysis of HPO problems corroborates our theoretical findings and shows that iSHA is more resource-efficient than existing SHA-based approaches.
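To make the SHA scheme the abstract refers to concrete, the following is a minimal sketch of plain Successive Halving, not the paper's iSHA itself. The `evaluate(config, budget)` callback and the `successive_halving` name are illustrative assumptions; `max_budget` is the meta-hyperparameter whose a posteriori increase the paper studies.

```python
import math

def successive_halving(configs, evaluate, max_budget, eta=2):
    """Pure-exploration Successive Halving (sketch, not the paper's iSHA).

    configs:    candidate hyperparameter configurations (the "arms")
    evaluate:   hypothetical callback returning the validation loss of a
                configuration trained/evaluated under the given budget
    max_budget: maximum budget spent on a single arm -- the very
                meta-hyperparameter the paper addresses
    """
    n_rounds = math.ceil(math.log(len(configs), eta))
    budget = max_budget / eta ** n_rounds  # smallest per-arm budget
    survivors = list(configs)
    while len(survivors) > 1:
        # Reward = negative validation loss, so keep the smallest losses.
        ranked = sorted(survivors, key=lambda c: evaluate(c, budget))
        survivors = ranked[: max(1, len(survivors) // eta)]
        budget = min(budget * eta, max_budget)  # grow the budget each round
    return survivors[0]
```

Once such a run has finished, a too-small `max_budget` can only be corrected by spending more budget; doing so without discarding the evaluations already paid for is exactly the problem iSHA addresses synchronously.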
Cite this
Brandt, J., Wever, M., Bengs, V., & Hüllermeier, E. (2024). Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO. 3742-3750. Paper presented at Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24, Jeju, Korea, Republic of. https://doi.org/10.24963/ijcai.2024/414
Research output: Contribution to conference › Paper › Research › peer review
TY - CONF
T1 - Best Arm Identification with Retroactively Increased Sampling Budget for More Resource-Efficient HPO
AU - Brandt, Jasmin
AU - Wever, Marcel
AU - Bengs, Viktor
AU - Hüllermeier, Eyke
PY - 2024/8
Y1 - 2024/8
N2 - Hyperparameter optimization (HPO) is indispensable for achieving optimal performance in machine learning tasks. A popular class of methods in this regard is based on Successive Halving (SHA), which casts HPO into a pure-exploration multi-armed bandit problem under finite sampling budget constraints. This is accomplished by considering hyperparameter configurations as arms and the negative validation losses as rewards. While enjoying theoretical guarantees and working well in practice, SHA comes with several hyperparameters itself, one of which is the maximum budget that can be allocated to evaluate a single arm (hyperparameter configuration). Although there are already solutions to this meta hyperparameter optimization problem, such as the doubling trick or asynchronous extensions of SHA, these are either practically inefficient or lack theoretical guarantees. In this paper, we propose incremental SHA (iSHA), a synchronous extension of SHA that allows increasing the maximum budget a posteriori while still enjoying theoretical guarantees. Our empirical analysis of HPO problems corroborates our theoretical findings and shows that iSHA is more resource-efficient than existing SHA-based approaches.
AB - Hyperparameter optimization (HPO) is indispensable for achieving optimal performance in machine learning tasks. A popular class of methods in this regard is based on Successive Halving (SHA), which casts HPO into a pure-exploration multi-armed bandit problem under finite sampling budget constraints. This is accomplished by considering hyperparameter configurations as arms and the negative validation losses as rewards. While enjoying theoretical guarantees and working well in practice, SHA comes with several hyperparameters itself, one of which is the maximum budget that can be allocated to evaluate a single arm (hyperparameter configuration). Although there are already solutions to this meta hyperparameter optimization problem, such as the doubling trick or asynchronous extensions of SHA, these are either practically inefficient or lack theoretical guarantees. In this paper, we propose incremental SHA (iSHA), a synchronous extension of SHA that allows increasing the maximum budget a posteriori while still enjoying theoretical guarantees. Our empirical analysis of HPO problems corroborates our theoretical findings and shows that iSHA is more resource-efficient than existing SHA-based approaches.
UR - http://www.scopus.com/inward/record.url?scp=85204300138&partnerID=8YFLogxK
U2 - 10.24963/ijcai.2024/414
DO - 10.24963/ijcai.2024/414
M3 - Paper
SP - 3742
EP - 3750
T2 - Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI-24
Y2 - 3 August 2024 through 9 August 2024
ER -
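For contrast with iSHA's a posteriori budget increase, here is a sketch of the doubling-trick baseline named in the abstract, reusing the hypothetical `successive_halving` above. The restart policy shown is an illustrative assumption, not the paper's procedure; the point is that every restart discards all previous evaluations, which is the resource waste iSHA is designed to avoid.

```python
def doubling_trick(configs, evaluate, initial_budget, n_restarts=3):
    """Doubling-trick baseline: rerun SHA from scratch with twice the
    maximum budget whenever the previous budget proved too small."""
    best, budget = None, initial_budget
    for _ in range(n_restarts):
        # Every restart pays the full cost again; nothing is reused.
        best = successive_halving(configs, evaluate, budget)
        budget *= 2
    return best
```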