Details
Original language | English
---|---
Title of host publication | DSO Workshop at IJCAI
Number of pages | 8
Publication status | E-pub ahead of print - 2019
Externally published | Yes
Abstract

Bayesian Optimization (BO) is a common approach for hyperparameter optimization (HPO) in automated machine learning. Although it is well-accepted that HPO is crucial to obtain well-performing machine learning models, tuning BO's own hyperparameters is often neglected. In this paper, we empirically study the impact of optimizing BO's own hyperparameters and the transferability of the found settings using a wide range of benchmarks, including artificial functions, HPO and HPO combined with neural architecture search. In particular, we show (i) that tuning can improve the any-time performance of different BO approaches, that optimized BO settings also perform well (ii) on similar problems and (iii) partially even on problems from other problem families, and (iv) which BO hyperparameters are most important.
Keywords
- cs.LG
- cs.AI
- stat.ML
Cite this
Lindauer, M., Feurer, M., Eggensperger, K., Biedenkapp, A., & Hutter, F. (2019). Towards Assessing the Impact of Bayesian Optimization's Own Hyperparameters. In DSO Workshop at IJCAI.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer-review
TY - GEN
T1 - Towards Assessing the Impact of Bayesian Optimization’s Own Hyperparameters
AU - Lindauer, Marius
AU - Feurer, Matthias
AU - Eggensperger, Katharina
AU - Biedenkapp, André
AU - Hutter, Frank
N1 - Accepted at DSO workshop (as part of IJCAI'19)
PY - 2019
Y1 - 2019
N2 - Bayesian Optimization (BO) is a common approach for hyperparameter optimization (HPO) in automated machine learning. Although it is well-accepted that HPO is crucial to obtain well-performing machine learning models, tuning BO's own hyperparameters is often neglected. In this paper, we empirically study the impact of optimizing BO's own hyperparameters and the transferability of the found settings using a wide range of benchmarks, including artificial functions, HPO and HPO combined with neural architecture search. In particular, we show (i) that tuning can improve the any-time performance of different BO approaches, that optimized BO settings also perform well (ii) on similar problems and (iii) partially even on problems from other problem families, and (iv) which BO hyperparameters are most important.
AB - Bayesian Optimization (BO) is a common approach for hyperparameter optimization (HPO) in automated machine learning. Although it is well-accepted that HPO is crucial to obtain well-performing machine learning models, tuning BO's own hyperparameters is often neglected. In this paper, we empirically study the impact of optimizing BO's own hyperparameters and the transferability of the found settings using a wide range of benchmarks, including artificial functions, HPO and HPO combined with neural architecture search. In particular, we show (i) that tuning can improve the any-time performance of different BO approaches, that optimized BO settings also perform well (ii) on similar problems and (iii) partially even on problems from other problem families, and (iv) which BO hyperparameters are most important.
KW - cs.LG
KW - cs.AI
KW - stat.ML
M3 - Conference contribution
BT - DSO Workshop at IJCAI
ER -