Details
Original language | English |
---|---|
Title of host publication | Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track) |
Number of pages | 36 |
Publication status | E-pub ahead of print - 2021 |
Event | 35th Conference on Neural Information Processing Systems: Track on Datasets and Benchmarks, Virtual-only, 6 Dec 2021 → 14 Dec 2021 |
Abstract

To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. In recent years, the number of efficient algorithms and tools for HPO has grown substantially, yet the community still lacks realistic, diverse, computationally cheap, and standardized benchmarks; this is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows running this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/HPOBench.
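The abstract describes a common objective-function interface over containerized, multi-fidelity benchmarks. As a rough orientation, a minimal usage sketch in the spirit of the repository's README could look like the following; the module path, class name, task id, fidelity names, and result keys are assumptions made for illustration, so the README at the GitHub link above remains the authoritative reference.

```python
# Minimal usage sketch for HPOBench (https://github.com/automl/HPOBench).
# NOTE: the module path, class name, task id, fidelity names, and result keys
# below are illustrative assumptions, not a verified API reference.
from hpobench.container.benchmarks.ml.xgboost_benchmark import XGBoostBenchmark

# Containerized benchmark: the benchmark runs inside its own container,
# which is how the paper isolates and packages individual benchmarks.
benchmark = XGBoostBenchmark(task_id=167149)

# Sample a configuration from the benchmark's hyperparameter search space ...
config = benchmark.get_configuration_space(seed=1).sample_configuration()

# ... and evaluate it at a reduced fidelity (fewer trees, subsampled data),
# the multi-fidelity setting the benchmark suite targets.
result = benchmark.objective_function(
    configuration=config,
    fidelity={"n_estimators": 128, "dataset_fraction": 0.5},
    rng=1,
)
print(result["function_value"], result["cost"])
```

The containerized variant is shown here because the abstract emphasizes reproducibility through isolation; the repository also documents non-containerized benchmark classes for use with locally installed dependencies.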
Keywords
- cs.LG
Cite this
Eggensperger, K., Müller, P., Mallik, N., Feurer, M., Sass, R., Awad, N., Lindauer, M., & Hutter, F. (2021). HPOBench. In Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer-review
TY - GEN
T1 - HPOBench
T2 - 35th Conference on Neural Information Processing Systems
AU - Eggensperger, Katharina
AU - Müller, Philipp
AU - Mallik, Neeratyoy
AU - Feurer, Matthias
AU - Sass, René
AU - Awad, Noor
AU - Lindauer, Marius
AU - Hutter, Frank
PY - 2021
Y1 - 2021
AB - To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. In recent years, the number of efficient algorithms and tools for HPO has grown substantially, yet the community still lacks realistic, diverse, computationally cheap, and standardized benchmarks; this is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows running this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/HPOBench.
KW - cs.LG
M3 - Conference contribution
BT - Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track)
Y2 - 6 December 2021 through 14 December 2021
ER -