Details
Original language | English |
---|---|
Title of host publication | Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track) |
Number of pages | 36 |
Publication status | Electronically published (E-pub) - 2021 |
Event | 35th Conference on Neural Information Processing Systems: Track on Datasets and Benchmarks - Virtual-only; Duration: 6 Dec 2021 → 14 Dec 2021 |
Abstract

To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. Over the last few years, the number of efficient algorithms and tools for HPO has grown substantially. At the same time, the community still lacks realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows running this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/HPOBench.
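The benchmark suite described in the abstract exposes a small, unified Python interface (a configuration space, a fidelity space, and an objective function). The sketch below illustrates how a single containerized benchmark might be queried; the module path, benchmark class, task id, and fidelity keys are assumptions in the style of the repository's examples and may differ from the current API at https://github.com/automl/HPOBench.

```python
# Hedged usage sketch for HPOBench (https://github.com/automl/HPOBench).
# The module path, benchmark class, task_id and fidelity keys below are
# assumptions; check the repository README for the exact, current API.
from hpobench.container.benchmarks.ml.xgboost_benchmark import XGBoostBenchmark

# Containerized benchmark: the benchmark's dependencies are packaged in a
# container, which is what makes evaluations reproducible across machines.
benchmark = XGBoostBenchmark(task_id=167149)  # task_id is a placeholder

# Sample one hyperparameter configuration from the benchmark's search space.
config = benchmark.get_configuration_space(seed=1).sample_configuration()

# Evaluate it at a reduced fidelity (fewer trees, a fraction of the data).
result = benchmark.objective_function(
    configuration=config,
    fidelity={"n_estimators": 128, "dataset_fraction": 0.5},
    rng=1,
)
print(result["function_value"], result["cost"])  # objective value and evaluation cost
```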
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Eggensperger, K., Müller, P., Mallik, N., Feurer, M., Sass, R., Awad, N., Lindauer, M., & Hutter, F. (2021). HPOBench. In Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track).
Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed
TY - GEN
T1 - HPOBench
T2 - 35th Conference on Neural Information Processing Systems
AU - Eggensperger, Katharina
AU - Müller, Philipp
AU - Mallik, Neeratyoy
AU - Feurer, Matthias
AU - Sass, René
AU - Awad, Noor
AU - Lindauer, Marius
AU - Hutter, Frank
PY - 2021
Y1 - 2021
N2 - To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. Over the last few years, the number of efficient algorithms and tools for HPO has grown substantially. At the same time, the community still lacks realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows running this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/HPOBench.
AB - To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. Over the last few years, the number of efficient algorithms and tools for HPO has grown substantially. At the same time, the community still lacks realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows running this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/HPOBench.
KW - cs.LG
M3 - Conference contribution
BT - Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track)
Y2 - 6 December 2021 through 14 December 2021
ER -