HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO

Publication: Contribution to book/report/anthology/conference proceedings › Paper in conference proceedings › Research › Peer-reviewed

Authors

  • Katharina Eggensperger
  • Philipp Müller
  • Neeratyoy Mallik
  • Matthias Feurer
  • René Sass
  • Noor Awad
  • Marius Lindauer
  • Frank Hutter

External organisations

  • Albert-Ludwigs-Universität Freiburg
  • Bosch Center for Artificial Intelligence (BCAI)

Details

Original language: English
Title of host publication: Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track)
Number of pages: 36
Publication status: Electronically published (e-pub) - 2021
Event: 35th Conference on Neural Information Processing Systems: Track on Datasets and Benchmarks - Virtual-only
Duration: 6 Dec 2021 - 14 Dec 2021

Abstract

To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. Over the last years, the number of efficient algorithms and tools for HPO has grown substantially, yet the community still lacks realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench makes it possible to run this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/HPOBench.
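
The benchmarks share a common interface: sample a configuration from the benchmark's search space and evaluate it at a chosen fidelity, optionally through a containerized benchmark class for reproducibility. The following Python snippet is a minimal sketch of that workflow; the module path, class name, and fidelity key follow the repository README and may differ between HPOBench versions, so treat them as assumptions rather than a fixed API.

# Minimal sketch of querying an HPOBench benchmark at a reduced fidelity
# (names follow the HPOBench README; exact paths may vary by version).
from hpobench.container.benchmarks.nas.tabular_benchmarks import SliceLocalizationBenchmark

# The containerized variant fetches and runs the benchmark's container on first use,
# isolating its dependencies from the local environment for reproducibility.
benchmark = SliceLocalizationBenchmark(rng=1)

# Sample a configuration from the benchmark's hyperparameter search space.
config = benchmark.get_configuration_space(seed=1).sample_configuration()

# Evaluate the configuration at a smaller-than-maximum budget (a multi-fidelity query).
result = benchmark.objective_function(configuration=config,
                                      fidelity={"budget": 50},
                                      rng=1)

# Each benchmark returns a dict containing at least the objective value and the cost.
print(result["function_value"], result["cost"])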

Cite this

HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO. / Eggensperger, Katharina; Müller, Philipp; Mallik, Neeratyoy et al.
Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track). 2021.


Eggensperger, K, Müller, P, Mallik, N, Feurer, M, Sass, R, Awad, N, Lindauer, M & Hutter, F 2021, HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO. in Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track). 35th Conference on Neural Information Processing Systems, 6 Dec. 2021. <https://arxiv.org/abs/2109.06716>
Eggensperger, K., Müller, P., Mallik, N., Feurer, M., Sass, R., Awad, N., Lindauer, M., & Hutter, F. (2021). HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO. In Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track). Advance online publication. https://arxiv.org/abs/2109.06716
Eggensperger K, Müller P, Mallik N, Feurer M, Sass R, Awad N et al. HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO. in Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track). 2021 Epub 2021.
Eggensperger, Katharina ; Müller, Philipp ; Mallik, Neeratyoy et al. / HPOBench : A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO. Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track). 2021.
BibTeX
@inproceedings{5cd6869475c141f3bdd3cc0c117d1c6f,
title = "HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO",
abstract = "To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. Over the last years, the number of efficient algorithms and tools for HPO grew substantially. At the same time, the community is still lacking realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows to run this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/HPOBench. ",
keywords = "cs.LG",
author = "Katharina Eggensperger and Philipp M{\"u}ller and Neeratyoy Mallik and Matthias Feurer and Ren{\'e} Sass and Noor Awad and Marius Lindauer and Frank Hutter",
year = "2021",
language = "English",
booktitle = "Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track)",
note = "35th Conference on Neural Information Processing Systems : Track on Datasets and Benchmarks, NeurIPS 2021 ; Conference date: 06-12-2021 Through 14-12-2021",

}

RIS

TY - GEN

T1 - HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO

T2 - 35th Conference on Neural Information Processing Systems

AU - Eggensperger, Katharina

AU - Müller, Philipp

AU - Mallik, Neeratyoy

AU - Feurer, Matthias

AU - Sass, René

AU - Awad, Noor

AU - Lindauer, Marius

AU - Hutter, Frank

PY - 2021

Y1 - 2021

N2 - To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. Over the last years, the number of efficient algorithms and tools for HPO grew substantially. At the same time, the community is still lacking realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows to run this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/HPOBench.

AB - To achieve peak predictive performance, hyperparameter optimization (HPO) is a crucial component of machine learning and its applications. Over the last years, the number of efficient algorithms and tools for HPO grew substantially. At the same time, the community is still lacking realistic, diverse, computationally cheap, and standardized benchmarks. This is especially the case for multi-fidelity HPO methods. To close this gap, we propose HPOBench, which includes 7 existing and 5 new benchmark families, with a total of more than 100 multi-fidelity benchmark problems. HPOBench allows to run this extendable set of multi-fidelity HPO benchmarks in a reproducible way by isolating and packaging the individual benchmarks in containers. It also provides surrogate and tabular benchmarks for computationally affordable yet statistically sound evaluations. To demonstrate HPOBench's broad compatibility with various optimization tools, as well as its usefulness, we conduct an exemplary large-scale study evaluating 13 optimizers from 6 optimization tools. We provide HPOBench here: https://github.com/automl/HPOBench.

KW - cs.LG

M3 - Conference contribution

BT - Proceedings of the international conference on Neural Information Processing Systems (NeurIPS) (Datasets and Benchmarks Track)

Y2 - 6 December 2021 through 14 December 2021

ER -
