Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution

Publication: Working paper/Preprint


Details

Original language: English
Publication status: Electronically published (E-pub) - 11 June 2022

Abstract

Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are black boxes themselves. This makes it difficult to understand the decision process that led to the selected configuration, reduces trust in HPO, and thus hinders its broad adoption. Here, we study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots. However, if such methods are naively applied to the experimental data of the HPO process in a post-hoc manner, the underlying sampling bias of the optimizer can distort interpretations. We propose a modified HPO method which efficiently balances the search for the global optimum w.r.t. predictive performance and the reliable estimation of IML explanations of an underlying black-box function by coupling Bayesian optimization and Bayesian Algorithm Execution. On benchmark cases of both synthetic objectives and HPO of a neural network, we demonstrate that our method returns more reliable explanations of the underlying black box without a loss of optimization performance.
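The naive post-hoc approach the abstract warns about can be made concrete: a partial dependence estimate averages a model's predictions over the observed values of all other hyperparameters, so if the optimizer's sample is concentrated near the incumbent, that average is taken over a biased marginal. A minimal sketch of the estimator itself (illustrative only; the function name and toy objective are my own, not the paper's code):

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """Estimate the partial dependence of `model` on one feature by
    fixing it to each grid value and averaging predictions over the
    observed values of all remaining features in the sample X."""
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v                 # clamp the feature of interest
        pd_values.append(model(Xv).mean()) # marginalize over the rest
    return np.array(pd_values)

# Toy objective with an interaction, so the estimate depends on the
# sample of the *other* feature -- which is exactly where a biased
# optimizer trace distorts the plot.
f = lambda Z: Z[:, 0] ** 2 * (1 + Z[:, 1])
X = np.array([[0.0, 0.0],
              [0.0, 1.0]])
pd = partial_dependence(f, X, feature=0, grid=[0.0, 1.0, 2.0])
# PDP values: [0.0, 1.5, 6.0]
```

Because of the interaction term, shifting the empirical distribution of the second feature changes these averages; the paper's contribution is to steer sampling so that such estimates stay reliable without sacrificing optimization performance.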

Cite

Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution. / Moosbauer, Julia; Casalicchio, Giuseppe; Lindauer, Marius et al.
2022.


Moosbauer J, Casalicchio G, Lindauer M, Bischl B. Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution. 2022 Jun 11. Epub 2022 Jun 11. doi: 10.48550/arXiv.2206.05447
@techreport{738b79fe27834cd6ba4b98716c851398,
title = "Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution",
abstract = "Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are black boxes themselves. This makes it difficult to understand the decision process that led to the selected configuration, reduces trust in HPO, and thus hinders its broad adoption. Here, we study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots. However, if such methods are naively applied to the experimental data of the HPO process in a post-hoc manner, the underlying sampling bias of the optimizer can distort interpretations. We propose a modified HPO method which efficiently balances the search for the global optimum w.r.t. predictive performance and the reliable estimation of IML explanations of an underlying black-box function by coupling Bayesian optimization and Bayesian Algorithm Execution. On benchmark cases of both synthetic objectives and HPO of a neural network, we demonstrate that our method returns more reliable explanations of the underlying black box without a loss of optimization performance.",
keywords = "cs.LG, stat.ML",
author = "Julia Moosbauer and Giuseppe Casalicchio and Marius Lindauer and Bernd Bischl",
year = "2022",
month = jun,
day = "11",
doi = "10.48550/arXiv.2206.05447",
language = "English",
type = "WorkingPaper",
}


TY - UNPB

T1 - Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution

AU - Moosbauer, Julia

AU - Casalicchio, Giuseppe

AU - Lindauer, Marius

AU - Bischl, Bernd

PY - 2022/6/11

Y1 - 2022/6/11

N2 - Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are black boxes themselves. This makes it difficult to understand the decision process that led to the selected configuration, reduces trust in HPO, and thus hinders its broad adoption. Here, we study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots. However, if such methods are naively applied to the experimental data of the HPO process in a post-hoc manner, the underlying sampling bias of the optimizer can distort interpretations. We propose a modified HPO method which efficiently balances the search for the global optimum w.r.t. predictive performance and the reliable estimation of IML explanations of an underlying black-box function by coupling Bayesian optimization and Bayesian Algorithm Execution. On benchmark cases of both synthetic objectives and HPO of a neural network, we demonstrate that our method returns more reliable explanations of the underlying black box without a loss of optimization performance.

AB - Despite all the benefits of automated hyperparameter optimization (HPO), most modern HPO algorithms are black boxes themselves. This makes it difficult to understand the decision process that led to the selected configuration, reduces trust in HPO, and thus hinders its broad adoption. Here, we study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots. However, if such methods are naively applied to the experimental data of the HPO process in a post-hoc manner, the underlying sampling bias of the optimizer can distort interpretations. We propose a modified HPO method which efficiently balances the search for the global optimum w.r.t. predictive performance and the reliable estimation of IML explanations of an underlying black-box function by coupling Bayesian optimization and Bayesian Algorithm Execution. On benchmark cases of both synthetic objectives and HPO of a neural network, we demonstrate that our method returns more reliable explanations of the underlying black box without a loss of optimization performance.

KW - cs.LG

KW - stat.ML

U2 - 10.48550/arXiv.2206.05447

DO - 10.48550/arXiv.2206.05447

M3 - Preprint

BT - Enhancing Explainability of Hyperparameter Optimization via Bayesian Algorithm Execution

ER -
