Symbolic Explanations for Hyperparameter Optimization

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Sarah Segel
  • Helena Graf
  • Alexander Tornede
  • Bernd Bischl
  • Marius Lindauer

Research Organisations

External Research Organisations

  • Munich Center for Machine Learning (MCML)
  • Ludwig-Maximilians-Universität München (LMU)

Details

Original language: English
Title of host publication: AutoML Conference 2023
Publication status: E-pub ahead of print - 16 May 2023

Abstract

Hyperparameter optimization (HPO) methods can determine well-performing hyperparameter configurations efficiently but often lack insights and transparency. We propose to apply symbolic regression to meta-data collected with Bayesian optimization (BO) during HPO. In contrast to prior approaches explaining the effects of hyperparameters on model performance, symbolic regression allows for obtaining explicit formulas quantifying the relation between hyperparameter values and model performance. Overall, our approach aims to make the HPO process more explainable and human-centered, addressing the needs of multiple user groups: First, providing insights into the HPO process can support data scientists and machine learning practitioners in their decisions when using and interacting with HPO tools. Second, obtaining explicit formulas and inspecting their properties could help researchers better understand the HPO loss landscape. In an experimental evaluation, we find that naively applying symbolic regression directly to meta-data collected during HPO is affected by the sampling bias introduced by BO. However, the true underlying loss landscape can be approximated by fitting the symbolic regression on the surrogate model trained during BO. By penalizing longer formulas, symbolic regression furthermore allows the user to decide how to balance the accuracy and explainability of the resulting formulas.
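
To make the core idea concrete, the following is a minimal, hypothetical sketch of the procedure the abstract describes: rather than fitting symbolic regression directly to the biased configurations sampled by BO, a surrogate model is fitted to those observations, and symbolic regression is then fitted to the surrogate's predictions on an unbiased grid. The choice of gplearn's SymbolicRegressor, a scikit-learn Gaussian process as a stand-in for the BO surrogate, and the toy one-dimensional loss landscape are all illustrative assumptions, not the authors' actual setup; the parsimony_coefficient parameter plays the role of the formula-length penalty mentioned in the abstract.

# Minimal sketch (assumptions: gplearn for symbolic regression, a GP as a
# stand-in for the BO surrogate, a toy 1-D hyperparameter "loss landscape").
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from gplearn.genetic import SymbolicRegressor

def validation_loss(x):
    # Toy ground-truth loss over a single hyperparameter x in [0, 1].
    return (x - 0.3) ** 2 + 0.05 * np.sin(10 * x)

rng = np.random.default_rng(0)

# Configurations as BO would sample them: clustered near the optimum,
# i.e. the sampling bias described in the abstract.
X_bo = np.clip(rng.normal(loc=0.3, scale=0.1, size=(40, 1)), 0.0, 1.0)
y_bo = validation_loss(X_bo).ravel()

# Surrogate model fitted to the BO observations.
surrogate = GaussianProcessRegressor().fit(X_bo, y_bo)

# Query the surrogate on an unbiased grid and fit symbolic regression to that,
# instead of fitting it directly to the biased (X_bo, y_bo) meta-data.
X_grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y_grid = surrogate.predict(X_grid)

# parsimony_coefficient penalizes longer formulas: raising it trades accuracy
# for shorter, more explainable expressions.
sr = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul", "sin"),
    parsimony_coefficient=0.01,
    random_state=0,
)
sr.fit(X_grid, y_grid)
print(sr._program)  # explicit formula relating the hyperparameter to the loss

Fitting a second SymbolicRegressor directly to (X_bo, y_bo) and comparing the two formulas on X_grid would reproduce the bias effect the abstract reports.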

Keywords

    Automated Machine Learning, Hyperparameter Optimization, Interpretable Machine Learning, Symbolic Regression

Cite this

Symbolic Explanations for Hyperparameter Optimization. / Segel, Sarah; Graf, Helena; Tornede, Alexander et al.
AutoML Conference 2023. 2023.

Segel, S., Graf, H., Tornede, A., Bischl, B., & Lindauer, M. (2023). Symbolic Explanations for Hyperparameter Optimization. In AutoML Conference 2023. Advance online publication. https://openreview.net/forum?id=JQwAc91sg_x
Segel S, Graf H, Tornede A, Bischl B, Lindauer M. Symbolic Explanations for Hyperparameter Optimization. In AutoML Conference 2023. 2023. Epub 2023 May 16.
Segel, Sarah ; Graf, Helena ; Tornede, Alexander et al. / Symbolic Explanations for Hyperparameter Optimization. AutoML Conference 2023. 2023.
BibTeX
@inproceedings{150eba5f41004472995aba5e7205ad6e,
title = "Symbolic Explanations for Hyperparameter Optimization",
abstract = "Hyperparameter optimization (HPO) methods can determine well-performing hyperparameter configurations efficiently but often lack insights and transparency. We propose to apply symbolic regression to meta-data collected with Bayesian optimization (BO) during HPO. In contrast to prior approaches explaining the effects of hyperparameters on model performance, symbolic regression allows for obtaining explicit formulas quantifying the relation between hyperparameter values and model performance. Overall, our approach aims to make the HPO process more explainable and human-centered, addressing the needs of multiple user groups: First, providing insights into the HPO process can support data scientists and machine learning practitioners in their decisions when using and interacting with HPO tools. Second, obtaining explicit formulas and inspecting their properties could help researchers better understand the HPO loss landscape. In an experimental evaluation, we find that naively applying symbolic regression directly to meta-data collected during HPO is affected by the sampling bias introduced by BO. However, the true underlying loss landscape can be approximated by fitting the symbolic regression on the surrogate model trained during BO. By penalizing longer formulas, symbolic regression furthermore allows the user to decide how to balance the accuracy and explainability of the resulting formulas.",
keywords = "Automated Machine Learning, Hyperparameter Optimization, Interpretable Machine Learning, Symbolic Regression",
author = "Sarah Segel and Helena Graf and Alexander Tornede and Bernd Bischl and Marius Lindauer",
note = "Acknowledgements: Funded by the European Union (ERC, “ixAutoML”, grant no. 101041029). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. The authors gratefully acknowledge the computing time provided to them on the high-performance computers Noctua2 at the NHR Center PC2 under the project hpc-prf-intexml. These are funded by the Federal Ministry of Education and Research and the state governments participating on the basis of the resolutions of the GWK for national high-performance computing at universities (www.nhr-verein.de/unsere-partner).",
year = "2023",
month = may,
day = "16",
language = "English",
booktitle = "AutoML Conference 2023",

}

RIS

TY - GEN

T1 - Symbolic Explanations for Hyperparameter Optimization

AU - Segel, Sarah

AU - Graf, Helena

AU - Tornede, Alexander

AU - Bischl, Bernd

AU - Lindauer, Marius

N1 - Acknowledgements: Funded by the European Union (ERC, “ixAutoML”, grant no. 101041029). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. The authors gratefully acknowledge the computing time provided to them on the high-performance computers Noctua2 at the NHR Center PC2 under the project hpc-prf-intexml. These are funded by the Federal Ministry of Education and Research and the state governments participating on the basis of the resolutions of the GWK for national high-performance computing at universities (www.nhr-verein.de/unsere-partner).

PY - 2023/5/16

Y1 - 2023/5/16

N2 - Hyperparameter optimization (HPO) methods can determine well-performing hyperparameter configurations efficiently but often lack insights and transparency. We propose to apply symbolic regression to meta-data collected with Bayesian optimization (BO) during HPO. In contrast to prior approaches explaining the effects of hyperparameters on model performance, symbolic regression allows for obtaining explicit formulas quantifying the relation between hyperparameter values and model performance. Overall, our approach aims to make the HPO process more explainable and human-centered, addressing the needs of multiple user groups: First, providing insights into the HPO process can support data scientists and machine learning practitioners in their decisions when using and interacting with HPO tools. Second, obtaining explicit formulas and inspecting their properties could help researchers better understand the HPO loss landscape. In an experimental evaluation, we find that naively applying symbolic regression directly to meta-data collected during HPO is affected by the sampling bias introduced by BO. However, the true underlying loss landscape can be approximated by fitting the symbolic regression on the surrogate model trained during BO. By penalizing longer formulas, symbolic regression furthermore allows the user to decide how to balance the accuracy and explainability of the resulting formulas.

AB - Hyperparameter optimization (HPO) methods can determine well-performing hyperparameter configurations efficiently but often lack insights and transparency. We propose to apply symbolic regression to meta-data collected with Bayesian optimization (BO) during HPO. In contrast to prior approaches explaining the effects of hyperparameters on model performance, symbolic regression allows for obtaining explicit formulas quantifying the relation between hyperparameter values and model performance. Overall, our approach aims to make the HPO process more explainable and human-centered, addressing the needs of multiple user groups: First, providing insights into the HPO process can support data scientists and machine learning practitioners in their decisions when using and interacting with HPO tools. Second, obtaining explicit formulas and inspecting their properties could help researchers better understand the HPO loss landscape. In an experimental evaluation, we find that naively applying symbolic regression directly to meta-data collected during HPO is affected by the sampling bias introduced by BO. However, the true underlying loss landscape can be approximated by fitting the symbolic regression on the surrogate model trained during BO. By penalizing longer formulas, symbolic regression furthermore allows the user to decide how to balance the accuracy and explainability of the resulting formulas.

KW - Automated Machine Learning

KW - Hyperparameter Optimization

KW - Interpretable Machine Learning

KW - Symbolic Regression

M3 - Conference contribution

BT - AutoML Conference 2023

ER -
