Practitioner Motives to Select Hyperparameter Optimization Methods

Research output: Working paper/Preprint › Preprint

Authors

  • Niklas Hasebrook
  • Felix Morsbach
  • Niclas Kannengießer
  • Marc Zöller
  • Jörg Franke
  • Marius Lindauer
  • Frank Hutter
  • Ali Sunyaev

Research Organisations

External Research Organisations

  • Karlsruhe Institute of Technology (KIT)
  • USU Software AG
  • University of Freiburg

Details

Original language: English
Publication status: E-pub ahead of print - 3 Mar 2022

Abstract

Advanced programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sample efficiency in reproducibly finding optimal hyperparameter values of machine learning (ML) models. Yet, ML practitioners often apply less sample-efficient HPO methods, such as grid search, which often results in under-optimized ML models. As a reason for this behavior, we suspect practitioners choose HPO methods based on individual motives, consisting of contextual factors and individual goals. However, practitioners' motives still need to be clarified, hindering the evaluation of HPO methods for achieving specific goals and the user-centered development of HPO tools. To understand practitioners' motives for using specific HPO methods, we used a mixed-methods approach involving 20 semi-structured interviews and a survey study with 71 ML experts to gather evidence of the external validity of the interview results. By presenting six main goals (e.g., improving model understanding) and 14 contextual factors affecting practitioners' selection of HPO methods (e.g., available computer resources), our study explains why practitioners use HPO methods that seem inappropriate at first glance. This study lays a foundation for designing user-centered and context-adaptive HPO tools and, thus, linking social and technical research on HPO.
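The abstract contrasts exhaustive grid search with sample-efficient Bayesian optimization. The following minimal Python sketch (not taken from the paper) illustrates that contrast, using scikit-learn's GridSearchCV and Optuna's default TPE sampler as a stand-in for a Bayesian-style optimizer; the dataset, model, and search space are illustrative assumptions only.

# Minimal sketch, assuming scikit-learn and Optuna are available.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC
import optuna

X, y = load_breast_cancer(return_X_y=True)

# Grid search: evaluates every combination (5 x 5 = 25 configurations).
grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.01, 0.1, 1, 10, 100],
                "gamma": [1e-4, 1e-3, 1e-2, 1e-1, 1]},
    cv=3,
)
grid.fit(X, y)
print("grid search best:", grid.best_params_, grid.best_score_)

# Bayesian-style optimization (Optuna's default TPE sampler): models the
# objective from past trials and often reaches comparable accuracy with
# far fewer evaluations (here 15 trials instead of 25 grid points).
def objective(trial):
    C = trial.suggest_float("C", 1e-2, 1e2, log=True)
    gamma = trial.suggest_float("gamma", 1e-4, 1e0, log=True)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=15)
print("TPE best:", study.best_params, study.best_value)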

Keywords

    cs.LG

Cite this

Practitioner Motives to Select Hyperparameter Optimization Methods. / Hasebrook, Niklas; Morsbach, Felix; Kannengießer, Niclas et al.
2022.

Research output: Working paper/Preprint › Preprint

Hasebrook, N, Morsbach, F, Kannengießer, N, Zöller, M, Franke, J, Lindauer, M, Hutter, F & Sunyaev, A 2022 'Practitioner Motives to Select Hyperparameter Optimization Methods'. https://doi.org/10.48550/arXiv.2203.01717
Hasebrook, N., Morsbach, F., Kannengießer, N., Zöller, M., Franke, J., Lindauer, M., Hutter, F., & Sunyaev, A. (2022). Practitioner Motives to Select Hyperparameter Optimization Methods. Advance online publication. https://doi.org/10.48550/arXiv.2203.01717
Hasebrook N, Morsbach F, Kannengießer N, Zöller M, Franke J, Lindauer M et al. Practitioner Motives to Select Hyperparameter Optimization Methods. 2022 Mar 3. Epub 2022 Mar 3. doi: 10.48550/arXiv.2203.01717
Hasebrook, Niklas ; Morsbach, Felix ; Kannengießer, Niclas et al. / Practitioner Motives to Select Hyperparameter Optimization Methods. 2022.
BibTeX
@techreport{5fa01769f84f4d56b80a1932c9124aff,
title = "Practitioner Motives to Select Hyperparameter Optimization Methods",
abstract = "Advanced programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sample efficiency in reproducibly finding optimal hyperparameter values of machine learning (ML) models. Yet, ML practitioners often apply less sample-efficient HPO methods, such as grid search, which often results in under-optimized ML models. As a reason for this behavior, we suspect practitioners choose HPO methods based on individual motives, consisting of contextual factors and individual goals. However, practitioners' motives still need to be clarified, hindering the evaluation of HPO methods for achieving specific goals and the user-centered development of HPO tools. To understand practitioners' motives for using specific HPO methods, we used a mixed-methods approach involving 20 semi-structured interviews and a survey study with 71 ML experts to gather evidence of the external validity of the interview results. By presenting six main goals (e.g., improving model understanding) and 14 contextual factors affecting practitioners' selection of HPO methods (e.g., available computer resources), our study explains why practitioners use HPO methods that seem inappropriate at first glance. This study lays a foundation for designing user-centered and context-adaptive HPO tools and, thus, linking social and technical research on HPO.",
keywords = "cs.LG",
author = "Niklas Hasebrook and Felix Morsbach and Niclas Kannengie{\ss}er and Marc Z{\"o}ller and J{\"o}rg Franke and Marius Lindauer and Frank Hutter and Ali Sunyaev",
note = "submitted to JMLR; currently under review",
year = "2022",
month = mar,
day = "3",
doi = "10.48550/arXiv.2203.01717",
language = "English",
type = "WorkingPaper",

}

RIS

TY - UNPB

T1 - Practitioner Motives to Select Hyperparameter Optimization Methods

AU - Hasebrook, Niklas

AU - Morsbach, Felix

AU - Kannengießer, Niclas

AU - Zöller, Marc

AU - Franke, Jörg

AU - Lindauer, Marius

AU - Hutter, Frank

AU - Sunyaev, Ali

N1 - submitted to JMLR; currently under review

PY - 2022/3/3

Y1 - 2022/3/3

N2 - Advanced programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sample efficiency in reproducibly finding optimal hyperparameter values of machine learning (ML) models. Yet, ML practitioners often apply less sample-efficient HPO methods, such as grid search, which often results in under-optimized ML models. As a reason for this behavior, we suspect practitioners choose HPO methods based on individual motives, consisting of contextual factors and individual goals. However, practitioners' motives still need to be clarified, hindering the evaluation of HPO methods for achieving specific goals and the user-centered development of HPO tools. To understand practitioners' motives for using specific HPO methods, we used a mixed-methods approach involving 20 semi-structured interviews and a survey study with 71 ML experts to gather evidence of the external validity of the interview results. By presenting six main goals (e.g., improving model understanding) and 14 contextual factors affecting practitioners' selection of HPO methods (e.g., available computer resources), our study explains why practitioners use HPO methods that seem inappropriate at first glance. This study lays a foundation for designing user-centered and context-adaptive HPO tools and, thus, linking social and technical research on HPO.

AB - Advanced programmatic hyperparameter optimization (HPO) methods, such as Bayesian optimization, have high sample efficiency in reproducibly finding optimal hyperparameter values of machine learning (ML) models. Yet, ML practitioners often apply less sample-efficient HPO methods, such as grid search, which often results in under-optimized ML models. As a reason for this behavior, we suspect practitioners choose HPO methods based on individual motives, consisting of contextual factors and individual goals. However, practitioners' motives still need to be clarified, hindering the evaluation of HPO methods for achieving specific goals and the user-centered development of HPO tools. To understand practitioners' motives for using specific HPO methods, we used a mixed-methods approach involving 20 semi-structured interviews and a survey study with 71 ML experts to gather evidence of the external validity of the interview results. By presenting six main goals (e.g., improving model understanding) and 14 contextual factors affecting practitioners' selection of HPO methods (e.g., available computer resources), our study explains why practitioners use HPO methods that seem inappropriate at first glance. This study lays a foundation for designing user-centered and context-adaptive HPO tools and, thus, linking social and technical research on HPO.

KW - cs.LG

U2 - 10.48550/arXiv.2203.01717

DO - 10.48550/arXiv.2203.01717

M3 - Preprint

BT - Practitioner Motives to Select Hyperparameter Optimization Methods

ER -
