Details
| | |
| --- | --- |
| Original language | English |
| Publication status | E-pub ahead of print - 22 Aug 2019 |
| Externally published | Yes |
Abstract
The performance of many algorithms in the fields of hard combinatorial problem solving, machine learning or AI in general depends on tuned hyperparameter configurations. Automated methods have been proposed to alleviate users from the tedious and error-prone task of manually searching for performance-optimized configurations across a set of problem instances. However there is still a lot of untapped potential through adjusting an algorithm's hyperparameters online since different hyperparameters are potentially optimal at different stages of the algorithm. We formulate the problem of adjusting an algorithm's hyperparameters for a given instance on the fly as a contextual MDP, making reinforcement learning (RL) the prime candidate to solve the resulting algorithm control problem in a data-driven way. Furthermore, inspired by applications of algorithm configuration, we introduce new white-box benchmarks suitable to study algorithm control. We show that on short sequences, algorithm configuration is a valid choice, but that with increasing sequence length a black-box view on the problem quickly becomes infeasible and RL performs better.
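The contextual-MDP framing described in the abstract can be illustrated with a minimal sketch. The toy environment below is an assumption for illustration only (the class name `ToyAlgorithmControlEnv`, the per-step matching reward, and the random instance sampling are not the benchmarks introduced in the paper): each episode samples a problem instance (the context), and at every step the controller picks a hyperparameter value and receives a reward that depends on both the instance and the current stage of the run.

```python
import numpy as np


class ToyAlgorithmControlEnv:
    """Toy contextual MDP for on-the-fly hyperparameter control (illustrative only).

    Each episode corresponds to one problem instance (the context): a random
    per-step target hyperparameter value. The controller is rewarded for
    matching the target at the current step, so different actions are optimal
    at different stages of the run.
    """

    def __init__(self, n_steps=10, n_actions=2, seed=0):
        self.n_steps = n_steps        # length of one algorithm run
        self.n_actions = n_actions    # number of discrete hyperparameter values
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Sample a new instance: an instance-specific target schedule.
        self.target = self.rng.integers(self.n_actions, size=self.n_steps)
        self.t = 0
        return self._obs()

    def _obs(self):
        # Observation = instance descriptor (context) plus the current step.
        return {"instance": self.target.copy(), "step": self.t}

    def step(self, action):
        # Reward 1 if the chosen hyperparameter matches the target for this step.
        reward = float(action == self.target[self.t])
        self.t += 1
        done = self.t >= self.n_steps
        return (self._obs() if not done else None), reward, done


# Usage: a random controller; an RL agent would instead learn to read the
# context and pick the stage-dependent optimal action.
env = ToyAlgorithmControlEnv()
obs, ret = env.reset(), 0.0
for _ in range(env.n_steps):
    obs, reward, done = env.step(np.random.randint(env.n_actions))
    ret += reward
print("episode return:", ret)
```

In this sketch, an algorithm-configuration approach would commit to a single action (or fixed schedule) for the entire episode, which corresponds to the black-box view that, as the abstract notes, becomes infeasible as sequence length grows.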
Keywords
- cs.LG, cs.AI, cs.SY, eess.SY, stat.ML
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Biedenkapp, A., Bozkurt, H. F., Hutter, F., & Lindauer, M. (2019). Towards White-box Benchmarks for Algorithm Control.
Research output: Working paper/Preprint › Preprint
TY - UNPB
T1 - Towards White-box Benchmarks for Algorithm Control
AU - Biedenkapp, André
AU - Bozkurt, H. Furkan
AU - Hutter, Frank
AU - Lindauer, Marius
N1 - 8 pages, 9 figures
PY - 2019/8/22
Y1 - 2019/8/22
N2 - The performance of many algorithms in the fields of hard combinatorial problem solving, machine learning or AI in general depends on tuned hyperparameter configurations. Automated methods have been proposed to alleviate users from the tedious and error-prone task of manually searching for performance-optimized configurations across a set of problem instances. However there is still a lot of untapped potential through adjusting an algorithm's hyperparameters online since different hyperparameters are potentially optimal at different stages of the algorithm. We formulate the problem of adjusting an algorithm's hyperparameters for a given instance on the fly as a contextual MDP, making reinforcement learning (RL) the prime candidate to solve the resulting algorithm control problem in a data-driven way. Furthermore, inspired by applications of algorithm configuration, we introduce new white-box benchmarks suitable to study algorithm control. We show that on short sequences, algorithm configuration is a valid choice, but that with increasing sequence length a black-box view on the problem quickly becomes infeasible and RL performs better.
AB - The performance of many algorithms in the fields of hard combinatorial problem solving, machine learning or AI in general depends on tuned hyperparameter configurations. Automated methods have been proposed to alleviate users from the tedious and error-prone task of manually searching for performance-optimized configurations across a set of problem instances. However there is still a lot of untapped potential through adjusting an algorithm's hyperparameters online since different hyperparameters are potentially optimal at different stages of the algorithm. We formulate the problem of adjusting an algorithm's hyperparameters for a given instance on the fly as a contextual MDP, making reinforcement learning (RL) the prime candidate to solve the resulting algorithm control problem in a data-driven way. Furthermore, inspired by applications of algorithm configuration, we introduce new white-box benchmarks suitable to study algorithm control. We show that on short sequences, algorithm configuration is a valid choice, but that with increasing sequence length a black-box view on the problem quickly becomes infeasible and RL performs better.
KW - cs.LG
KW - cs.AI
KW - cs.SY
KW - eess.SY
KW - stat.ML
M3 - Preprint
BT - Towards White-box Benchmarks for Algorithm Control
ER -