AutoRL Hyperparameter Landscapes

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research


Details

Original language: English
Title of host publication: Second International Conference on Automated Machine Learning
Publication status: E-pub ahead of print - 20 Jul 2023

Abstract

Although Reinforcement Learning (RL) has been shown to be capable of producing impressive results, its use is limited by the impact of its hyperparameters on performance, which often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. Since existing AutoRL approaches dynamically adjust hyperparameter configurations, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from the RL literature (DQN and SAC) in different kinds of environments (Cartpole and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential of landscape analyses to yield further insights into AutoRL problems.
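
To make the landscape-building idea concrete, below is a minimal, hypothetical Python sketch (not the authors' actual protocol): it evaluates every configuration in a small hyperparameter grid at several training checkpoints, yielding one performance landscape per checkpoint whose changes over time can then be compared. The grid values, checkpoint steps, and the train_and_evaluate stub are all illustrative assumptions.

    # Hypothetical sketch: approximate a time-varying hyperparameter landscape
    # by scoring a grid of configurations at several training checkpoints.
    import itertools
    import random

    # Placeholder grid; the paper studies DQN and SAC hyperparameters, but
    # these particular names and values are illustrative assumptions.
    GRID = {
        "learning_rate": [1e-4, 3e-4, 1e-3],
        "gamma": [0.95, 0.99, 0.999],
    }
    CHECKPOINTS = [10_000, 50_000, 100_000]  # training steps per snapshot

    def train_and_evaluate(config, steps, seed):
        """Stub standing in for training an RL agent with `config` for
        `steps` environment steps and returning its mean evaluation return."""
        random.seed(hash((tuple(sorted(config.items())), steps, seed)))
        return random.uniform(0.0, 500.0)  # stand-in for real performance

    def build_landscapes(grid, checkpoints, seeds=(0, 1, 2)):
        """One (config -> performance) mapping per checkpoint; comparing the
        mappings across checkpoints shows how the landscape shifts over time."""
        keys = sorted(grid)
        landscapes = {t: {} for t in checkpoints}
        for values in itertools.product(*(grid[k] for k in keys)):
            config = dict(zip(keys, values))
            for t in checkpoints:
                scores = [train_and_evaluate(config, t, s) for s in seeds]
                landscapes[t][values] = sum(scores) / len(scores)
        return landscapes

    if __name__ == "__main__":
        for t, surface in build_landscapes(GRID, CHECKPOINTS).items():
            best = max(surface, key=surface.get)
            print(f"steps={t}: best config {best} -> {surface[best]:.1f}")

If the best configuration (or the shape of the surface around it) differs markedly between checkpoints, that is the kind of evidence the paper reports for dynamically adjusting hyperparameters during training.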

Keywords

    Reinforcement learning, AutoML, Hyperparameter optimization

Cite this

AutoRL Hyperparameter Landscapes. / Mohan, Aditya; Benjamins, Carolin; Wienecke, Konrad et al.
Second International Conference on Automated Machine Learning. 2023.

Mohan, A, Benjamins, C, Wienecke, K, Dockhorn, A & Lindauer, M 2023, AutoRL Hyperparameter Landscapes. in Second International Conference on Automated Machine Learning. https://doi.org/10.48550/arXiv.2304.02396
Mohan, A., Benjamins, C., Wienecke, K., Dockhorn, A., & Lindauer, M. (2023). AutoRL Hyperparameter Landscapes. In Second International Conference on Automated Machine Learning. Advance online publication. https://doi.org/10.48550/arXiv.2304.02396
Mohan A, Benjamins C, Wienecke K, Dockhorn A, Lindauer M. AutoRL Hyperparameter Landscapes. In Second International Conference on Automated Machine Learning. 2023. Epub 2023 Jul 20. doi: 10.48550/arXiv.2304.02396
Mohan, Aditya ; Benjamins, Carolin ; Wienecke, Konrad et al. / AutoRL Hyperparameter Landscapes. Second International Conference on Automated Machine Learning. 2023.
BibTeX
@inproceedings{13e8453dfc8c46b58ef9c3b318d5952a,
title = "AutoRL Hyperparameter Landscapes",
abstract = "Although Reinforcement Learning (RL) has shown to be capable of producing impressive results, its use is limited by the impact of its hyperparameters on performance. This often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. In view of existing AutoRL approaches dynamically adjusting hyperparameter configurations, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from RL literature (DQN and SAC) in different kinds of environments (Cartpole and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for more insights on AutoRL problems that can be gained through landscape analyses.",
keywords = "Reinforcement learning, AutoML, Hyperparameter optimization",
author = "Aditya Mohan and Carolin Benjamins and Konrad Wienecke and Alexander Dockhorn and Marius Lindauer",
year = "2023",
month = jul,
day = "20",
doi = "10.48550/arXiv.2304.02396",
language = "English",
booktitle = "Second International Conference on Automated Machine Learning",

}

RIS

TY - GEN

T1 - AutoRL Hyperparameter Landscapes

AU - Mohan, Aditya

AU - Benjamins, Carolin

AU - Wienecke, Konrad

AU - Dockhorn, Alexander

AU - Lindauer, Marius

PY - 2023/7/20

Y1 - 2023/7/20

N2 - Although Reinforcement Learning (RL) has been shown to be capable of producing impressive results, its use is limited by the impact of its hyperparameters on performance, which often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. Since existing AutoRL approaches dynamically adjust hyperparameter configurations, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from the RL literature (DQN and SAC) in different kinds of environments (Cartpole and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential of landscape analyses to yield further insights into AutoRL problems.

AB - Although Reinforcement Learning (RL) has been shown to be capable of producing impressive results, its use is limited by the impact of its hyperparameters on performance, which often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. Since existing AutoRL approaches dynamically adjust hyperparameter configurations, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from the RL literature (DQN and SAC) in different kinds of environments (Cartpole and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential of landscape analyses to yield further insights into AutoRL problems.

KW - Reinforcement learning

KW - AutoML

KW - Hyperparameter optimization

U2 - 10.48550/arXiv.2304.02396

DO - 10.48550/arXiv.2304.02396

M3 - Conference contribution

BT - Second International Conference on Automated Machine Learning

ER -
