Details
Original language | English |
---|---|
Number of pages | 6 |
Publication status | E-pub ahead of print - 20 Jul 2023 |
Event | European Workshop on Reinforcement Learning 2023, Brussels, 13 Sept 2023 → 16 Sept 2023, https://ewrl.wordpress.com/ewrl16-2023/ |
Workshop
Workshop | European Workshop on Reinforcement Learning 2023 |
---|---|
City | Brussels |
Period | 13 Sept 2023 → 16 Sept 2023 |
Internet address | https://ewrl.wordpress.com/ewrl16-2023/ |
Abstract

Although Reinforcement Learning (RL) has been shown to produce impressive results, its use is limited by the impact of its hyperparameters on performance, which often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. Motivated by existing AutoRL approaches that adjust hyperparameter configurations dynamically, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes vary strongly over time across representative algorithms from the RL literature (DQN, PPO, and SAC) in different kinds of environments (Cartpole, Bipedal Walker, and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for gaining further insights into AutoRL problems through landscape analysis.
Cite this
- Standard
- Harvard
- Apa
- Vancouver
- BibTeX
- RIS
Mohan, A., Benjamins, C., Wienecke, K., Dockhorn, A., & Lindauer, M. (2023). Extended Abstract. Abstract from European Workshop on Reinforcement Learning 2023, Brussels.
Research output: Contribution to conference › Abstract › Research › peer review
TY - CONF
T1 - Extended Abstract
T2 - European Workshop on Reinforcement Learning 2023
AU - Mohan, Aditya
AU - Benjamins, Carolin
AU - Wienecke, Konrad
AU - Dockhorn, Alexander
AU - Lindauer, Marius
PY - 2023/7/20
Y1 - 2023/7/20
N2 - Although Reinforcement Learning (RL) has been shown to produce impressive results, its use is limited by the impact of its hyperparameters on performance, which often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. Motivated by existing AutoRL approaches that adjust hyperparameter configurations dynamically, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes vary strongly over time across representative algorithms from the RL literature (DQN, PPO, and SAC) in different kinds of environments (Cartpole, Bipedal Walker, and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for gaining further insights into AutoRL problems through landscape analysis.
AB - Although Reinforcement Learning (RL) has been shown to produce impressive results, its use is limited by the impact of its hyperparameters on performance, which often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. Motivated by existing AutoRL approaches that adjust hyperparameter configurations dynamically, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes vary strongly over time across representative algorithms from the RL literature (DQN, PPO, and SAC) in different kinds of environments (Cartpole, Bipedal Walker, and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for gaining further insights into AutoRL problems through landscape analysis.
M3 - Abstract
Y2 - 13 September 2023 through 16 September 2023
ER -
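The core idea described in the abstract, evaluating a fixed set of hyperparameter configurations at several checkpoints during training to obtain one landscape per point in time, can be illustrated with a toy sketch. Everything below is hypothetical: `surrogate_return` is a stand-in for actually training and evaluating an RL agent (DQN, PPO, or SAC in the paper), and it simply simulates an optimum that drifts as training progresses.

```python
# Illustrative sketch only -- a toy surrogate replaces real RL training.
# All function names and values are assumptions, not the authors' method.

def surrogate_return(lr, step_frac):
    # Toy model of a time-varying landscape: the best learning rate
    # drifts downward as training progresses (step_frac in [0, 1]).
    best_lr = 1e-3 * (1.0 - 0.5 * step_frac)
    return -(lr - best_lr) ** 2

def build_landscapes(configs, checkpoints):
    """Evaluate every configuration at every checkpoint, yielding one
    landscape (config -> return) per point in training time."""
    return {t: {lr: surrogate_return(lr, t) for lr in configs}
            for t in checkpoints}

configs = [i * 1e-4 for i in range(1, 21)]   # learning rates 1e-4 .. 2e-3
checkpoints = [0.0, 0.5, 1.0]                # fractions of training budget
landscapes = build_landscapes(configs, checkpoints)

# If the argmax configuration differs between checkpoints, the landscape
# has shifted over time -- the kind of evidence the abstract cites in
# support of adjusting hyperparameters dynamically during training.
best_per_t = {t: max(L, key=L.get) for t, L in landscapes.items()}
```

In this sketch the best configuration at the start of training differs from the best one at the end, which is the qualitative phenomenon the paper documents empirically across algorithms and environments.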