CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning

Publication: Contribution to book/report/anthology/conference proceedings › Paper in conference proceedings › Research › Peer-reviewed

Authors

  • Carolin Benjamins
  • Theresa Eimer
  • Frederik Schubert
  • André Biedenkapp
  • Bodo Rosenhahn
  • Frank Hutter
  • Marius Lindauer

External organisations

  • Albert-Ludwigs-Universität Freiburg
  • Bosch Center for Artificial Intelligence (BCAI)

Details

Original language: English
Title of the host publication: Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021
Number of pages: 20
Publication status: Electronically published (e-pub) - 5 Oct 2021

Abstract

While Reinforcement Learning has made great strides towards solving ever more complicated tasks, many algorithms are still brittle to even slight changes in their environment. This is a limiting factor for real-world applications of RL. Although the research community continuously aims at improving both robustness and generalization of RL algorithms, unfortunately it still lacks an open-source set of well-defined benchmark problems based on a consistent theoretical framework, which allows comparing different approaches in a fair, reliable and reproducible way. To fill this gap, we propose CARL, a collection of well-known RL environments extended to contextual RL problems to study generalization. We show the urgent need for such benchmarks by demonstrating that even simple toy environments become challenging for commonly used approaches if different contextual instances of the task have to be considered. Furthermore, CARL allows us to provide first evidence that disentangling representation learning of the states from policy learning with the context facilitates better generalization. By providing variations of diverse benchmarks from classic control, physical simulations, games and a real-world application of RNA design, CARL will allow the community to derive many more such insights on a solid empirical foundation.
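
To make the setting concrete: in contextual RL, a context is an assignment to an environment's free parameters, and a policy is evaluated across a whole set of such instances rather than on a single fixed environment. The following minimal sketch illustrates this idea with Gymnasium's standard CartPole rather than with CARL's own interface; the attributes gravity and force_mag are genuine CartPoleEnv physics parameters, while the context dictionary and the per-episode sampling loop are illustrative assumptions, not code from the paper.

# Illustrative contextual-RL loop (not CARL's API): physics parameters are
# resampled per episode, so one policy must cope with a family of dynamics.
import random
import gymnasium as gym

# Each context instance assigns values to real CartPoleEnv attributes.
contexts = {
    0: {"gravity": 9.8, "force_mag": 10.0},   # default CartPole physics
    1: {"gravity": 14.0, "force_mag": 10.0},  # stronger gravity
    2: {"gravity": 9.8, "force_mag": 5.0},    # weaker cart actuation
}

env = gym.make("CartPole-v1")
for episode in range(3):
    context = random.choice(list(contexts.values()))
    for name, value in context.items():
        setattr(env.unwrapped, name, value)  # apply context before reset
    obs, info = env.reset()
    terminated = truncated = False
    while not (terminated or truncated):
        action = env.action_space.sample()  # stand-in for a learned policy
        obs, reward, terminated, truncated, info = env.step(action)
env.close()

CARL packages exactly this kind of parameter variation behind a uniform interface for the benchmark families named in the abstract (classic control, physical simulations, games, RNA design), so that generalization across contexts can be measured consistently.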

Cite

CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning. / Benjamins, Carolin; Eimer, Theresa; Schubert, Frederik et al.
Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021. 2021.


Benjamins, C, Eimer, T, Schubert, F, Biedenkapp, A, Rosenhahn, B, Hutter, F & Lindauer, M 2021, CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning. in Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021. <https://arxiv.org/abs/2110.02102>
Benjamins, C., Eimer, T., Schubert, F., Biedenkapp, A., Rosenhahn, B., Hutter, F., & Lindauer, M. (2021). CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning. In Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021. Advance online publication. https://arxiv.org/abs/2110.02102
Benjamins C, Eimer T, Schubert F, Biedenkapp A, Rosenhahn B, Hutter F et al. CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning. In Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021. 2021. Epub 2021 Oct 5.
Benjamins, Carolin ; Eimer, Theresa ; Schubert, Frederik et al. / CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning. Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021. 2021.
BibTeX
@inproceedings{f012f5a3694f4976934f260faba6d7f7,
title = "CARL: A Benchmark for Contextual and Adaptive Reinforcement Learning",
abstract = "While Reinforcement Learning has made great strides towards solving ever more complicated tasks, many algorithms are still brittle to even slight changes in their environment. This is a limiting factor for real-world applications of RL. Although the research community continuously aims at improving both robustness and generalization of RL algorithms, unfortunately it still lacks an open-source set of well-defined benchmark problems based on a consistent theoretical framework, which allows comparing different approaches in a fair, reliable and reproducible way. To fill this gap, we propose CARL, a collection of well-known RL environments extended to contextual RL problems to study generalization. We show the urgent need for such benchmarks by demonstrating that even simple toy environments become challenging for commonly used approaches if different contextual instances of the task have to be considered. Furthermore, CARL allows us to provide first evidence that disentangling representation learning of the states from policy learning with the context facilitates better generalization. By providing variations of diverse benchmarks from classic control, physical simulations, games and a real-world application of RNA design, CARL will allow the community to derive many more such insights on a solid empirical foundation.",
keywords = "cs.LG",
author = "Carolin Benjamins and Theresa Eimer and Frederik Schubert and Andr{\'e} Biedenkapp and Bodo Rosenhahn and Frank Hutter and Marius Lindauer",
year = "2021",
month = oct,
day = "5",
language = "English",
booktitle = "Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021",
url = "https://arxiv.org/abs/2110.02102",
}

RIS

TY - GEN

T1 - CARL

T2 - A Benchmark for Contextual and Adaptive Reinforcement Learning

AU - Benjamins, Carolin

AU - Eimer, Theresa

AU - Schubert, Frederik

AU - Biedenkapp, André

AU - Rosenhahn, Bodo

AU - Hutter, Frank

AU - Lindauer, Marius

PY - 2021/10/5

Y1 - 2021/10/5

N2 - While Reinforcement Learning has made great strides towards solving ever more complicated tasks, many algorithms are still brittle to even slight changes in their environment. This is a limiting factor for real-world applications of RL. Although the research community continuously aims at improving both robustness and generalization of RL algorithms, unfortunately it still lacks an open-source set of well-defined benchmark problems based on a consistent theoretical framework, which allows comparing different approaches in a fair, reliable and reproducible way. To fill this gap, we propose CARL, a collection of well-known RL environments extended to contextual RL problems to study generalization. We show the urgent need for such benchmarks by demonstrating that even simple toy environments become challenging for commonly used approaches if different contextual instances of the task have to be considered. Furthermore, CARL allows us to provide first evidence that disentangling representation learning of the states from policy learning with the context facilitates better generalization. By providing variations of diverse benchmarks from classic control, physical simulations, games and a real-world application of RNA design, CARL will allow the community to derive many more such insights on a solid empirical foundation.

AB - While Reinforcement Learning has made great strides towards solving ever more complicated tasks, many algorithms are still brittle to even slight changes in their environment. This is a limiting factor for real-world applications of RL. Although the research community continuously aims at improving both robustness and generalization of RL algorithms, unfortunately it still lacks an open-source set of well-defined benchmark problems based on a consistent theoretical framework, which allows comparing different approaches in a fair, reliable and reproducible way. To fill this gap, we propose CARL, a collection of well-known RL environments extended to contextual RL problems to study generalization. We show the urgent need for such benchmarks by demonstrating that even simple toy environments become challenging for commonly used approaches if different contextual instances of the task have to be considered. Furthermore, CARL allows us to provide first evidence that disentangling representation learning of the states from policy learning with the context facilitates better generalization. By providing variations of diverse benchmarks from classic control, physical simulations, games and a real-world application of RNA design, CARL will allow the community to derive many more such insights on a solid empirical foundation.

KW - cs.LG

M3 - Conference contribution

BT - Workshop on Ecological Theory of Reinforcement Learning, NeurIPS 2021

ER -
