Contextualize Me – The Case for Context in Reinforcement Learning

Publication: Contribution to journal › Article › Research › Peer reviewed

Authors

  • Carolin Benjamins
  • Theresa Eimer
  • Frederik Günter Schubert
  • Aditya Mohan
  • Sebastian Döhler
  • André Biedenkapp
  • Bodo Rosenhahn
  • Frank Hutter
  • Marius Lindauer

External organisations

  • Albert-Ludwigs-Universität Freiburg

Details

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2023
Issue number: 6
Publication status: Electronically published (e-pub ahead of print) - 5 Jun 2023

Abstract

While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes. Contextual Reinforcement Learning (cRL) provides a framework to model such changes in a principled manner, thereby enabling flexible, precise and interpretable task specification and generation. Our goal is to show how the framework of cRL contributes to improving zero-shot generalization in RL through meaningful benchmarks and structured reasoning about generalization tasks. We confirm the insight that optimal behavior in cRL requires context information, as in other related areas of partial observability. To empirically validate this in the cRL framework, we provide various context-extended versions of common RL environments. They are part of the first benchmark library, CARL, designed for generalization based on cRL extensions of popular benchmarks, which we propose as a testbed to further study general agents. We show that in the contextual setting, even simple RL environments become challenging - and that naive solutions are not enough to generalize across complex context spaces.
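
To make the abstract's central notion concrete: in cRL, a context is a set of environment parameters (for instance gravity or pole length in CartPole) that can change between episodes, and optimal behavior generally depends on observing them. Below is a minimal, self-contained Python sketch of such a context-extended environment built on gymnasium. The ContextualCartPole wrapper, its parameter ranges, and the hide_context flag are illustrative assumptions for this page, not the actual API of the CARL benchmark library named in the abstract.

import numpy as np
import gymnasium as gym

class ContextualCartPole(gym.Wrapper):
    """CartPole whose physics ('context') is resampled every episode."""

    def __init__(self, context_ranges, hide_context=False, seed=None):
        super().__init__(gym.make("CartPole-v1"))
        self.context_ranges = context_ranges  # e.g. {"gravity": (4.9, 19.6)}
        self.hide_context = hide_context
        self.rng = np.random.default_rng(seed)
        self.context = {}

    def reset(self, **kwargs):
        # Sample a new context and write it into the underlying physics.
        self.context = {name: self.rng.uniform(lo, hi)
                        for name, (lo, hi) in self.context_ranges.items()}
        for name, value in self.context.items():
            setattr(self.env.unwrapped, name, value)
        obs, info = self.env.reset(**kwargs)
        return self._augment(obs), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        return self._augment(obs), reward, terminated, truncated, info

    def _augment(self, obs):
        # Optimal behavior generally depends on the context, so append it to
        # the observation unless we deliberately study the hidden-context
        # (partially observable) setting. A full implementation would also
        # widen observation_space accordingly; omitted here for brevity.
        if self.hide_context:
            return obs
        return np.concatenate([obs, np.fromiter(self.context.values(), float)])

env = ContextualCartPole({"gravity": (4.9, 19.6), "length": (0.25, 1.0)}, seed=0)
obs, info = env.reset(seed=0)  # obs: 4 state dims + 2 context dims

Training with hide_context=True approximates the hidden-context setting discussed in the abstract, where the task becomes partially observable and a single fixed policy must cope with dynamics it cannot identify from the observation alone.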

Cite this

Contextualize Me – The Case for Context in Reinforcement Learning. / Benjamins, Carolin; Eimer, Theresa; Schubert, Frederik Günter et al.
In: Transactions on Machine Learning Research, Vol. 2023, No. 6, 05.06.2023.


Benjamins, C, Eimer, T, Schubert, FG, Mohan, A, Döhler, S, Biedenkapp, A, Rosenhahn, B, Hutter, F & Lindauer, M 2023, 'Contextualize Me – The Case for Context in Reinforcement Learning', Transactions on Machine Learning Research, vol. 2023, no. 6. https://doi.org/10.48550/arXiv.2202.04500
Benjamins, C., Eimer, T., Schubert, F. G., Mohan, A., Döhler, S., Biedenkapp, A., Rosenhahn, B., Hutter, F., & Lindauer, M. (2023). Contextualize Me – The Case for Context in Reinforcement Learning. Transactions on Machine Learning Research, 2023(6). Advance online publication. https://doi.org/10.48550/arXiv.2202.04500
Benjamins C, Eimer T, Schubert FG, Mohan A, Döhler S, Biedenkapp A et al. Contextualize Me – The Case for Context in Reinforcement Learning. Transactions on Machine Learning Research. 2023 Jun 5;2023(6). Epub 2023 Jun 5. doi: 10.48550/arXiv.2202.04500
Benjamins, Carolin ; Eimer, Theresa ; Schubert, Frederik Günter et al. / Contextualize Me – The Case for Context in Reinforcement Learning. In: Transactions on Machine Learning Research. 2023 ; Vol. 2023, No. 6.
BibTeX
@article{c9671066ca224d36b8f597fab5800076,
title = "Contextualize Me – The Case for Context in Reinforcement Learning",
abstract = "While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes. Contextual Reinforcement Learning (cRL) provides a framework to model such changes in a principled manner, thereby enabling flexible, precise and interpretable task specification and generation. Our goal is to show how the framework of cRL contributes to improving zero-shot generalization in RL through meaningful benchmarks and structured reasoning about generalization tasks. We confirm the insight that optimal behavior in cRL requires context information, as in other related areas of partial observability. To empirically validate this in the cRL framework, we provide various context-extended versions of common RL environments. They are part of the first benchmark library, CARL, designed for generalization based on cRL extensions of popular benchmarks, which we propose as a testbed to further study general agents. We show that in the contextual setting, even simple RL environments become challenging - and that naive solutions are not enough to generalize across complex context spaces.",
keywords = "contextual RL, cMDP, generalization",
author = "Carolin Benjamins and Theresa Eimer and Schubert, {Frederik G{\"u}nter} and Aditya Mohan and Sebastian D{\"o}hler and Andr{\'e} Biedenkapp and Bodo Rosenhahn and Frank Hutter and Marius Lindauer",
year = "2023",
month = jun,
day = "5",
doi = "10.48550/arXiv.2202.04500",
language = "English",
volume = "2023",
number = "6",
journal = "Transactions on Machine Learning Research",
issn = "2835-8856",
}

RIS

TY - JOUR

T1 - Contextualize Me – The Case for Context in Reinforcement Learning

AU - Benjamins, Carolin

AU - Eimer, Theresa

AU - Schubert, Frederik Günter

AU - Mohan, Aditya

AU - Döhler, Sebastian

AU - Biedenkapp, André

AU - Rosenhahn, Bodo

AU - Hutter, Frank

AU - Lindauer, Marius

PY - 2023/6/5

Y1 - 2023/6/5

N2 - While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes. Contextual Reinforcement Learning (cRL) provides a framework to model such changes in a principled manner, thereby enabling flexible, precise and interpretable task specification and generation. Our goal is to show how the framework of cRL contributes to improving zero-shot generalization in RL through meaningful benchmarks and structured reasoning about generalization tasks. We confirm the insight that optimal behavior in cRL requires context information, as in other related areas of partial observability. To empirically validate this in the cRL framework, we provide various context-extended versions of common RL environments. They are part of the first benchmark library, CARL, designed for generalization based on cRL extensions of popular benchmarks, which we propose as a testbed to further study general agents. We show that in the contextual setting, even simple RL environments become challenging - and that naive solutions are not enough to generalize across complex context spaces.

AB - While Reinforcement Learning (RL) has made great strides towards solving increasingly complicated problems, many algorithms are still brittle to even slight environmental changes. Contextual Reinforcement Learning (cRL) provides a framework to model such changes in a principled manner, thereby enabling flexible, precise and interpretable task specification and generation. Our goal is to show how the framework of cRL contributes to improving zero-shot generalization in RL through meaningful benchmarks and structured reasoning about generalization tasks. We confirm the insight that optimal behavior in cRL requires context information, as in other related areas of partial observability. To empirically validate this in the cRL framework, we provide various context-extended versions of common RL environments. They are part of the first benchmark library, CARL, designed for generalization based on cRL extensions of popular benchmarks, which we propose as a testbed to further study general agents. We show that in the contextual setting, even simple RL environments become challenging - and that naive solutions are not enough to generalize across complex context spaces.

KW - contextual RL

KW - cMDP

KW - generalization

U2 - 10.48550/arXiv.2202.04500

DO - 10.48550/arXiv.2202.04500

M3 - Article

VL - 2023

JO - Transactions on Machine Learning Research

JF - Transactions on Machine Learning Research

SN - 2835-8856

IS - 6

ER -
