Automated Reinforcement Learning (AutoRL): A Survey and Open Problems

Publication: Contribution to journal › Article › Research › Peer review

Authors

  • Jack Parker-Holder
  • Raghu Rajan
  • Xingyou Song
  • André Biedenkapp
  • Yingjie Miao
  • Theresa Eimer
  • Baohe Zhang
  • Vu Nguyen
  • Roberto Calandra
  • Aleksandra Faust
  • Frank Hutter
  • Marius Lindauer

External organisations

  • Albert-Ludwigs-Universität Freiburg
  • University of Oxford
  • Google Research
  • Amazon.com, Inc.
  • Meta AI
  • Bosch Center for Artificial Intelligence (BCAI)

Details

Original language: English
Pages (from-to): 517-568
Number of pages: 52
Journal: Journal of Artificial Intelligence Research
Volume: 74
Issue number: 74
Publication status: Published - 1 June 2022

Abstract

The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.

Cite

Automated Reinforcement Learning (AutoRL): A Survey and Open Problems. / Parker-Holder, Jack; Rajan, Raghu; Song, Xingyou et al.
In: Journal of Artificial Intelligence Research, Vol. 74, No. 74, 01.06.2022, pp. 517-568.


Parker-Holder, J, Rajan, R, Song, X, Biedenkapp, A, Miao, Y, Eimer, T, Zhang, B, Nguyen, V, Calandra, R, Faust, A, Hutter, F & Lindauer, M 2022, 'Automated Reinforcement Learning (AutoRL): A Survey and Open Problems', Journal of Artificial Intelligence Research, Vol. 74, No. 74, pp. 517-568. https://doi.org/10.48550/arXiv.2201.03916, https://doi.org/10.1613/jair.1.13596
Parker-Holder, J., Rajan, R., Song, X., Biedenkapp, A., Miao, Y., Eimer, T., Zhang, B., Nguyen, V., Calandra, R., Faust, A., Hutter, F., & Lindauer, M. (2022). Automated Reinforcement Learning (AutoRL): A Survey and Open Problems. Journal of Artificial Intelligence Research, 74(74), 517-568. https://doi.org/10.48550/arXiv.2201.03916, https://doi.org/10.1613/jair.1.13596
Parker-Holder J, Rajan R, Song X, Biedenkapp A, Miao Y, Eimer T et al. Automated Reinforcement Learning (AutoRL): A Survey and Open Problems. Journal of Artificial Intelligence Research. 2022 Jun 1;74(74):517-568. doi: 10.48550/arXiv.2201.03916, 10.1613/jair.1.13596
Parker-Holder, Jack ; Rajan, Raghu ; Song, Xingyou et al. / Automated Reinforcement Learning (AutoRL) : A Survey and Open Problems. In: Journal of Artificial Intelligence Research. 2022 ; Vol. 74, No. 74. pp. 517-568.
@article{9230900099774e9a93294a92b40e91db,
title = "Automated Reinforcement Learning (AutoRL): A Survey and Open Problems",
abstract = "The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL, that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.",
author = "Jack Parker-Holder and Raghu Rajan and Xingyou Song and Andr{\'e} Biedenkapp and Yingjie Miao and Theresa Eimer and Baohe Zhang and Vu Nguyen and Roberto Calandra and Aleksandra Faust and Frank Hutter and Marius Lindauer",
note = "Funding Information: We would like to thank Jie Tan for providing feedback on the survey, as well as Sagi Perel and Daniel Golovin for valuable discussions. Frank, Andr{\'e} and Raghu acknowledge Robert Bosch GmbH for financial support.",
year = "2022",
month = jun,
day = "1",
doi = "10.48550/arXiv.2201.03916",
language = "English",
volume = "74",
pages = "517--568",
journal = "Journal of Artificial Intelligence Research",
issn = "1076-9757",
publisher = "Morgan Kaufmann Publishers, Inc.",
number = "74",
}


TY - JOUR

T1 - Automated Reinforcement Learning (AutoRL)

T2 - A Survey and Open Problems

AU - Parker-Holder, Jack

AU - Rajan, Raghu

AU - Song, Xingyou

AU - Biedenkapp, André

AU - Miao, Yingjie

AU - Eimer, Theresa

AU - Zhang, Baohe

AU - Nguyen, Vu

AU - Calandra, Roberto

AU - Faust, Aleksandra

AU - Hutter, Frank

AU - Lindauer, Marius

N1 - Funding Information: We would like to thank Jie Tan for providing feedback on the survey, as well as Sagi Perel and Daniel Golovin for valuable discussions. Frank, André and Raghu acknowledge Robert Bosch GmbH for financial support.

PY - 2022/6/1

Y1 - 2022/6/1

N2 - The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL, that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.

AB - The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL, that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.

UR - http://www.scopus.com/inward/record.url?scp=85131022309&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2201.03916

DO - 10.48550/arXiv.2201.03916

M3 - Article

VL - 74

SP - 517

EP - 568

JO - Journal of Artificial Intelligence Research

JF - Journal of Artificial Intelligence Research

SN - 1076-9757

IS - 74

ER -
