Automated Reinforcement Learning (AutoRL): A Survey and Open Problems

Research output: Contribution to journal › Article › Research › Peer reviewed

Authors

  • Jack Parker-Holder
  • Raghu Rajan
  • Xingyou Song
  • André Biedenkapp
  • Yingjie Miao
  • Theresa Eimer
  • Baohe Zhang
  • Vu Nguyen
  • Roberto Calandra
  • Aleksandra Faust
  • Frank Hutter
  • Marius Lindauer

Research Organisations

External Research Organisations

  • University of Freiburg
  • University of Oxford
  • Google Research
  • Amazon Australia
  • Meta AI
  • Bosch Center for Artificial Intelligence (BCAI)

Details

Original language: English
Pages (from-to): 517-568
Number of pages: 52
Journal: Journal of Artificial Intelligence Research
Volume: 74
Issue number: 74
Publication status: Published - 1 Jun 2022

Abstract

The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.

Cite this

Automated Reinforcement Learning (AutoRL): A Survey and Open Problems. / Parker-Holder, Jack; Rajan, Raghu; Song, Xingyou et al.
In: Journal of Artificial Intelligence Research, Vol. 74, No. 74, 01.06.2022, p. 517-568.


Parker-Holder, J, Rajan, R, Song, X, Biedenkapp, A, Miao, Y, Eimer, T, Zhang, B, Nguyen, V, Calandra, R, Faust, A, Hutter, F & Lindauer, M 2022, 'Automated Reinforcement Learning (AutoRL): A Survey and Open Problems', Journal of Artificial Intelligence Research, vol. 74, no. 74, pp. 517-568. https://doi.org/10.48550/arXiv.2201.03916, https://doi.org/10.1613/jair.1.13596
Parker-Holder, J., Rajan, R., Song, X., Biedenkapp, A., Miao, Y., Eimer, T., Zhang, B., Nguyen, V., Calandra, R., Faust, A., Hutter, F., & Lindauer, M. (2022). Automated Reinforcement Learning (AutoRL): A Survey and Open Problems. Journal of Artificial Intelligence Research, 74(74), 517-568. https://doi.org/10.48550/arXiv.2201.03916, https://doi.org/10.1613/jair.1.13596
Parker-Holder J, Rajan R, Song X, Biedenkapp A, Miao Y, Eimer T et al. Automated Reinforcement Learning (AutoRL): A Survey and Open Problems. Journal of Artificial Intelligence Research. 2022 Jun 1;74(74):517-568. doi: 10.48550/arXiv.2201.03916, 10.1613/jair.1.13596
Parker-Holder, Jack ; Rajan, Raghu ; Song, Xingyou et al. / Automated Reinforcement Learning (AutoRL) : A Survey and Open Problems. In: Journal of Artificial Intelligence Research. 2022 ; Vol. 74, No. 74. pp. 517-568.
@article{9230900099774e9a93294a92b40e91db,
title = "Automated Reinforcement Learning (AutoRL): A Survey and Open Problems",
abstract = "The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL, that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.",
author = "Jack Parker-Holder and Raghu Rajan and Xingyou Song and Andr{\'e} Biedenkapp and Yingjie Miao and Theresa Eimer and Baohe Zhang and Vu Nguyen and Roberto Calandra and Aleksandra Faust and Frank Hutter and Marius Lindauer",
note = "Funding Information: We would like to thank Jie Tan for providing feedback on the survey, as well as Sagi Perel and Daniel Golovin for valuable discussions. Frank, Andr{\'e} and Raghu acknowledge Robert Bosch GmbH for financial support.",
year = "2022",
month = jun,
day = "1",
doi = "10.48550/arXiv.2201.03916",
language = "English",
volume = "74",
pages = "517--568",
journal = "Journal of Artificial Intelligence Research",
issn = "1076-9757",
publisher = "Morgan Kaufmann Publishers, Inc.",
number = "74",

}


TY - JOUR

T1 - Automated Reinforcement Learning (AutoRL)

T2 - A Survey and Open Problems

AU - Parker-Holder, Jack

AU - Rajan, Raghu

AU - Song, Xingyou

AU - Biedenkapp, André

AU - Miao, Yingjie

AU - Eimer, Theresa

AU - Zhang, Baohe

AU - Nguyen, Vu

AU - Calandra, Roberto

AU - Faust, Aleksandra

AU - Hutter, Frank

AU - Lindauer, Marius

N1 - Funding Information: We would like to thank Jie Tan for providing feedback on the survey, as well as Sagi Perel and Daniel Golovin for valuable discussions. Frank, André and Raghu acknowledge Robert Bosch GmbH for financial support.

PY - 2022/6/1

Y1 - 2022/6/1

N2 - The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL, that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.

AB - The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL, that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.

UR - http://www.scopus.com/inward/record.url?scp=85131022309&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2201.03916

DO - 10.48550/arXiv.2201.03916

M3 - Article

VL - 74

SP - 517

EP - 568

JO - Journal of Artificial Intelligence Research

JF - Journal of Artificial Intelligence Research

SN - 1076-9757

IS - 74

ER -
