Details
Original language | English |
---|---|
Pages (from-to) | 517-568 |
Number of pages | 52 |
Journal | Journal of Artificial Intelligence Research |
Volume | 74 |
Issue number | 74 |
Publication status | Published - 1 June 2022 |
Abstract
The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.
Cite
In: Journal of Artificial Intelligence Research, Vol. 74, No. 74, 01.06.2022, p. 517-568.
Research output: Contribution to journal › Article › Research › Peer review
TY - JOUR
T1 - Automated Reinforcement Learning (AutoRL)
T2 - A Survey and Open Problems
AU - Parker-Holder, Jack
AU - Rajan, Raghu
AU - Song, Xingyou
AU - Biedenkapp, André
AU - Miao, Yingjie
AU - Eimer, Theresa
AU - Zhang, Baohe
AU - Nguyen, Vu
AU - Calandra, Roberto
AU - Faust, Aleksandra
AU - Hutter, Frank
AU - Lindauer, Marius
N1 - Funding Information: We would like to thank Jie Tan for providing feedback on the survey, as well as Sagi Perel and Daniel Golovin for valuable discussions. Frank, André and Raghu acknowledge Robert Bosch GmbH for financial support.
PY - 2022/6/1
Y1 - 2022/6/1
N2 - The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.
AB - The combination of Reinforcement Learning (RL) with deep learning has led to a series of impressive feats, with many believing (deep) RL provides a path towards generally capable agents. However, the success of RL agents is often highly sensitive to design choices in the training process, which may require tedious and error-prone manual tuning. This makes it challenging to use RL for new problems and also limits its full potential. In many other areas of machine learning, AutoML has shown that it is possible to automate such design choices, and AutoML has also yielded promising initial results when applied to RL. However, Automated Reinforcement Learning (AutoRL) involves not only standard applications of AutoML but also includes additional challenges unique to RL that naturally produce a different set of methods. As such, AutoRL has been emerging as an important area of research in RL, providing promise in a variety of applications from RNA design to playing games, such as Go. Given the diversity of methods and environments considered in RL, much of the research has been conducted in distinct subfields, ranging from meta-learning to evolution. In this survey, we seek to unify the field of AutoRL, provide a common taxonomy, discuss each area in detail and pose open problems of interest to researchers going forward.
UR - http://www.scopus.com/inward/record.url?scp=85131022309&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2201.03916
DO - 10.48550/arXiv.2201.03916
M3 - Article
VL - 74
SP - 517
EP - 568
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
SN - 1076-9757
IS - 74
ER -