Details
Original language | English |
---|---|
Title of host publication | Studies in Computational Intelligence |
Place of publication | Cham |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 181-198 |
Number of pages | 18 |
ISBN (electronic) | 978-3-031-25759-9 |
ISBN (print) | 978-3-031-25758-2 |
Publication status | Published - 18 Apr 2023 |
Publication series
Name | Studies in Computational Intelligence |
---|---|
Volume | 1087 |
ISSN (print) | 1860-949X |
ISSN (electronic) | 1860-9503 |
Abstract
Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.
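To make the two ideas in the abstract concrete, here is a minimal Python sketch (not taken from the chapter; all names such as `phi`, `feasible_actions`, and the grid-cell discretization are hypothetical illustrations) of how a state-abstraction function and an action filter plug into ordinary tabular Q-learning:

```python
# Minimal sketch (not from the chapter): tabular Q-learning where a
# state-abstraction function phi() groups similar ground states and an
# action-abstraction filter restricts the agent to a feasible action subset.
# All names (phi, feasible_actions) are hypothetical placeholders.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

def phi(state):
    """State abstraction: map a ground state to a coarser abstract state.
    Here, simply discretize a continuous (x, y) position into grid cells."""
    x, y = state
    return (round(x), round(y))

def feasible_actions(state, all_actions):
    """Action abstraction: keep only actions worth considering in this state.
    As a stand-in, drop an action the domain marks as ineffective."""
    return [a for a in all_actions if a != "noop"] or all_actions

Q = defaultdict(float)  # keyed on (abstract_state, action)

def choose_action(state, all_actions):
    """Epsilon-greedy selection over the abstracted state and action set."""
    actions = feasible_actions(state, all_actions)
    if random.random() < EPSILON:
        return random.choice(actions)
    s = phi(state)
    return max(actions, key=lambda a: Q[(s, a)])

def update(state, action, reward, next_state, all_actions):
    """Standard Q-learning update, applied to abstract states."""
    s, s2 = phi(state), phi(next_state)
    best_next = max(Q[(s2, a)] for a in feasible_actions(next_state, all_actions))
    Q[(s, action)] += ALPHA * (reward + GAMMA * best_next - Q[(s, action)])
```

Because `Q` is keyed on `phi(state)` rather than the ground state, experience gathered in any ground state of a cell updates a single shared entry, which is the knowledge-transfer effect the abstract describes; the action filter shrinks the branching factor the same way for search-based agents.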
ASJC Scopus subject areas
- Computer Science (all)
- Artificial Intelligence
Cite
Studies in Computational Intelligence. Cham: Springer Science and Business Media Deutschland GmbH, 2023. pp. 181-198 (Studies in Computational Intelligence; Vol. 1087).
Publication: Contribution to book/report/anthology/conference proceedings › Chapter in book/anthology › Research › Peer-reviewed
TY - CHAP
T1 - State and Action Abstraction for Search and Reinforcement Learning Algorithms
AU - Dockhorn, Alexander
AU - Kruse, Rudolf
PY - 2023/4/18
Y1 - 2023/4/18
N2 - Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.
AB - Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.
KW - Action abstraction
KW - Computational intelligence in games
KW - Reinforcement learning
KW - Search-based algorithms
KW - State abstraction
UR - http://www.scopus.com/inward/record.url?scp=85153379039&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-25759-9_9
DO - 10.1007/978-3-031-25759-9_9
M3 - Contribution to book/anthology
AN - SCOPUS:85153379039
SN - 978-3-031-25758-2
T3 - Studies in Computational Intelligence
SP - 181
EP - 198
BT - Studies in Computational Intelligence
PB - Springer Science and Business Media Deutschland GmbH
CY - Cham
ER -