Details
Original language | English |
---|---|
Title of host publication | Studies in Computational Intelligence |
Place of Publication | Cham |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 181-198 |
Number of pages | 18 |
ISBN (electronic) | 978-3-031-25759-9 |
ISBN (print) | 978-3-031-25758-2 |
Publication status | Published - 18 Apr 2023 |
Publication series
Name | Studies in Computational Intelligence |
---|---|
Volume | 1087 |
ISSN (print) | 1860-949X |
ISSN (electronic) | 1860-9503 |
Abstract
Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.
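For readers unfamiliar with the two notions, a minimal sketch may help (an illustration of the general technique, not code from the chapter itself): a state abstraction can be written as a mapping φ from ground states to abstract states, so that a tabular learner stores one value per abstract state and reuses experience across similar situations, while an action abstraction restricts the choice to a feasible subset of actions. The grid-world state layout, `phi`, and `feasible_actions` below are hypothetical placeholders.

```python
import random
from collections import defaultdict

ALL_ACTIONS = ["up", "down", "left", "right", "noop"]  # hypothetical action set

def phi(state):
    """Hypothetical state abstraction: map a fine-grained (x, y) position
    to a coarse 4x4 grid cell, so nearby states share one value estimate."""
    x, y = state
    return (x // 4, y // 4)

def feasible_actions(state):
    """Hypothetical action abstraction: restrict learning and search to a
    feasible subset of the full action set (here a fixed pruning)."""
    return [a for a in ALL_ACTIONS if a != "noop"]

def epsilon_greedy(Q, state, epsilon=0.1):
    """Pick an action among the feasible subset, epsilon-greedily,
    using values stored per abstract state."""
    actions = feasible_actions(state)
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(phi(state), a)])

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update performed on abstract states."""
    s, s_next = phi(state), phi(next_state)
    best_next = max(Q[(s_next, a)] for a in feasible_actions(next_state))
    Q[(s, action)] += alpha * (reward + gamma * best_next - Q[(s, action)])

Q = defaultdict(float)  # values indexed by (abstract state, action)
```

Because values are indexed by `phi(state)`, an update made in one ground state immediately changes the estimate used by every other state in the same abstract cell, which is exactly the kind of knowledge transfer the abstract refers to.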
Keywords
- Action abstraction
- Computational intelligence in games
- Reinforcement learning
- Search-based algorithms
- State abstraction
ASJC Scopus subject areas
- Computer Science (all)
- Artificial Intelligence
Cite this
Dockhorn, A., & Kruse, R. State and Action Abstraction for Search and Reinforcement Learning Algorithms. In Studies in Computational Intelligence. Cham: Springer Science and Business Media Deutschland GmbH, 2023. p. 181-198 (Studies in Computational Intelligence; Vol. 1087).
Research output: Chapter in book/report/conference proceeding › Contribution to book/anthology › Research › peer review
TY - CHAP
T1 - State and Action Abstraction for Search and Reinforcement Learning Algorithms
AU - Dockhorn, Alexander
AU - Kruse, Rudolf
PY - 2023/4/18
Y1 - 2023/4/18
N2 - Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.
AB - Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.
KW - Action abstraction
KW - Computational intelligence in games
KW - Reinforcement learning
KW - Search-based algorithms
KW - State abstraction
UR - http://www.scopus.com/inward/record.url?scp=85153379039&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-25759-9_9
DO - 10.1007/978-3-031-25759-9_9
M3 - Contribution to book/anthology
AN - SCOPUS:85153379039
SN - 978-3-031-25758-2
T3 - Studies in Computational Intelligence
SP - 181
EP - 198
BT - Studies in Computational Intelligence
PB - Springer Science and Business Media Deutschland GmbH
CY - Cham
ER -