State and Action Abstraction for Search and Reinforcement Learning Algorithms

Publication: Contribution to book/report/anthology/conference proceedings › Contribution to book/anthology › Research › Peer-reviewed

Authors

  • Alexander Dockhorn
  • Rudolf Kruse

External organisations

  • Otto-von-Guericke-Universität Magdeburg

Details

Original language: English
Title of host publication: Studies in Computational Intelligence
Place of publication: Cham
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 181-198
Number of pages: 18
ISBN (electronic): 978-3-031-25759-9
ISBN (print): 978-3-031-25758-2
Publication status: Published - 18 Apr 2023

Publication series

Name: Studies in Computational Intelligence
Volume: 1087
ISSN (print): 1860-949X
ISSN (electronic): 1860-9503

Abstract

Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.
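
The abstract contrasts action abstraction (restricting the agent to a feasible subset of actions) with state abstraction (grouping similar situations so learned knowledge transfers between them). As a purely illustrative sketch, not taken from the chapter, the following Python snippet shows how both ideas can be plugged into tabular Q-learning: a hypothetical abstraction function phi coarsens raw observations before they index the Q-table, and a hand-picked FEASIBLE_ACTIONS list stands in for an action abstraction. All names, constants, and the environment interface are assumptions made for this example.

# Illustrative sketch only (not code from the chapter). A hypothetical state
# abstraction phi(.) coarsens raw observations before they index the Q-table,
# and FEASIBLE_ACTIONS plays the role of an action abstraction by restricting
# the agent to a subset of the full action space.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1   # learning rate, discount, exploration

def phi(state):
    """State abstraction: round a continuous (x, y) position so that nearby
    states share one table entry and learned values transfer between them."""
    x, y = state
    return (round(x, 1), round(y, 1))

FEASIBLE_ACTIONS = [0, 1, 2]   # assumed subset of a larger discrete action space
Q = defaultdict(float)         # Q-values indexed by (abstract state, action)

def choose_action(state):
    """Epsilon-greedy policy over the abstract state and the restricted action set."""
    s = phi(state)
    if random.random() < EPSILON:
        return random.choice(FEASIBLE_ACTIONS)
    return max(FEASIBLE_ACTIONS, key=lambda a: Q[(s, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update, applied on abstract states instead of raw ones."""
    s, s_next = phi(state), phi(next_state)
    best_next = max(Q[(s_next, a)] for a in FEASIBLE_ACTIONS)
    Q[(s, action)] += ALPHA * (reward + GAMMA * best_next - Q[(s, action)])

In practice the abstraction function and the feasible action set would typically be learned rather than hand-coded; techniques for learning and using such abstractions are exactly what the chapter surveys.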

ASJC Scopus subject areas

Cite this

State and Action Abstraction for Search and Reinforcement Learning Algorithms. / Dockhorn, Alexander; Kruse, Rudolf.
Studies in Computational Intelligence. Cham: Springer Science and Business Media Deutschland GmbH, 2023. pp. 181-198 (Studies in Computational Intelligence; Vol. 1087).


Dockhorn, A & Kruse, R 2023, State and Action Abstraction for Search and Reinforcement Learning Algorithms. in Studies in Computational Intelligence. Studies in Computational Intelligence, vol. 1087, Springer Science and Business Media Deutschland GmbH, Cham, pp. 181-198. https://doi.org/10.1007/978-3-031-25759-9_9
Dockhorn, A., & Kruse, R. (2023). State and Action Abstraction for Search and Reinforcement Learning Algorithms. In Studies in Computational Intelligence (pp. 181-198). (Studies in Computational Intelligence; Vol. 1087). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-25759-9_9
Dockhorn A, Kruse R. State and Action Abstraction for Search and Reinforcement Learning Algorithms. In: Studies in Computational Intelligence. Cham: Springer Science and Business Media Deutschland GmbH. 2023. p. 181-198. (Studies in Computational Intelligence). doi: 10.1007/978-3-031-25759-9_9
Dockhorn, Alexander ; Kruse, Rudolf. / State and Action Abstraction for Search and Reinforcement Learning Algorithms. Studies in Computational Intelligence. Cham : Springer Science and Business Media Deutschland GmbH, 2023. pp. 181-198 (Studies in Computational Intelligence).
BibTeX
@inbook{47121ec5542a4cb18d0cde1b2ed0ac2d,
title = "State and Action Abstraction for Search and Reinforcement Learning Algorithms",
abstract = "Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.",
keywords = "Action abstraction, Computational intelligence in games, Reinforcement learning, Search-based algorithms, State abstraction",
author = "Alexander Dockhorn and Rudolf Kruse",
year = "2023",
month = apr,
day = "18",
doi = "10.1007/978-3-031-25759-9_9",
language = "English",
isbn = "978-3-031-25758-2",
series = "Studies in Computational Intelligence",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "181--198",
booktitle = "Studies in Computational Intelligence",
address = "Germany",

}

RIS

TY - CHAP

T1 - State and Action Abstraction for Search and Reinforcement Learning Algorithms

AU - Dockhorn, Alexander

AU - Kruse, Rudolf

PY - 2023/4/18

Y1 - 2023/4/18

N2 - Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.

AB - Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.

KW - Action abstraction

KW - Computational intelligence in games

KW - Reinforcement learning

KW - Search-based algorithms

KW - State abstraction

UR - http://www.scopus.com/inward/record.url?scp=85153379039&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-25759-9_9

DO - 10.1007/978-3-031-25759-9_9

M3 - Contribution to book/anthology

AN - SCOPUS:85153379039

SN - 978-3-031-25758-2

T3 - Studies in Computational Intelligence

SP - 181

EP - 198

BT - Studies in Computational Intelligence

PB - Springer Science and Business Media Deutschland GmbH

CY - Cham

ER -
