State and Action Abstraction for Search and Reinforcement Learning Algorithms

Research output: Chapter in book/report/conference proceeding › Contribution to book/anthology › Research › peer review

Authors

  • Alexander Dockhorn
  • Rudolf Kruse

External Research Organisations

  • Otto-von-Guericke University Magdeburg

Details

Original language: English
Title of host publication: Studies in Computational Intelligence
Place of publication: Cham
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 181-198
Number of pages: 18
ISBN (electronic): 978-3-031-25759-9
ISBN (print): 978-3-031-25758-2
Publication status: Published - 18 Apr 2023

Publication series

Name: Studies in Computational Intelligence
Volume: 1087
ISSN (print): 1860-949X
ISSN (electronic): 1860-9503

Abstract

Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.
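
The chapter itself is not reproduced in this record, but as an illustrative sketch of the two ideas summarized in the abstract, the following Python snippet applies a hand-crafted state abstraction (dropping an irrelevant sensor feature) and an action abstraction (restricting the agent to a feasible subset of moves) to tabular Q-learning in a small grid world. The environment, the abstraction functions, and all parameter values are assumptions chosen for illustration and are not taken from the chapter.

import random
from collections import defaultdict

# Hypothetical 10x10 grid world: the agent starts at (0, 0) and must reach the goal.
# Each observation also carries an irrelevant "noise" feature, mimicking a rich sensor.
GRID = 10
GOAL = (GRID - 1, GRID - 1)
ACTIONS = ["up", "down", "left", "right", "stay"]  # full (ground) action set

def step(state, action):
    x, y, _ = state
    if action == "up":
        y = min(GRID - 1, y + 1)
    elif action == "down":
        y = max(0, y - 1)
    elif action == "left":
        x = max(0, x - 1)
    elif action == "right":
        x = min(GRID - 1, x + 1)
    noise = random.randint(0, 999)                 # irrelevant sensor reading
    reward = 1.0 if (x, y) == GOAL else -0.01
    return (x, y, noise), reward, (x, y) == GOAL

def state_abstraction(state):
    # Group all ground states that differ only in the irrelevant noise feature.
    x, y, _ = state
    return (x, y)

# Action abstraction: concentrate on a feasible subset of the full action set.
ABSTRACT_ACTIONS = ["up", "right"]                 # sufficient to reach the goal here

def q_learning(episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    q = defaultdict(float)                         # Q-values over abstract (state, action) pairs
    for _ in range(episodes):
        state = (0, 0, random.randint(0, 999))
        for _ in range(200):
            s = state_abstraction(state)
            if random.random() < epsilon:
                action = random.choice(ABSTRACT_ACTIONS)
            else:
                action = max(ABSTRACT_ACTIONS, key=lambda a: q[(s, a)])
            next_state, reward, done = step(state, action)
            s_next = state_abstraction(next_state)
            target = reward if done else reward + gamma * max(q[(s_next, a)] for a in ABSTRACT_ACTIONS)
            q[(s, action)] += alpha * (target - q[(s, action)])
            state = next_state
            if done:
                break
    return q

if __name__ == "__main__":
    q = q_learning()
    print("Learned value of moving right from the start:", q[((0, 0), "right")])

In this sketch the state abstraction shrinks the observation space from 100,000 ground observations (100 cells times 1,000 noise values) to 100 abstract states, and the action abstraction cuts the branching factor from five actions to two; this hand-crafted example only hints at the complexity reduction and knowledge transfer discussed in the chapter, which also covers learned abstractions.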

Keywords

    Action abstraction, Computational intelligence in games, Reinforcement learning, Search-based algorithms, State abstraction

Cite this

Dockhorn, A., & Kruse, R. (2023). State and Action Abstraction for Search and Reinforcement Learning Algorithms. In Studies in Computational Intelligence (pp. 181-198). (Studies in Computational Intelligence; Vol. 1087). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-25759-9_9
BibTeX
@inbook{47121ec5542a4cb18d0cde1b2ed0ac2d,
title = "State and Action Abstraction for Search and Reinforcement Learning Algorithms",
abstract = "Decision-making in large and dynamic environments has always been a challenge for AI agents. Given the multitude of available sensors in robotics and the rising complexity of simulated environments, agents have access to plenty of data but need to carefully focus their attention if they want to be successful. While action abstractions reduce the complexity by concentrating on a feasible subset of actions, state abstractions enable the agent to better transfer its knowledge from similar situations. In this article, we want to identify the different techniques for learning and using state and action abstractions and compare their effect on an agent's training and its resulting behavior.",
keywords = "Action abstraction, Computational intelligence in games, Reinforcement learning, Search-based algorithms, State abstraction",
author = "Alexander Dockhorn and Rudolf Kruse",
year = "2023",
month = apr,
day = "18",
doi = "10.1007/978-3-031-25759-9_9",
language = "English",
isbn = "978-3-031-25758-2",
series = "Studies in Computational Intelligence",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "181--198",
booktitle = "Studies in Computational Intelligence",
address = "Germany",

}

