Forward Model Approximation for General Video Game Learning

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer reviewed

Authors

  • Alexander Dockhorn
  • Daan Apeldoorn

External organizations

  • Otto-von-Guericke-Universität Magdeburg
  • Z Quadrat GmbH

Details

Original language: English
Title of host publication: Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games, CIG 2018
Publisher: IEEE Computer Society
ISBN (electronic): 9781538643594
Publication status: Published - 11 Oct 2018
Externally published: Yes
Event: 14th IEEE Conference on Computational Intelligence and Games, CIG 2018 - Maastricht, Netherlands
Duration: 14 Aug 2018 - 17 Aug 2018

Publication series

Name: IEEE Conference on Computational Intelligence and Games, CIG
Volume: 2018-August
ISSN (print): 2325-4270
ISSN (electronic): 2325-4289

Abstract

This paper proposes a novel learning agent model for a General Video Game Playing agent. Our agent learns an approximation of the forward model from repeatedly playing a game and subsequently adapting its behavior to previously unseen levels. To achieve this, it first learns the game mechanics through machine learning techniques and then extracts rule-based symbolic knowledge on different levels of abstraction. When being confronted with new levels of a game, the agent is able to revise its knowledge by a novel belief revision approach. Using methods such as Monte Carlo Tree Search and Breadth First Search, it searches for the best possible action using simulated game episodes. Those simulations are only possible due to reasoning about future states using the extracted rule-based knowledge from random episodes during the learning phase. The developed agent outperforms previous agents by a large margin, while still being limited in its prediction capabilities. The proposed forward model approximation opens a new class of solutions in the context of General Video Game Playing, which do not try to learn a value function, but try to increase their accuracy in modelling the game.
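
To make the abstract's core idea concrete, the following is a minimal, hypothetical Python sketch of planning with an approximated forward model. It is not taken from the paper: the toy grid game, the names (true_step, forward_model, plan_bfs), and the plain lookup-table rule representation are all illustrative assumptions. The paper itself combines Monte Carlo Tree Search and Breadth First Search with rule-based knowledge extracted at several levels of abstraction; here only a simple BFS over simulated states is shown.

# Minimal, self-contained sketch (not the authors' implementation): a toy grid
# game whose dynamics are hidden from the agent, an approximate forward model
# learned as (state, action) -> next-state rules from random episodes, and
# breadth-first search over *simulated* states using that learned model.
from collections import deque
import random

ACTIONS = ["up", "down", "left", "right"]
GOAL = (3, 3)

def true_step(state, action):
    """Hidden game dynamics: move on a 4x4 grid, clipped at the borders."""
    x, y = state
    dx, dy = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}[action]
    return (min(3, max(0, x + dx)), min(3, max(0, y + dy)))

# Learning phase: collect (state, action) -> next-state rules from random play.
forward_model = {}
for _ in range(200):
    state = (random.randrange(4), random.randrange(4))
    for _ in range(10):
        action = random.choice(ACTIONS)
        nxt = true_step(state, action)
        forward_model[(state, action)] = nxt   # rule-based approximation
        state = nxt

def plan_bfs(start):
    """Breadth-first search over simulated states using the learned model only."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, plan = queue.popleft()
        if state == GOAL:
            return plan
        for action in ACTIONS:
            nxt = forward_model.get((state, action))
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [action]))
    return []   # no plan found with the current (possibly incomplete) model

print(plan_bfs((0, 0)))   # e.g. ['down', 'down', 'down', 'right', 'right', 'right']

If the random episodes fail to cover some transitions, the search can miss plans, which loosely mirrors the paper's observation that the agent's prediction capabilities remain limited.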

ASJC Scopus subject areas

Cite

Forward Model Approximation for General Video Game Learning. / Dockhorn, Alexander; Apeldoorn, Daan.
Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games, CIG 2018. IEEE Computer Society, 2018. 8490411 (IEEE Conference on Computational Intelligence and Games, CIG; Vol. 2018-August).

Dockhorn, A & Apeldoorn, D 2018, Forward Model Approximation for General Video Game Learning. in Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games, CIG 2018., 8490411, IEEE Conference on Computational Intelligence and Games, CIG, vol. 2018-August, IEEE Computer Society, 14th IEEE Conference on Computational Intelligence and Games, CIG 2018, Maastricht, Netherlands, 14 Aug. 2018. https://doi.org/10.1109/CIG.2018.8490411
Dockhorn, A., & Apeldoorn, D. (2018). Forward Model Approximation for General Video Game Learning. In Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games, CIG 2018, Article 8490411 (IEEE Conference on Computational Intelligence and Games, CIG; Vol. 2018-August). IEEE Computer Society. https://doi.org/10.1109/CIG.2018.8490411
Dockhorn A, Apeldoorn D. Forward Model Approximation for General Video Game Learning. In: Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games, CIG 2018. IEEE Computer Society. 2018. 8490411. (IEEE Conference on Computational Intelligence and Games, CIG). doi: 10.1109/CIG.2018.8490411
Dockhorn, Alexander ; Apeldoorn, Daan. / Forward Model Approximation for General Video Game Learning. Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games, CIG 2018. IEEE Computer Society, 2018. (IEEE Conference on Computational Intelligence and Games, CIG).
BibTeX
@inproceedings{0c4f7b37873a4ff784d896958cb4bc99,
title = "Forward Model Approximation for General Video Game Learning",
abstract = "This paper proposes a novel learning agent model for a General Video Game Playing agent. Our agent learns an approximation of the forward model from repeatedly playing a game and subsequently adapting its behavior to previously unseen levels. To achieve this, it first learns the game mechanics through machine learning techniques and then extracts rule-based symbolic knowledge on different levels of abstraction. When being confronted with new levels of a game, the agent is able to revise its knowledge by a novel belief revision approach. Using methods such as Monte Carlo Tree Search and Breadth First Search, it searches for the best possible action using simulated game episodes. Those simulations are only possible due to reasoning about future states using the extracted rule-based knowledge from random episodes during the learning phase. The developed agent outperforms previous agents by a large margin, while still being limited in its prediction capabilities. The proposed forward model approximation opens a new class of solutions in the context of General Video Game Playing, which do not try to learn a value function, but try to increase their accuracy in modelling the game.",
keywords = "Belief Revision, Breadth First Search, Exception-tolerant Hierarchical Knowledge Bases, Forward Model Approximation, General Video Games, Monte Carlo Tree Search",
author = "Alexander Dockhorn and Daan Apeldoorn",
year = "2018",
month = oct,
day = "11",
doi = "10.1109/CIG.2018.8490411",
language = "English",
series = "IEEE Conference on Computatonal Intelligence and Games, CIG",
publisher = "IEEE Computer Society",
booktitle = "Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games, CIG 2018",
address = "United States",
note = "14th IEEE Conference on Computational Intelligence and Games, CIG 2018 ; Conference date: 14-08-2018 Through 17-08-2018",

}

RIS

TY - GEN

T1 - Forward Model Approximation for General Video Game Learning

AU - Dockhorn, Alexander

AU - Apeldoorn, Daan

PY - 2018/10/11

Y1 - 2018/10/11

N2 - This paper proposes a novel learning agent model for a General Video Game Playing agent. Our agent learns an approximation of the forward model from repeatedly playing a game and subsequently adapting its behavior to previously unseen levels. To achieve this, it first learns the game mechanics through machine learning techniques and then extracts rule-based symbolic knowledge on different levels of abstraction. When being confronted with new levels of a game, the agent is able to revise its knowledge by a novel belief revision approach. Using methods such as Monte Carlo Tree Search and Breadth First Search, it searches for the best possible action using simulated game episodes. Those simulations are only possible due to reasoning about future states using the extracted rule-based knowledge from random episodes during the learning phase. The developed agent outperforms previous agents by a large margin, while still being limited in its prediction capabilities. The proposed forward model approximation opens a new class of solutions in the context of General Video Game Playing, which do not try to learn a value function, but try to increase their accuracy in modelling the game.

AB - This paper proposes a novel learning agent model for a General Video Game Playing agent. Our agent learns an approximation of the forward model from repeatedly playing a game and subsequently adapting its behavior to previously unseen levels. To achieve this, it first learns the game mechanics through machine learning techniques and then extracts rule-based symbolic knowledge on different levels of abstraction. When being confronted with new levels of a game, the agent is able to revise its knowledge by a novel belief revision approach. Using methods such as Monte Carlo Tree Search and Breadth First Search, it searches for the best possible action using simulated game episodes. Those simulations are only possible due to reasoning about future states using the extracted rule-based knowledge from random episodes during the learning phase. The developed agent outperforms previous agents by a large margin, while still being limited in its prediction capabilities. The proposed forward model approximation opens a new class of solutions in the context of General Video Game Playing, which do not try to learn a value function, but try to increase their accuracy in modelling the game.

KW - Belief Revision

KW - Breadth First Search

KW - Exception-tolerant Hierarchical Knowledge Bases

KW - Forward Model Approximation

KW - General Video Games

KW - Monte Carlo Tree Search

UR - http://www.scopus.com/inward/record.url?scp=85056837547&partnerID=8YFLogxK

U2 - 10.1109/CIG.2018.8490411

DO - 10.1109/CIG.2018.8490411

M3 - Conference contribution

AN - SCOPUS:85056837547

T3 - IEEE Conference on Computational Intelligence and Games, CIG

BT - Proceedings of the 2018 IEEE Conference on Computational Intelligence and Games, CIG 2018

PB - IEEE Computer Society

T2 - 14th IEEE Conference on Computational Intelligence and Games, CIG 2018

Y2 - 14 August 2018 through 17 August 2018

ER -
