Details
Original language | English |
---|---|
Title of host publication | IEEE Conference on Games, CoG 2020 |
Publisher | IEEE Computer Society |
Pages | 716-723 |
Number of pages | 8 |
ISBN (electronic) | 9781728145334 |
Publication status | Published - Aug 2020 |
Externally published | Yes |
Event | 2020 IEEE Conference on Games, CoG 2020 - Virtual, Osaka, Japan. Duration: 24 Aug 2020 → 27 Aug 2020 |
Publication series
Name | IEEE Conference on Computational Intelligence and Games, CIG |
---|---|
Volume | 2020-August |
ISSN (Print) | 2325-4270 |
ISSN (electronic) | 2325-4289 |
Abstract
In this paper, we explain the design process for our GVGAI game-learning agent, which will be submitted to the 2020 GVGAI competition's learning track. The agent relies on a local forward-modeling approach, which uses predictions of future game states to enable the application of simulation-based search algorithms. We first explain our process for identifying repeating tiles in a pixel-based state observation. Using the tile information, a local forward model is trained to predict the future state of each tile based on its current state and the states of its surrounding tiles. We pair this approach with a simple reward model, which estimates the expected reward of a predicted state transition. The proposed approach has been tested on multiple games of the GVGAI framework. Results show that it is especially well suited to learning deterministic games: except for one non-deterministic game, the agent's performance is very similar to that of agents using the true forward model. Nevertheless, the prediction accuracy needs to be further improved to achieve better game-playing performance.
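The paper's own implementation is not reproduced here, but the core idea of a local forward model — predicting each tile's next state from its current 3x3 neighborhood and the chosen action — can be sketched as a simple frequency table. All names below are ours, and the toy representation (a grid of tile IDs) stands in for the tiles the authors extract from pixel observations; the actual agent also adds a learned reward model and rolling horizon evolution on top of a model like this:

```python
from collections import defaultdict, Counter

def neighborhood(grid, x, y, pad=0):
    """3x3 patch of tile IDs around (x, y); cells outside the grid read as `pad`."""
    h, w = len(grid), len(grid[0])
    return tuple(
        grid[j][i] if 0 <= i < w and 0 <= j < h else pad
        for j in range(y - 1, y + 2)
        for i in range(x - 1, x + 2)
    )

class LocalForwardModel:
    """Tabular local forward model: counts observed transitions
    (3x3 patch, action) -> next tile at the patch center."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, grid, action, next_grid):
        """Record one observed game-state transition, tile by tile."""
        for y in range(len(grid)):
            for x in range(len(grid[0])):
                patch = neighborhood(grid, x, y)
                self.counts[(patch, action)][next_grid[y][x]] += 1

    def predict(self, grid, action):
        """Predict the next grid; unseen patterns default to 'unchanged'."""
        out = []
        for y in range(len(grid)):
            row = []
            for x in range(len(grid[0])):
                key = (neighborhood(grid, x, y), action)
                if self.counts[key]:
                    row.append(self.counts[key].most_common(1)[0][0])
                else:
                    row.append(grid[y][x])
            out.append(row)
        return out
```

A simulation-based search (such as the rolling horizon evolutionary algorithm named in the keywords) can then roll out candidate action sequences by chaining `predict` calls and scoring the resulting states with a reward model, without ever touching the game's true forward model.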
Keywords
- General Game Learning, GVGAI framework, Local Forward Model, Rolling Horizon Evolutionary Algorithm
ASJC Scopus subject areas
- Computer Science (all)
- Artificial Intelligence
- Computer Graphics and Computer-Aided Design
- Computer Vision and Pattern Recognition
- Human-Computer Interaction
- Software
Cite this
IEEE Conference on Games, CoG 2020. IEEE Computer Society, 2020. p. 716-723, 9231793 (IEEE Conference on Computational Intelligence and Games, CIG; Vol. 2020-August).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Local Forward Model Learning for GVGAI Games
AU - Dockhorn, Alexander
AU - Lucas, Simon
PY - 2020/8
Y1 - 2020/8
N2 - In this paper, we explain the design process for our GVGAI game-learning agent, which will be submitted to the 2020 GVGAI competition's learning track. The agent relies on a local forward-modeling approach, which uses predictions of future game states to enable the application of simulation-based search algorithms. We first explain our process for identifying repeating tiles in a pixel-based state observation. Using the tile information, a local forward model is trained to predict the future state of each tile based on its current state and the states of its surrounding tiles. We pair this approach with a simple reward model, which estimates the expected reward of a predicted state transition. The proposed approach has been tested on multiple games of the GVGAI framework. Results show that it is especially well suited to learning deterministic games: except for one non-deterministic game, the agent's performance is very similar to that of agents using the true forward model. Nevertheless, the prediction accuracy needs to be further improved to achieve better game-playing performance.
AB - In this paper, we explain the design process for our GVGAI game-learning agent, which will be submitted to the 2020 GVGAI competition's learning track. The agent relies on a local forward-modeling approach, which uses predictions of future game states to enable the application of simulation-based search algorithms. We first explain our process for identifying repeating tiles in a pixel-based state observation. Using the tile information, a local forward model is trained to predict the future state of each tile based on its current state and the states of its surrounding tiles. We pair this approach with a simple reward model, which estimates the expected reward of a predicted state transition. The proposed approach has been tested on multiple games of the GVGAI framework. Results show that it is especially well suited to learning deterministic games: except for one non-deterministic game, the agent's performance is very similar to that of agents using the true forward model. Nevertheless, the prediction accuracy needs to be further improved to achieve better game-playing performance.
KW - General Game Learning
KW - GVGAI framework
KW - Local Forward Model
KW - Rolling Horizon Evolutionary Algorithm
UR - http://www.scopus.com/inward/record.url?scp=85096908940&partnerID=8YFLogxK
U2 - 10.1109/CoG47356.2020.9231793
DO - 10.1109/CoG47356.2020.9231793
M3 - Conference contribution
AN - SCOPUS:85096908940
T3 - IEEE Conference on Computational Intelligence and Games, CIG
SP - 716
EP - 723
BT - IEEE Conference on Games, CoG 2020
PB - IEEE Computer Society
T2 - 2020 IEEE Conference on Games, CoG 2020
Y2 - 24 August 2020 through 27 August 2020
ER -