Elastic Monte Carlo Tree Search

Publication: Contribution to journal › Article › Research › Peer review

Authors

  • Linjie Xu
  • Alexander Dockhorn
  • Diego Perez-Liebana

External organisations

  • Queen Mary University of London

Details

Original language: English
Pages (from - to): 527-537
Number of pages: 11
Journal: IEEE Transactions on Games
Volume: 15
Issue number: 4
Early online date: 2 June 2023
Publication status: Published - December 2023

Abstract

Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search space they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have gained much success in the planning domain, but their transfer to strategy games raises questions of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically: nodes are first grouped by state abstraction for efficient exploration and later separated to refine exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding the use of an imperfect state abstraction throughout the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable reduction in search tree size at the cost of a small increase in computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.
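The abstract describes the elastic tree mechanism at a high level: early in the search, nodes are grouped by an abstract state so that they share statistics for efficient exploration; after a split point, the groups are dissolved and each node continues with its own exact statistics. The Python sketch below illustrates only that grouping-then-splitting idea under simplifying assumptions: the toy game, the abstraction_key bucketing, and the fixed split_fraction schedule are illustrative placeholders and do not reproduce the paper's automatic state abstraction or the Stratega implementation linked above.

import math
import random
from collections import defaultdict

# Toy game (placeholder): start at 0, add 1 or 2 per move; reward 1 for landing exactly on 10.
def legal_actions(state):
    return [] if state >= 10 else [1, 2]

def step(state, action):
    return state + action

def rollout(state):
    while legal_actions(state):
        state = step(state, random.choice(legal_actions(state)))
    return 1.0 if state == 10 else 0.0

# Illustrative abstraction: collapse states into coarse buckets (not the paper's method).
def abstraction_key(state):
    return state // 3

class Node:
    """Tree node; while the tree is 'grouped', statistics are read from its abstract group."""
    def __init__(self, state, parent=None, action=None):
        self.state = state
        self.parent = parent
        self.action = action
        self.children = []
        self.untried = legal_actions(state)
        self.visits = 0      # exact statistics, used after the split
        self.value = 0.0

def elastic_mcts(root_state, iterations=2000, split_fraction=0.5, c=1.4):
    root = Node(root_state)
    group_stats = defaultdict(lambda: [0, 0.0])   # abstract key -> [visits, total reward]
    split_at = int(iterations * split_fraction)

    for it in range(iterations):
        grouped = it < split_at    # elastic phase: shared statistics early, exact later
        node = root

        # Selection: UCB1 over either group statistics (grouped) or exact node statistics.
        while not node.untried and node.children:
            def ucb(child):
                if grouped:
                    v, w = group_stats[abstraction_key(child.state)]
                else:
                    v, w = child.visits, child.value
                if v == 0:
                    return float("inf")
                return w / v + c * math.sqrt(math.log(max(1, node.visits)) / v)
            node = max(node.children, key=ucb)

        # Expansion: add one untried child.
        if node.untried:
            action = node.untried.pop()
            child = Node(step(node.state, action), parent=node, action=action)
            node.children.append(child)
            node = child

        # Simulation: random rollout to a terminal state.
        reward = rollout(node.state)

        # Backpropagation: always keep exact statistics; also update the shared group while grouped.
        while node is not None:
            node.visits += 1
            node.value += reward
            if grouped:
                g = group_stats[abstraction_key(node.state)]
                g[0] += 1
                g[1] += reward
            node = node.parent

    # Recommend the root action with the most (exact) visits.
    return max(root.children, key=lambda ch: ch.visits).action

if __name__ == "__main__":
    print("Recommended first action:", elastic_mcts(0))

In the paper, the abstraction is built automatically and the moment of splitting is part of the algorithm's design; the fixed fraction of the iteration budget used here stands in for that decision purely to keep the sketch short.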

ASJC Scopus subject areas

Cite this

Elastic Monte Carlo Tree Search. / Xu, Linjie; Dockhorn, Alexander; Perez-Liebana, Diego.
In: IEEE Transactions on Games, Vol. 15, No. 4, 12.2023, pp. 527-537.


Xu, L, Dockhorn, A & Perez-Liebana, D 2023, 'Elastic Monte Carlo Tree Search', IEEE Transactions on Games, vol. 15, no. 4, pp. 527-537. https://doi.org/10.1109/TG.2023.3282351
Xu, L., Dockhorn, A., & Perez-Liebana, D. (2023). Elastic Monte Carlo Tree Search. IEEE Transactions on Games, 15(4), 527-537. https://doi.org/10.1109/TG.2023.3282351
Xu L, Dockhorn A, Perez-Liebana D. Elastic Monte Carlo Tree Search. IEEE Transactions on Games. 2023 Dec;15(4):527-537. Epub 2023 Jun 2. doi: 10.1109/TG.2023.3282351
Xu, Linjie ; Dockhorn, Alexander ; Perez-Liebana, Diego. / Elastic Monte Carlo Tree Search. In: IEEE Transactions on Games. 2023 ; Vol. 15, No. 4. pp. 527-537.
BibTeX
@article{dacd5216123b4ee4966752f8cfc313c9,
title = "Elastic Monte Carlo Tree Search",
abstract = "Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search space they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have gained much success in the planning domain and their transfer to strategy games raises a question of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically. First, nodes are grouped by state abstraction for efficient exploration, to later be separated for refining exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding using an imperfect state abstraction during the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our empirical results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable search tree size reduction at the expense of small computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.",
keywords = "Artificial intelligence, Buildings, Complexity theory, Decision making, Games, MCTS, Monte Carlo methods, Planning, State Abstraction, Strategy Games, state abstraction, Monte Carlo tree search (MCTS), strategy games",
author = "Linjie Xu and Alexander Dockhorn and Diego Perez-Liebana",
note = "Funding Information: This work was supported by U.K. EPSRC under Grant EP/T008962/1.",
year = "2023",
month = dec,
doi = "10.1109/TG.2023.3282351",
language = "English",
volume = "15",
pages = "527--537",
number = "4",

}

RIS

TY - JOUR

T1 - Elastic Monte Carlo Tree Search

AU - Xu, Linjie

AU - Dockhorn, Alexander

AU - Perez-Liebana, Diego

N1 - Funding Information: This work was supported by U.K. EPSRC under Grant EP/T008962/1.

PY - 2023/12

Y1 - 2023/12

N2 - Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search space they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have gained much success in the planning domain and their transfer to strategy games raises a question of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically. First, nodes are grouped by state abstraction for efficient exploration, to later be separated for refining exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding using an imperfect state abstraction during the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our empirical results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable search tree size reduction at the expense of small computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.

AB - Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search space they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have gained much success in the planning domain and their transfer to strategy games raises a question of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically. First, nodes are grouped by state abstraction for efficient exploration, to later be separated for refining exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding using an imperfect state abstraction during the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our empirical results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable search tree size reduction at the expense of small computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.

KW - Artificial intelligence

KW - Buildings

KW - Complexity theory

KW - Decision making

KW - Games

KW - MCTS

KW - Monte Carlo methods

KW - Planning

KW - State Abstraction

KW - Strategy Games

KW - state abstraction

KW - Monte Carlo tree search (MCTS)

KW - strategy games

UR - http://www.scopus.com/inward/record.url?scp=85161604855&partnerID=8YFLogxK

U2 - 10.1109/TG.2023.3282351

DO - 10.1109/TG.2023.3282351

M3 - Article

AN - SCOPUS:85161604855

VL - 15

SP - 527

EP - 537

JO - IEEE Transactions on Games

JF - IEEE Transactions on Games

SN - 2475-1502

IS - 4

ER -
