Elastic Monte Carlo Tree Search

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Linjie Xu
  • Alexander Dockhorn
  • Diego Perez-Liebana

Research Organisations

External Research Organisations

  • Queen Mary University of London

Details

Original language: English
Pages (from-to): 527-537
Number of pages: 11
Journal: IEEE Transactions on Games
Volume: 15
Issue number: 4
Early online date: 2 Jun 2023
Publication status: Published - Dec 2023

Abstract

Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search space they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have gained much success in the planning domain and their transfer to strategy games raises a question of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically. First, nodes are grouped by state abstraction for efficient exploration, to later be separated for refining exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding using an imperfect state abstraction during the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our empirical results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable search tree size reduction at the expense of small computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.
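For illustration only, the Python sketch below shows one way the elastic-tree idea from the abstract could look in code: nodes that map to the same abstract state share a single statistics object while the tree is grouped, and selection falls back to each node's own statistics once a split iteration is reached. The names (ElasticNode, ElasticTree, split_iteration), the toy parity abstraction, and the random rollout reward are assumptions made for this sketch; the authors' actual implementation is the C++ code in the Stratega repository linked above.

# Minimal illustrative sketch (not the authors' Stratega implementation): tree nodes
# mapping to the same abstract state share one statistics object while the tree is
# "grouped"; after a split iteration, selection uses each node's own statistics.
import math
import random
from collections import defaultdict


class ClusterStats:
    """Visit/value statistics shared by all nodes in one abstract cluster."""
    def __init__(self):
        self.visits = 0
        self.value_sum = 0.0


class ElasticNode:
    def __init__(self, state, abstraction_key):
        self.state = state
        self.abstraction_key = abstraction_key
        self.children = []
        self.own_visits = 0        # per-node statistics, used after the split
        self.own_value_sum = 0.0
        self.cluster = None        # shared statistics, used while grouped

    def stats(self, grouped):
        if grouped and self.cluster is not None:
            return self.cluster.visits, self.cluster.value_sum
        return self.own_visits, self.own_value_sum

    def update(self, reward, grouped):
        self.own_visits += 1
        self.own_value_sum += reward
        if grouped and self.cluster is not None:
            self.cluster.visits += 1
            self.cluster.value_sum += reward


class ElasticTree:
    def __init__(self, split_iteration):
        self.split_iteration = split_iteration  # iteration at which the abstraction is dropped
        self.iteration = 0
        self.clusters = defaultdict(ClusterStats)

    @property
    def grouped(self):
        return self.iteration < self.split_iteration

    def register(self, node):
        # Nodes with identical abstraction keys share one ClusterStats object.
        node.cluster = self.clusters[node.abstraction_key]

    def ucb(self, parent, child, c=math.sqrt(2)):
        n_parent, _ = parent.stats(self.grouped)
        n_child, value_sum = child.stats(self.grouped)
        if n_child == 0:
            return float("inf")
        return value_sum / n_child + c * math.sqrt(math.log(max(n_parent, 1)) / n_child)


if __name__ == "__main__":
    tree = ElasticTree(split_iteration=100)
    root = ElasticNode(state="root", abstraction_key="root")
    tree.register(root)
    for s in range(4):
        child = ElasticNode(state=s, abstraction_key=s % 2)  # toy abstraction: state parity
        tree.register(child)
        root.children.append(child)
    for _ in range(200):  # first 100 iterations grouped, remaining 100 on the split tree
        tree.iteration += 1
        best = max(root.children, key=lambda ch: tree.ucb(root, ch))
        reward = random.random()             # stand-in for a rollout result
        best.update(reward, tree.grouped)
        root.update(reward, tree.grouped)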

Keywords

    Artificial intelligence, Buildings, Complexity theory, Decision making, Games, MCTS, Monte Carlo methods, Planning, State Abstraction, Strategy Games, state abstraction, Monte Carlo tree search (MCTS), strategy games

Cite this

Elastic Monte Carlo Tree Search. / Xu, Linjie; Dockhorn, Alexander; Perez-Liebana, Diego.
In: IEEE Transactions on Games, Vol. 15, No. 4, 12.2023, p. 527-537.

Research output: Contribution to journal › Article › Research › peer review

Xu, L, Dockhorn, A & Perez-Liebana, D 2023, 'Elastic Monte Carlo Tree Search', IEEE Transactions on Games, vol. 15, no. 4, pp. 527-537. https://doi.org/10.1109/TG.2023.3282351
Xu, L., Dockhorn, A., & Perez-Liebana, D. (2023). Elastic Monte Carlo Tree Search. IEEE Transactions on Games, 15(4), 527-537. https://doi.org/10.1109/TG.2023.3282351
Xu L, Dockhorn A, Perez-Liebana D. Elastic Monte Carlo Tree Search. IEEE Transactions on Games. 2023 Dec;15(4):527-537. Epub 2023 Jun 2. doi: 10.1109/TG.2023.3282351
Xu, Linjie ; Dockhorn, Alexander ; Perez-Liebana, Diego. / Elastic Monte Carlo Tree Search. In: IEEE Transactions on Games. 2023 ; Vol. 15, No. 4. pp. 527-537.
BibTeX
@article{dacd5216123b4ee4966752f8cfc313c9,
title = "Elastic Monte Carlo Tree Search",
abstract = "Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search space they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have gained much success in the planning domain and their transfer to strategy games raises a question of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically. First, nodes are grouped by state abstraction for efficient exploration, to later be separated for refining exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding using an imperfect state abstraction during the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our empirical results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable search tree size reduction at the expense of small computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.",
keywords = "Artificial intelligence, Buildings, Complexity theory, Decision making, Games, MCTS, Monte Carlo methods, Planning, State Abstraction, Strategy Games, state abstraction, Monte Carlo tree search (MCTS), strategy games",
author = "Linjie Xu and Alexander Dockhorn and Diego Perez-Liebana",
note = "Funding Information: This work was supported by U.K. EPSRC under Grant EP/T008962/1.",
year = "2023",
month = dec,
doi = "10.1109/TG.2023.3282351",
language = "English",
volume = "15",
pages = "527--537",
number = "4",
journal = "IEEE Transactions on Games",
issn = "2475-1502",

}

RIS

TY - JOUR

T1 - Elastic Monte Carlo Tree Search

AU - Xu, Linjie

AU - Dockhorn, Alexander

AU - Perez-Liebana, Diego

N1 - Funding Information: This work was supported by U.K. EPSRC under Grant EP/T008962/1.

PY - 2023/12

Y1 - 2023/12

N2 - Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search space they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have gained much success in the planning domain and their transfer to strategy games raises a question of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically. First, nodes are grouped by state abstraction for efficient exploration, to later be separated for refining exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding using an imperfect state abstraction during the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our empirical results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable search tree size reduction at the expense of small computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.

AB - Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search space they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have gained much success in the planning domain and their transfer to strategy games raises a question of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically. First, nodes are grouped by state abstraction for efficient exploration, to later be separated for refining exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding using an imperfect state abstraction during the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our empirical results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable search tree size reduction at the expense of small computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.

KW - Artificial intelligence

KW - Buildings

KW - Complexity theory

KW - Decision making

KW - Games

KW - MCTS

KW - Monte Carlo methods

KW - Planning

KW - State Abstraction

KW - Strategy Games

KW - state abstraction

KW - Monte Carlo tree search (MCTS)

KW - strategy games

UR - http://www.scopus.com/inward/record.url?scp=85161604855&partnerID=8YFLogxK

U2 - 10.1109/TG.2023.3282351

DO - 10.1109/TG.2023.3282351

M3 - Article

AN - SCOPUS:85161604855

VL - 15

SP - 527

EP - 537

JO - IEEE Transactions on Games

JF - IEEE Transactions on Games

SN - 2475-1502

IS - 4

ER -
