Details
Original language | English |
---|---|
Pages (from-to) | 527-537 |
Number of pages | 11 |
Journal | IEEE Transactions on Games |
Volume | 15 |
Issue number | 4 |
Early online date | 2 Jun 2023 |
Publication status | Published - Dec 2023 |
Abstract
Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search spaces they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have achieved considerable success in the planning domain, but transferring them to strategy games raises questions of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically: nodes are first grouped by state abstraction for efficient exploration and later separated to refine exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding the use of an imperfect state abstraction throughout the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable reduction in search tree size at the cost of a small increase in computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.
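The abstract only sketches the elastic-tree mechanism at a high level, so the snippet below illustrates one way the core idea could look in code. It is a minimal, hypothetical Python sketch, not the authors' implementation (their actual code lives in the linked Stratega repository): the abstraction function `abstract_key`, the split point `SPLIT_FRACTION`, and the `expand`/`simulate` callbacks are all illustrative assumptions.

```python
# Minimal sketch of the elastic-tree idea described in the abstract: share UCB
# statistics across nodes that map to the same abstract state early in the
# search, then "split" back to per-node statistics for the rest of the budget.
# NOT the authors' implementation; `abstract_key`, `SPLIT_FRACTION`, and the
# `expand`/`simulate` callbacks are illustrative assumptions.
import math
from collections import defaultdict

SPLIT_FRACTION = 0.5          # assumed: group nodes for the first half of the budget
EXPLORATION_C = math.sqrt(2)  # standard UCB1 exploration constant


class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children = []
        self.visits = 0       # per-node statistics, used after the split
        self.value = 0.0


def abstract_key(state):
    """Hypothetical state abstraction: states sharing a key are grouped."""
    return state.coarse_features()  # assumed interface on the game state


def ucb(child, parent_visits, group_stats, use_abstraction):
    # Read either the shared (grouped) statistics or the node's own statistics.
    if use_abstraction:
        visits, value = group_stats[abstract_key(child.state)]
    else:
        visits, value = child.visits, child.value
    if visits == 0:
        return float("inf")
    return value / visits + EXPLORATION_C * math.sqrt(math.log(parent_visits) / visits)


def elastic_mcts(root, budget, expand, simulate):
    group_stats = defaultdict(lambda: (0, 0.0))  # abstract state -> (visits, total value)
    for i in range(budget):
        use_abstraction = i < SPLIT_FRACTION * budget  # the "elastic" switch

        # Selection: descend using grouped statistics while abstraction is active.
        node = root
        while node.children:
            node = max(node.children,
                       key=lambda c: ucb(c, max(node.visits, 1),
                                         group_stats, use_abstraction))

        # Expansion and simulation (both supplied by the caller).
        node = expand(node) or node
        reward = simulate(node.state)

        # Backpropagation: update per-node and grouped statistics together.
        while node is not None:
            node.visits += 1
            node.value += reward
            v, q = group_stats[abstract_key(node.state)]
            group_stats[abstract_key(node.state)] = (v + 1, q + reward)
            node = node.parent

    return max(root.children, key=lambda c: c.visits).action
```

In terms of the abstract, the early grouped phase mirrors the "grouped by state abstraction for efficient exploration" stage and the later per-node phase mirrors the "separated for refining exploitable action sequences" stage; how and when the tree is split is exactly what the paper specifies in detail.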
Keywords
- Artificial intelligence, Buildings, Complexity theory, Decision making, Games, Monte Carlo methods, Monte Carlo tree search (MCTS), Planning, State abstraction, Strategy games
ASJC Scopus subject areas
- Computer Science (all)
  - Software
  - Artificial Intelligence
- Engineering (all)
  - Electrical and Electronic Engineering
  - Control and Systems Engineering
Cite this
Xu, Linjie; Dockhorn, Alexander; Perez-Liebana, Diego. Elastic Monte Carlo Tree Search. In: IEEE Transactions on Games, Vol. 15, No. 4, 12.2023, p. 527-537.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Elastic Monte Carlo Tree Search
AU - Xu, Linjie
AU - Dockhorn, Alexander
AU - Perez-Liebana, Diego
N1 - Funding Information: This work was supported by U.K. EPSRC under Grant EP/T008962/1.
PY - 2023/12
Y1 - 2023/12
N2 - Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search spaces they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have achieved considerable success in the planning domain, but transferring them to strategy games raises questions of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically: nodes are first grouped by state abstraction for efficient exploration and later separated to refine exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding the use of an imperfect state abstraction throughout the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable reduction in search tree size at the cost of a small increase in computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.
AB - Strategy games are a challenge for the design of AI agents due to their complexity and the combinatorial search spaces they produce. State abstraction has been applied in different domains to shrink the search space. Automatic state abstraction methods have achieved considerable success in the planning domain, but transferring them to strategy games raises questions of scalability. In this paper, we propose Elastic MCTS, an algorithm that uses automatic state abstraction to play strategy games. In Elastic MCTS, tree nodes are clustered dynamically: nodes are first grouped by state abstraction for efficient exploration and later separated to refine exploitable action sequences. Such an elastic tree benefits from efficient information sharing while avoiding the use of an imperfect state abstraction throughout the whole search process. We provide empirical analyses of the proposed method in three strategy games of different complexity. Our results show that in all games, Elastic MCTS outperforms MCTS baselines by a large margin, with a considerable reduction in search tree size at the cost of a small increase in computation time. The code for reproducing the reported results can be found at https://github.com/GAIGResearch/Stratega.
KW - Artificial intelligence
KW - Buildings
KW - Complexity theory
KW - Decision making
KW - Games
KW - MCTS
KW - Monte Carlo methods
KW - Planning
KW - State Abstraction
KW - Strategy Games
KW - state abstraction
KW - Monte Carlo tree search (MCTS)
KW - strategy games
UR - http://www.scopus.com/inward/record.url?scp=85161604855&partnerID=8YFLogxK
U2 - 10.1109/TG.2023.3282351
DO - 10.1109/TG.2023.3282351
M3 - Article
AN - SCOPUS:85161604855
VL - 15
SP - 527
EP - 537
JO - IEEE Transactions on Games
JF - IEEE Transactions on Games
SN - 2475-1502
IS - 4
ER -