
Deep ensemble learning of tactics to control the main force in a real-time strategy game

Abstract
Professional StarCraft players tend to focus on managing the most important group of units (called the main force) during gameplay. Although such macro-level skills are evident in human game replays, there has been little study of the high-level knowledge used for tactical decision-making, or of how to exploit it to build AI modules. In this paper, we propose a novel tactical decision-making model that controls the main force. We categorized the future movement direction of the main force into six classes (e.g., toward the enemy's main base), and the model learned to predict the next destination of the main force from the large amount of experience captured in human game replays. To obtain training data, we extracted information from 12,057 replay files produced by human players and recovered the position and movement direction of the main forces with a novel detection algorithm. We applied convolutional neural networks and a Vision Transformer to handle the high-dimensional state representation and large state space. Furthermore, we analyzed human tactics relating to the main force. The model achieved top-3, top-2, and top-1 accuracies of 88.5%, 76.8%, and 56.9%, respectively. These results show that our method can learn human macro-level intentions in real-time strategy games. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
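
As a rough illustration of the approach the abstract describes, the following is a minimal PyTorch sketch (not the authors' code) of a six-class movement-direction classifier over a stacked spatial state. The input layout (8 feature planes on a 64x64 minimap grid), the network shape, and the class labels are assumptions for illustration; the paper's exact state representation and architectures are not given on this page.

# Minimal sketch of a six-class main-force direction classifier.
# Assumptions (not from the paper): 8 spatial feature planes, 64x64 grid,
# and the hypothetical class labels below.
import torch
import torch.nn as nn

DIRECTION_CLASSES = [  # hypothetical labels; the abstract names e.g. "toward the enemy's main base"
    "enemy_main_base", "enemy_expansion", "own_main_base",
    "own_expansion", "map_center", "hold_position",
]

class MainForceDirectionCNN(nn.Module):
    """CNN mapping a stacked spatial state to 6 direction logits."""
    def __init__(self, in_channels: int = 8, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: a batch of 4 states; the paper's top-1/2/3 accuracies correspond to
# whether the true destination appears among the k highest-ranked logits.
model = MainForceDirectionCNN()
logits = model(torch.randn(4, 8, 64, 64))
top3 = logits.topk(3, dim=1).indices
print(DIRECTION_CLASSES[top3[0, 0].item()])  # most likely direction for the first state

Trained with a standard cross-entropy loss on replay-derived (state, next-destination) pairs, ranking the six logits directly yields the top-k accuracies reported in the abstract.
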
Author(s)
Han, Isaac; Kim, Kyung-Joong
Issued Date
2024-01
Type
Article
DOI
10.1007/s11042-023-15742-x
URI
https://scholar.gist.ac.kr/handle/local/9807
Publisher
Springer
Citation
Multimedia Tools and Applications, v.83, no.4, pp.12059-12087
ISSN
1380-7501
Appears in Collections:
Department of AI Convergence > 1. Journal Articles
Access and License
  • Access type: Open
File List
  • No related files exist.
