Predicting combat outcomes and optimizing armies in StarCraft II by deep learning
- Abstract
- Real-time strategy (RTS) games, which are more complex than turn-based tabletop games such as Go, have been spotlighted in the field of artificial intelligence (AI) because of their similarity to real-world problems. In StarCraft II, agents cannot make decisions or control units until they evaluate and compare the expected outcomes of their choices. Among the ways to evaluate outcomes, combat models are an active area of research and form a basis for decision-making. The battlefield on which a combat takes place needs to be considered in combat models because its conditions can be influential enough to overturn the outcome of a battle; however, this effect has not been sufficiently examined. We introduce a combat winner predictor that utilizes both battlefield and troop information. Furthermore, we propose a constrained optimization framework with gradient updates that optimizes unit combinations based on the combat winner predictor. Experiments on large-scale combat datasets across various StarCraft II battlefields demonstrate the robustness and speed of the proposed methods: the framework achieved better prediction accuracy and retrieved winning unit combinations faster. Incorporating these frameworks into AI agents can improve their decision-making power.
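- The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: a learned win predictor scores a battlefield together with a relaxed (continuous) unit-count vector, and the unit combination is improved by gradient ascent on the predicted win probability, with a projection back onto a supply-budget constraint after each step. All names and parameters here (WinPredictor, optimize_unit_combination, supply_budget, unit_costs) are hypothetical assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical win predictor: maps battlefield features plus a friendly
# unit-composition vector to a win probability. This is an illustrative
# stand-in, not the architecture used in the paper.
class WinPredictor(nn.Module):
    def __init__(self, battlefield_dim: int, num_unit_types: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(battlefield_dim + num_unit_types, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, battlefield: torch.Tensor, units: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(torch.cat([battlefield, units], dim=-1)))


def optimize_unit_combination(
    predictor: WinPredictor,
    battlefield: torch.Tensor,      # fixed battlefield/terrain features
    unit_costs: torch.Tensor,       # per-unit-type supply cost (assumed known)
    supply_budget: float = 200.0,   # StarCraft II supply cap used as the constraint
    steps: int = 200,
    lr: float = 0.1,
) -> torch.Tensor:
    """Gradient-ascent sketch: maximize predicted win probability over a
    relaxed unit-count vector, projecting back onto the supply-budget
    constraint after every update."""
    units = torch.full((unit_costs.shape[0],), 1.0, requires_grad=True)
    opt = torch.optim.Adam([units], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = -predictor(battlefield, units).squeeze()  # ascend on win probability
        loss.backward()
        opt.step()
        with torch.no_grad():
            units.clamp_(min=0.0)                        # unit counts cannot be negative
            cost = (units * unit_costs).sum()
            if cost > supply_budget:                     # rescale back within the budget
                units.mul_(supply_budget / cost)

    # Round the relaxed solution to integer unit counts for actual play.
    return units.detach().round()


if __name__ == "__main__":
    torch.manual_seed(0)
    predictor = WinPredictor(battlefield_dim=8, num_unit_types=5)
    battlefield = torch.randn(8)
    unit_costs = torch.tensor([1.0, 2.0, 2.0, 3.0, 6.0])  # hypothetical supply costs
    best = optimize_unit_combination(predictor, battlefield, unit_costs)
    print("optimized unit counts:", best.tolist())
```

- The continuous relaxation plus rounding is one plausible reading of "gradient updates" over discrete unit combinations; the paper may handle the discreteness and constraints differently.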
- Author(s)
- Lee, Donghyeon; Kim, Man-Je; Ahn, Chang Wook
- Issued Date
- 2021-12
- Type
- Article
- DOI
- 10.1016/j.eswa.2021.115592
- URI
- https://scholar.gist.ac.kr/handle/local/11152