Predicting Combat Outcomes and Optimizing Armies in StarCraft II by Deep Learning
- Author(s)
- Donghyeon Lee
- Type
- Thesis
- Degree
- Master
- Department
- Graduate School, School of Electrical Engineering and Computer Science
- Advisor
- Ahn, Chang Wook
- Abstract
- Real-time strategy (RTS) games are more complex than turn-based board games such as Go, and this complexity has drawn attention in the field of artificial intelligence (AI) because of its similarity to real-world problems. In StarCraft II, agents cannot make decisions or control units effectively unless they can predict and compare the expected outcomes of their choices. Among the approaches to outcome prediction, combat models are an active area of research. A combat model is a basis for decisions such as whether to advance or retreat and which units to train. The battlefield on which a combat takes place must also be considered, because it can have enough influence to overturn the outcome of a battle; however, its effect has not been sufficiently examined.
We introduce a combat winner predictor that uses both battlefield and troop information. Building on this predictor, we propose a constrained optimization framework with gradient updates that searches for winning unit combinations. Experiments on large-scale combat datasets covering various StarCraft II battlefields demonstrate the robustness and speed of the proposed methods: the framework achieves higher prediction accuracy and retrieves winning unit combinations faster. Incorporating these components into AI agents can improve their decision-making.
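The abstract describes two coupled pieces: a learned predictor that maps battlefield and troop features to a win probability, and a gradient-based optimizer that adjusts a unit combination under resource constraints to maximize that probability. The sketch below illustrates the general idea only; it is not the thesis's model. It substitutes a logistic model with random weights for the trained deep predictor, and the feature layout, unit-type count, and supply cap are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature layout: 3 ally unit counts, 3 enemy unit counts,
# 2 battlefield (terrain) features. W stands in for trained weights.
W = rng.normal(size=8)
b = 0.0

def win_prob(ally, enemy, terrain):
    """Logistic stand-in for the combat winner predictor."""
    x = np.concatenate([ally, enemy, terrain])
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def optimize_army(enemy, terrain, supply_cap=20.0, steps=200, lr=0.5):
    """Projected gradient ascent on ally unit counts under a supply budget."""
    ally = np.full(3, supply_cap / 3)            # start from a uniform army
    for _ in range(steps):
        p = win_prob(ally, enemy, terrain)
        grad = p * (1.0 - p) * W[:3]             # d p / d ally for the logistic model
        ally = ally + lr * grad                  # ascend the win probability
        ally = np.clip(ally, 0.0, None)          # no negative unit counts
        total = ally.sum()
        if total > supply_cap:                   # project back onto the budget
            ally *= supply_cap / total
    return ally
```

In the thesis's setting the gradient would instead be obtained by backpropagating through the trained network while holding enemy and battlefield inputs fixed; the projection step is one simple way to keep the search inside the feasible unit combinations.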
- URI
- https://scholar.gist.ac.kr/handle/local/33231
- Fulltext
- http://gist.dcollection.net/common/orgView/200000907427