Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning
- Abstract
- This paper proposes a DRL-based training method for spellcaster units in StarCraft II, one of the most representative Real-Time Strategy (RTS) games. During combat in StarCraft II, micro-controlling the various combat units is crucial to winning the game. Among these units, the spellcaster is one of the most significant components, as it greatly influences combat results. Despite this importance, training methods for carefully controlling spellcasters have received little attention in related studies due to their complexity. We therefore propose a training method for spellcaster units in StarCraft II using the A3C algorithm. The main idea is to train two Protoss spellcaster units, under three newly designed minigames that each represent a distinct spell-usage scenario, to use 'Force Field' and 'Psionic Storm' effectively. The trained agents achieve win rates of more than 85% in each scenario. We present a new training method for spellcaster units that relaxes a limitation of StarCraft II AI research, and we expect it can be extended to other advanced, tactical units by applying transfer learning in more complex minigame scenarios or full game maps.
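The abstract's core technique is an advantage actor-critic update (the heart of A3C, minus the asynchronous workers). The sketch below is purely illustrative and not taken from the paper: the toy tabular state space, learning rates, and function names are all hypothetical assumptions, standing in for the real StarCraft II observation and action spaces.

```python
import numpy as np

# Illustrative A2C-style update: the core of A3C without async workers.
# The 4-state / 3-action toy setting is hypothetical, not from the paper.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 3
theta = np.zeros((n_states, n_actions))  # tabular policy logits
v = np.zeros(n_states)                   # state-value estimates
gamma, lr = 0.99, 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def actor_critic_update(s, a, r, s_next, done):
    """One advantage actor-critic update on a transition (s, a, r, s')."""
    target = r + (0.0 if done else gamma * v[s_next])
    advantage = target - v[s]
    # Critic: move V(s) toward the bootstrapped one-step target.
    v[s] += lr * advantage
    # Actor: policy-gradient step on the logits, weighted by the advantage.
    probs = softmax(theta[s])
    grad = -probs
    grad[a] += 1.0  # gradient of log pi(a|s) w.r.t. the logits
    theta[s] += lr * advantage * grad
    return advantage

# A single rewarded transition raises V(s=0) and makes action 1 more likely.
adv = actor_critic_update(s=0, a=1, r=1.0, s_next=2, done=False)
```

In A3C proper, many such updates run in parallel workers that share the policy and value parameters, which is what makes training tractable on environments as large as StarCraft II minigames.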
- Author(s)
- Song, Wooseok; Suh, Woong Hyun; Ahn, Chang Wook
- Issued Date
- 2020-06
- Type
- Article
- DOI
- 10.3390/electronics9060996
- URI
- https://scholar.gist.ac.kr/handle/local/12104
- Access & License
-
- File List
-
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.