Efficient Multi-Agent Reinforcement Learning for Many Agents

Abstract
Recently, multi-agent systems have been studied in reinforcement learning to coordinate cooperative agents. Simultaneously controlling a large number of agents is challenging, and various approaches have been proposed. In particular, applying high-dimensional methods to many-agent problems is a significant challenge, as complexity grows exponentially with the number of agents. Furthermore, policy convergence can be difficult because the contribution of each individual agent is unclear. In this work, we flexibly decomposed a multi-agent problem into sub-multi-agent tasks using a clustering method and applied this decomposition within a hierarchical structure. After abstracting unit movements through this hierarchical approach, group-level action spaces and micro-control tasks were mapped onto high- and low-level actions, respectively. We demonstrated our method on combat scenarios in the StarCraft video game. It successfully decomposed a complex multi-agent problem into homogeneous sub-tasks and made the training process more efficient and less expensive.
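
The abstract describes a clustering step that partitions agents into groups and a hierarchy that maps group-level and per-unit control onto high- and low-level actions. The following is a minimal sketch of that idea, not the authors' implementation: it assumes agents are grouped by 2-D position with plain k-means, and the names HIGH_LEVEL_ACTIONS, cluster_agents, and low_level_actions are hypothetical.

    # Sketch only: clustering-based decomposition of a many-agent control problem
    # into groups, with abstract group actions refined into per-agent micro actions.
    import numpy as np

    HIGH_LEVEL_ACTIONS = ["attack", "retreat", "hold"]  # assumed group-level action set

    def cluster_agents(positions, n_groups, n_iters=20):
        """Plain k-means over agent (x, y) positions; returns a group id per agent."""
        rng = np.random.default_rng(0)
        centers = positions[rng.choice(len(positions), n_groups, replace=False)].astype(float)
        for _ in range(n_iters):
            # assign each agent to its nearest group center
            dists = np.linalg.norm(positions[:, None, :] - centers[None, :, :], axis=-1)
            labels = dists.argmin(axis=1)
            # move each center to the mean position of its assigned agents
            for g in range(n_groups):
                if np.any(labels == g):
                    centers[g] = positions[labels == g].mean(axis=0)
        return labels

    def low_level_actions(group_action, member_ids):
        """Translate one abstract group action into per-agent micro commands (placeholder)."""
        return {int(a): f"{group_action}_micro" for a in member_ids}

    if __name__ == "__main__":
        # five agents at 2-D positions forming two spatial clumps
        positions = np.array([[1.0, 1.0], [1.2, 0.9], [8.0, 8.1], [7.9, 8.3], [8.2, 7.8]])
        labels = cluster_agents(positions, n_groups=2)
        for g in np.unique(labels):
            # a high-level policy would choose this action; here we simply take the first one
            chosen = HIGH_LEVEL_ACTIONS[0]
            print(g, chosen, low_level_actions(chosen, np.where(labels == g)[0]))

In this sketch, the clustering output defines the sub-multi-agent tasks, the per-group action choice plays the role of the high-level action, and the per-agent micro commands stand in for the low-level control described in the abstract.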
Author(s)
Baek, In-Chang; Kim, KyungJoong
Issued Date
2019-10-08
Type
Conference Paper
URI
https://scholar.gist.ac.kr/handle/local/22907
Publisher
AAAI
Citation
AIIDE-19 Workshop on Artificial Intelligence for Strategy Games
Conference Place
US
Appears in Collections:
Department of AI Convergence > 2. Conference Papers
Access & License
  • Access type: Open
File List
  • No related files exist.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.