
Performance Evaluation Gaps in a Real-Time Strategy Game between Human and Artificial Intelligence Players.

Abstract
Since 2010, annual StarCraft artificial intelligence (AI) competitions have promoted the development of successful AI players for complex real-time strategy games. In these competitions, AI players are ranked by their win ratio over thousands of head-to-head matches. Although simple and easily implemented, this evaluation scheme may be of limited help in developing more human-competitive AI players. In this paper, we recruited 45 human StarCraft players at different expertise levels (expert/medium/novice) and asked them to play against the 18 top AI players selected from five years of competitions (2011-2015). The results show that human evaluations of AI players differ substantially from the current standard evaluation and ranking method. In fact, from a human standpoint, there has been little progress in the quality of StarCraft AI players over the years. AI-only tournaments may even lead to the creation of AI players that humans find unacceptable as competitors. This paper is the first to systematically explore the human evaluation of AI players, the evolution of AI players, and the differences between human perception and tournament-based evaluations. The findings of this paper can help AI developers at game companies and AI tournament organizers better incorporate the perspective of human users into their AI systems.
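
For illustration only: the abstract refers to ranking competition entrants by win ratio over head-to-head matches. The sketch below shows one way such a ranking could be computed, assuming match outcomes are available as (winner, loser) pairs; the function name, data format, and bot names are hypothetical and not taken from the paper or from any competition's actual tooling.

```python
from collections import defaultdict

def rank_by_win_ratio(match_results):
    """Rank players by win ratio over head-to-head matches.

    match_results: iterable of (winner, loser) pairs, one per match.
    Returns a list of (player, win_ratio) sorted best-first.
    """
    wins = defaultdict(int)   # matches won per player
    games = defaultdict(int)  # matches played per player
    for winner, loser in match_results:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1
    return sorted(
        ((player, wins[player] / games[player]) for player in games),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical round-robin results between three bots.
results = [("BotA", "BotB"), ("BotA", "BotC"), ("BotB", "BotC"), ("BotC", "BotA")]
print(rank_by_win_ratio(results))
```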
Author(s)
Kim, Man-Je; Kim, Kyung-Joong; Kim, Seungjun; Dey, Anind K.
Issued Date
2018-01
Type
Article
DOI
10.1109/ACCESS.2018.2800016
URI
https://scholar.gist.ac.kr/handle/local/13430
Publisher
Institute of Electrical and Electronics Engineers Inc.
Citation
IEEE Access, v.6, pp. 13575-13586
ISSN
2169-3536
Appears in Collections:
Department of AI Convergence > 1. Journal Articles
Access and License
  • Access status: Open
File List
  • No related files exist.
