
Reward Design using Large Language Model for Procedural Content Generation

Author(s)
Jinha Noh
Type
Thesis
Degree
Master
Department
Graduate School, Interdisciplinary Department of Converging Technology (Culture Technology Program)
Advisor
Kim, KyungJoong
Abstract
Driven by the rapid growth of machine learning, reinforcement learning has been employed as a method to solve problems in the field of game artificial intelligence. Notably, it has been extensively used in procedural content generation (PCG) to create game maps and achieve game balance. However, despite its significant influence on the performance of reinforcement learning, the reward function depends heavily on extensive knowledge of the game environment and numerous internal variables, necessitating the involvement of experts. Therefore, this paper proposes a method for generating reward functions for procedural content generation using Large Language Models (LLMs). By combining prompt engineering methods, such as Chain of Thought (CoT), with Procedural Content Generation via Reinforcement Learning (PCGRL), this approach aims to create reward functions that are challenging for humans to design and to improve the performance of PCGRL. The results of this paper indicate that, by using a reward function obtained with prompt engineering, it is possible to maintain controllability while ensuring the generation of diverse content. This study not only highlights the potential for improving accessibility in content generation but also aims to streamline the game AI development process.
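The abstract describes the pipeline only at a high level, so the sketch below illustrates the general idea under stated assumptions: an LLM is given a CoT-style prompt describing the map-generation task and asked to write a reward function, and the returned code is compiled into a callable that a PCGRL-style edit loop uses to score map changes. The prompt text, the LLM_RESPONSE stand-in, and the compute_reward signature are illustrative assumptions, not the thesis's actual prompts or code.

```python
# Minimal, self-contained sketch (assumed details, not the thesis implementation):
# an LLM is prompted, CoT-style, to write `compute_reward`, and the returned code
# is compiled into a callable used by a PCGRL-like map-editing loop.

COT_PROMPT = (
    "Think step by step about what makes a 2D maze playable "
    "(connectivity, path length, wall ratio), then write a Python function "
    "compute_reward(old_stats, new_stats) -> float that rewards map edits "
    "moving those statistics toward their targets."
)

# Stand-in for the LLM response; a real system would send COT_PROMPT to a
# chat-completion API and receive code of this shape back.
LLM_RESPONSE = """
def compute_reward(old_stats, new_stats):
    target_path = 20
    old_gap = abs(old_stats["path_length"] - target_path)
    new_gap = abs(new_stats["path_length"] - target_path)
    return float(old_gap - new_gap)   # positive if the edit moved us closer
"""

def build_reward_fn(llm_code: str):
    """Compile LLM-written reward code into a callable (sandbox this in practice)."""
    namespace = {}
    exec(llm_code, namespace)
    return namespace["compute_reward"]

if __name__ == "__main__":
    compute_reward = build_reward_fn(LLM_RESPONSE)
    # One simulated PCGRL edit step: the agent's tile change shortened the
    # shortest path from 28 to 24, moving it toward the target of 20.
    print(compute_reward({"path_length": 28}, {"path_length": 24}))  # 4.0
```

In this sketch the LLM-written function only replaces the hand-crafted reward; the rest of the PCGRL setup (observation, action space, training algorithm) is assumed to stay unchanged.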
URI
https://scholar.gist.ac.kr/handle/local/19645
Fulltext
http://gist.dcollection.net/common/orgView/200000878407
Alternative Author(s)
노진하
Appears in Collections:
Department of AI Convergence > 3. Theses(Master)
Access and License
  • Access type: Open
