Reward Design using Large Language Model for Procedural Content Generation
- Author(s)
- Jinha Noh
- Type
- Thesis
- Degree
- Master
- Department
- Graduate School, Department of Interdisciplinary Convergence Technology (Culture Technology Program)
- Advisor
- Kim, KyungJoong
- Abstract
- Driven by the rapid growth of machine learning, reinforcement learning has been employed to solve problems in the field of game artificial intelligence. Notably, it has been used extensively in procedural content generation (PCG) to create game maps and achieve game balance. However, despite its significant influence on the performance of reinforcement learning, the reward function depends heavily on extensive knowledge of the game environment and numerous internal variables, necessitating the involvement of experts. This paper therefore proposes a method for generating reward functions for procedural content generation using Large Language Models (LLMs). By combining prompt engineering methods, such as Chain of Thought (CoT), with Procedural Content Generation via Reinforcement Learning (PCGRL), this approach aims to create reward functions that are difficult for humans to design and to improve the performance of PCGRL. The results indicate that a reward function produced with prompt engineering makes it possible to maintain controllability while ensuring the generation of diverse content. This study not only highlights the potential for improving accessibility in content generation but also aims to streamline the game AI development process. A brief illustrative sketch of this reward-generation idea appears after the record below.
- URI
- https://scholar.gist.ac.kr/handle/local/19645
- Fulltext
- http://gist.dcollection.net/common/orgView/200000878407
- Access and License
-
- File List
-
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.
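The sketch below illustrates, in a minimal and hedged form, the kind of pipeline the abstract describes: a Chain-of-Thought style prompt asks an LLM to write a Python reward function for a PCGRL-style map-editing task, and the returned source is compiled and evaluated on toy map statistics. The function names (`build_cot_prompt`, `query_llm`, `compile_reward`), the canned LLM response, and the "binary maze" statistics are illustrative assumptions, not the thesis' actual prompts, environment, or code.

```python
"""Minimal sketch (not the thesis code) of LLM-driven reward design for PCGRL.

Assumptions: a PCGRL-style task exposes map statistics such as shortest-path
length and number of connected regions, and an LLM call is abstracted behind
`query_llm`, which here returns a canned response so the sketch runs end to end.
"""


def build_cot_prompt(task_description: str) -> str:
    # Chain-of-Thought style prompt: ask the model to reason step by step
    # about relevant map statistics before emitting a reward function.
    return (
        "You are designing a reward function for a map-generation RL agent.\n"
        f"Task: {task_description}\n"
        "Think step by step about which map statistics matter (connectivity,\n"
        "path length, tile counts), then output a Python function\n"
        "`reward(old_stats, new_stats)` that returns a float."
    )


def query_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a plausible reward function
    # as source code so the rest of the sketch can execute.
    return (
        "def reward(old_stats, new_stats):\n"
        "    # Reward longer shortest paths; penalize splitting the map into regions.\n"
        "    r = new_stats['path_length'] - old_stats['path_length']\n"
        "    r += 2.0 * (old_stats['regions'] - new_stats['regions'])\n"
        "    return float(r)\n"
    )


def compile_reward(source: str):
    # Execute the generated source in an isolated namespace and return the
    # reward callable. In practice this would need sandboxing and validation.
    namespace = {}
    exec(source, namespace)
    return namespace["reward"]


if __name__ == "__main__":
    prompt = build_cot_prompt("binary maze: maximize the longest shortest path")
    reward_fn = compile_reward(query_llm(prompt))
    old = {"path_length": 12, "regions": 3}
    new = {"path_length": 15, "regions": 1}
    print(reward_fn(old, new))  # (15 - 12) + 2.0 * (3 - 1) = 7.0
```

In a full PCGRL loop, the compiled reward would be called at every map-editing step, so the quality of the LLM-generated function directly shapes what content the agent learns to produce.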