Human-Like Procedural Level Generation via Reinforcement Learning with Contrastive Language-State Embedding
- Author(s)
- Lee Seoyoung
- Type
- Thesis
- Degree
- Master
- Department
- AI Graduate School
- Advisor
- Kim, KyungJoong
- Abstract
- This thesis proposes HL-PCGRL (Human-Like Procedural Content Generation via Reinforcement Learning), a reinforcement learning framework for generating 2D game maps in human-like styles conditioned on natural language instructions. While existing methods effectively satisfy quantitative conditions, they often fail to capture the structural styles of human designers. HL-PCGRL addresses this limitation by introducing a contrastive encoder that aligns natural language and map states within a shared embedding space. This encoder serves both as policy input and as the basis for a human-similarity reward. Experimental results demonstrate that HL-PCGRL improves the human-likeness metric by an average of 8.24% over existing methods while maintaining comparable task performance. Additionally, the trade-off between condition satisfaction and stylistic similarity is shown to be controllable. This work presents a novel approach for integrating human-centered design constraints into reinforcement learning-based procedural content generation.
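- The abstract's core idea, using one encoder to embed both the instruction and the map state in a shared space and scoring their cosine similarity as a reward, can be sketched as below. This is a minimal illustration, not the thesis's actual implementation: the projection matrices, feature dimensions, and function names are all hypothetical, and a real contrastive encoder would be trained (e.g. CLIP-style) rather than randomly initialized.

```python
import numpy as np

def embed(feats, W):
    """Project raw features into the shared embedding space, L2-normalized."""
    z = W @ feats
    return z / np.linalg.norm(z)

def human_likeness_reward(map_feats, text_feats, W_map, W_text):
    """Cosine similarity between the map-state embedding and the
    instruction embedding, used as an auxiliary reward term."""
    return float(embed(map_feats, W_map) @ embed(text_feats, W_text))

rng = np.random.default_rng(0)
W_map = rng.standard_normal((8, 16))   # map-state projection (hypothetical dims)
W_text = rng.standard_normal((8, 16))  # instruction projection (hypothetical dims)

map_feats = rng.standard_normal(16)    # flattened 2D map-state features
text_feats = rng.standard_normal(16)   # pooled natural-language embedding

r = human_likeness_reward(map_feats, text_feats, W_map, W_text)
assert -1.0 <= r <= 1.0  # cosine similarity is bounded
```

In the full framework this similarity score would be mixed with the quantitative condition-satisfaction reward, which is what makes the trade-off mentioned in the abstract controllable via a weighting coefficient.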
- URI
- https://scholar.gist.ac.kr/handle/local/31902
- Fulltext
- http://gist.dcollection.net/common/orgView/200000898184
Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.