
Human-Like Procedural Level Generation via Reinforcement Learning with Contrastive Language-State Embedding

Author(s)
Lee Seoyoung
Type
Thesis
Degree
Master
Department
AI Graduate School
Advisor
Kim, KyungJoong
Abstract
This thesis proposes HL-PCGRL (Human-Like Procedural Content Generation via Reinforcement Learning), a reinforcement learning framework for generating 2D game maps in human-like styles conditioned on natural language instructions. While existing methods effectively satisfy quantitative conditions, they often fail to capture the structural styles of human designers. HL-PCGRL addresses this limitation by introducing a contrastive encoder that aligns natural language and map states within a shared embedding space. This encoder serves both as policy input and as the basis for a human-similarity reward. Experimental results demonstrate that HL-PCGRL improves the Human-likeness metric by an average of 8.24% over existing methods while maintaining comparable task performance. Additionally, the trade-off between condition satisfaction and stylistic similarity is shown to be controllable. This work presents a novel approach for integrating human-centered design constraints into reinforcement learning-based procedural content generation.
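The reward scheme described in the abstract — scoring a map state by its similarity to a language embedding in a shared space, then blending that with the usual condition-satisfaction reward — can be sketched roughly as below. This is a minimal illustration, not the thesis's actual model: the bag-of-tiles encoder `embed_map`, the tile set, cosine similarity as the alignment score, and the mixing weight `lam` are all assumptions made for the sketch (a real system would use trained contrastive encoders for both text and map).

```python
import math

# Illustrative tile vocabulary for a toy 2D map; an assumption of this sketch.
TILES = ["wall", "floor", "enemy", "treasure"]

def embed_map(grid):
    """Toy map encoder: normalized tile-frequency features for a 2D grid."""
    counts = [0] * len(TILES)
    for row in grid:
        for tile in row:
            counts[TILES.index(tile)] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def cosine(u, v):
    """Cosine similarity; stands in for the contrastive alignment score."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def reward(r_task, map_emb, text_emb, lam=0.5):
    """Blend task reward with human-similarity reward.

    `lam` is the hypothetical knob for the condition-satisfaction vs.
    stylistic-similarity trade-off the abstract says is controllable.
    """
    return (1 - lam) * r_task + lam * cosine(map_emb, text_emb)
```

In the sketch, `text_emb` stands in for the output of a language encoder projected into the same space as the map features; setting `lam` to 0 recovers a purely condition-driven agent, while larger values weight stylistic similarity more heavily.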
URI
https://scholar.gist.ac.kr/handle/local/31902
Fulltext
http://gist.dcollection.net/common/orgView/200000898184
Alternative Author(s)
이서영
Appears in Collections:
Department of AI Convergence > 3. Theses(Master)
Authorize & License
  • Authorize: Open Access
Files in This Item:
  • There are no files associated with this item.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.