
IPCGRL: Language-Instructed Reinforcement Learning for Procedural Level Generation

Metadata
Author(s)
Baek, In-chang; Kim, Sung-hyun; Lee, Seo-young; Kim, Dong-hyeon; Kim, Kyungjoong
Type
Conference Paper
Citation
2025 IEEE Conference on Games, CoG 2025
Issued Date
2025-08-26
Abstract
Recent research has highlighted the significance of natural language in enhancing the controllability of generative models. While various efforts have been made to leverage natural language for content generation, research on deep reinforcement learning (DRL) agents utilizing text-based instructions for procedural content generation remains limited. In this paper, we propose IPCGRL, an instruction-based procedural content generation method via reinforcement learning, which incorporates a sentence embedding model. IPCGRL fine-tunes task-specific embedding representations to effectively compress game-level conditions. We evaluate IPCGRL in a two-dimensional level generation task and compare its performance with a general-purpose embedding method. The results indicate that IPCGRL achieves up to a 21.4% improvement in controllability and a 17.2% improvement in generalizability for unseen instructions with varied condition expressions within the same task. Furthermore, the proposed method extends the modality of conditional input, enabling a more flexible and expressive interaction framework for procedural content generation. © 2025 IEEE.
Publisher
IEEE Computer Society
Conference Place
Lisbon, Portugal
URI
https://scholar.gist.ac.kr/handle/local/32379
Access & License
  • Access status: Open
File List
  • No related files are available.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.
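
Note: the following is a minimal illustrative sketch, not the authors' released code, of the conditioning idea summarized in the abstract: a general-purpose sentence embedding of the instruction is projected into a small task-specific bottleneck and fed to a level-generation policy alongside the level observation. All layer sizes, dimensions, and names are assumptions made for illustration, and PyTorch is assumed as the framework.

# Illustrative sketch only: instruction-conditioned level-generation policy.
import torch
import torch.nn as nn

class InstructionConditionedPolicy(nn.Module):
    """Toy policy head: a pretrained sentence embedding (e.g. 384-d) is
    compressed into a small task-specific bottleneck and concatenated with
    a flattened level observation before action logits are computed."""
    def __init__(self, embed_dim=384, bottleneck_dim=32,
                 obs_dim=16 * 16, n_actions=8):
        super().__init__()
        # Task-specific compression of the general-purpose embedding;
        # the bottleneck size is an arbitrary choice for this sketch.
        self.condition_encoder = nn.Sequential(
            nn.Linear(embed_dim, bottleneck_dim), nn.Tanh())
        self.policy = nn.Sequential(
            nn.Linear(bottleneck_dim + obs_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions))

    def forward(self, sentence_embedding, level_obs):
        cond = self.condition_encoder(sentence_embedding)
        return self.policy(torch.cat([cond, level_obs.flatten(1)], dim=-1))

# Usage with random stand-ins for a real sentence encoder and level state.
policy = InstructionConditionedPolicy()
fake_embedding = torch.randn(1, 384)   # would come from a sentence encoder
fake_level = torch.randn(1, 16 * 16)   # flattened tile-grid observation
logits = policy(fake_embedding, fake_level)
print(logits.shape)  # torch.Size([1, 8])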