Discrete Prompt Compression with Reinforcement Learning

Author(s)
Kyung-Joong Kim
Type
Thesis
Degree
Master
Department
Graduate School, Interdisciplinary Department of Integrated Technology (Culture Technology Program)
Advisor
Kim, Kyung-Joong
Abstract
Compressed prompts help instruction-tuned language models (LMs) overcome context-window limitations and reduce computational costs. Existing methods, mostly based on trained embeddings, suffer from limited interpretability, a fixed number of embedding tokens, poor reusability across different LMs, and inapplicability when interacting with black-box APIs. This study proposes prompt compression with reinforcement learning (PCRL), a novel discrete prompt compression method that addresses these issues. PCRL employs a computationally efficient policy network that directly edits prompts. The PCRL training approach can be applied flexibly to various types of LMs, including both decoder-only and encoder-decoder architectures, and requires neither gradient access to the LMs nor labeled data. PCRL achieves an average token-count reduction of 24.6% across various instruction prompts while preserving performance. Further, we demonstrate that the learned policy can be transferred to larger LMs, and through various analyses we help elucidate token importance within prompts.
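To make the mechanism in the abstract concrete, the sketch below shows one way such a method could look in PyTorch. It is a minimal illustration under loud assumptions: the per-token keep/drop policy, the REINFORCE update, and the faithfulness function (a stand-in for the black-box comparison of LM outputs on original versus compressed prompts) are hypothetical simplifications, not the thesis's actual PCRL implementation.

    # Illustrative sketch only: a per-token keep/drop policy trained with
    # REINFORCE. The architecture, reward, and "faithfulness" score below
    # are hypothetical stand-ins, not the thesis's implementation.
    import torch
    import torch.nn as nn

    class KeepDropPolicy(nn.Module):
        """Assigns each prompt token a probability of being kept."""
        def __init__(self, vocab_size: int, d_model: int = 64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.head = nn.Linear(d_model, 1)

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            # token_ids: (seq_len,) -> keep probabilities: (seq_len,)
            return torch.sigmoid(self.head(self.embed(token_ids))).squeeze(-1)

    def faithfulness(compressed: torch.Tensor) -> float:
        # Stand-in for the black-box score PCRL would obtain by comparing
        # the LM's outputs on the original vs. compressed prompt. Here we
        # simply reward retaining a hypothetical set of salient token ids.
        salient = {3, 7}
        return len(salient & set(compressed.tolist())) / len(salient)

    def train_step(policy, optimizer, prompt, lam=0.5):
        probs = policy(prompt)
        mask = torch.bernoulli(probs.detach())  # sample keep (1) / drop (0)
        compressed = prompt[mask.bool()]
        # Reward: output faithfulness minus a penalty on tokens kept.
        reward = faithfulness(compressed) - lam * mask.mean().item()
        # REINFORCE: scale the log-probability of the sampled edit by reward.
        log_prob = (mask * probs.clamp_min(1e-8).log()
                    + (1 - mask) * (1 - probs).clamp_min(1e-8).log()).sum()
        loss = -reward * log_prob
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return reward, compressed

    policy = KeepDropPolicy(vocab_size=100)
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
    prompt = torch.tensor([3, 14, 7, 42, 42, 9, 7])  # toy token ids
    for _ in range(200):
        reward, compressed = train_step(policy, optimizer, prompt)
    print("kept:", compressed.tolist(), "reward:", round(reward, 3))

Because the reward is computed purely from sampled edits and a black-box score, the update needs neither gradients from the LM nor labeled data, consistent with the training setting the abstract describes.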
URI
https://scholar.gist.ac.kr/handle/local/19194
Fulltext
http://gist.dcollection.net/common/orgView/200000880216
Access and License
  • Access status: Open
File List
  • No associated files are available.
