
A Neuro-Evolutionary Program Synthesis for Compositional Reasoning

Author(s)
Woletemaryam Liyew Abitew
Type
Thesis
Degree
Master
Department
College of Information and Computing, Department of AI Convergence
Advisor
Kim, Sundong
Abstract
This thesis pursues the insight that evolutionary learning algorithms, while interpretable and powerful, often become trapped in local optima when synthesizing programs. A natural question follows: Can this limitation be mitigated by leveraging the code-editing capabilities of large language models (LLMs)? Because LLMs are trained on sequential modifications and human-like refinements of code, they may provide a novel mechanism for helping evolutionary search escape local optima by approximating the changes a human programmer would likely make. To investigate, Genetic Programming (GP) is first evaluated on the Abstraction and Reasoning Corpus (ARC) and ConceptARC benchmarks, where it synthesizes correct programs for 65 tasks, competitive with symbolic baselines yet prone to premature convergence. In the main experiment, a frozen LLM is introduced as a repair operator: rather than generating entire solutions, it modifies promising GP candidates using fitness feedback and training task input–output examples. This hybrid process improves results, with an average fitness gain of ≈6.5% and broader coverage of reasoning concepts such as counting, movement, and shape recognition. This thesis points to two promising directions for future research. First, although dataset expansion was not investigated here, the programs evolved throughout the search process naturally yield paired artifacts (program, input, and output). Such artifacts could be used to construct extended training sets, which may in turn support fine-tuning of future code-editing models. Second, future work could explore seeding LLM-edited programs back into the evolutionary process, thus relaxing the dependence on human-defined primitives and moving toward more open-ended forms of neuro-evolution. Together, these directions highlight how the interplay between GP and LLMs can extend beyond mitigating local optima to enabling advances in program synthesis and open-ended discovery.

©2026 Woletemaryam Liyew Abitew. ALL RIGHTS RESERVED.
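The abstract describes an algorithmic loop: GP candidates are scored against a task's input–output examples, and a frozen LLM edits the most promising ones rather than generating whole solutions. The following is a minimal Python sketch of that kind of hybrid loop, offered only as an illustration of the idea; every name in it (`fitness`, `llm_repair`, `hybrid_gp`, `mutate`, `compile_src`) is hypothetical, the LLM call is stubbed out, and none of it is taken from the thesis's actual implementation.

```python
# Minimal, hypothetical sketch of the hybrid GP + LLM-repair loop
# described in the abstract. All names are illustrative; the LLM call
# is a stub so the sketch runs without external dependencies.
import random
from typing import Callable, List, Tuple

Grid = List[List[int]]
Example = Tuple[Grid, Grid]        # (input grid, expected output grid)
Program = Callable[[Grid], Grid]


def fitness(prog: Program, examples: List[Example]) -> float:
    """Fraction of training pairs the candidate reproduces exactly."""
    solved = 0
    for inp, out in examples:
        try:
            solved += int(prog(inp) == out)
        except Exception:
            pass                   # crashing candidates earn no credit
    return solved / len(examples)


def llm_repair(src: str, fit: float, examples: List[Example]) -> str:
    """Stub for the frozen-LLM repair operator. A real implementation
    would prompt a code-editing model with the candidate's source, its
    fitness, and the task's input-output pairs, returning edited code."""
    return src


def hybrid_gp(pop: List[str],
              compile_src: Callable[[str], Program],
              mutate: Callable[[str], str],
              examples: List[Example],
              generations: int = 50,
              top_k: int = 3) -> str:
    """Evolutionary loop in which the LLM edits promising candidates
    instead of generating whole solutions from scratch."""
    n = len(pop)
    assert n > 2 * top_k, "population must exceed twice the repair budget"

    def score(src: str) -> float:
        return fitness(compile_src(src), examples)

    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        # Repair step: hand the top candidates to the LLM along with
        # their fitness and the task's training examples.
        repaired = [llm_repair(s, score(s), examples) for s in pop[:top_k]]
        # Next generation: elites, their LLM-repaired variants, and
        # mutated offspring drawn from the stronger half of the population.
        offspring = [mutate(random.choice(pop[: n // 2]))
                     for _ in range(n - top_k - len(repaired))]
        pop = pop[:top_k] + repaired + offspring
    return max(pop, key=score)
```

In this sketch the repaired programs are injected straight back into the population; whether LLM-edited programs should re-enter the evolutionary process in that way, or only be scored as final candidates, is precisely the design question the abstract flags as a direction for future work.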
URI
https://scholar.gist.ac.kr/handle/local/33700
Fulltext
http://gist.dcollection.net/common/orgView/200000948582
Access and License
  • Access type: Open
File List
  • No related files are available.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.