
ConnecToMind: Connectome-Aware fMRI Decoding for Visual Image Reconstruction

Author(s)
Bae, Gunwoo; Kim, Yeonwoo; Kim, Mansu
Type
Conference Paper
Citation
International Workshop on Machine Learning in Medical Imaging, pp. 400–410
Issued Date
2025-09-27
Abstract
Recent deep-learning approaches have achieved significant improvements in reconstructing visual images from human brain activity. However, existing methods typically represent brain activity as flattened voxel-wise signals, overlooking the detailed anatomical and functional organization of visual cortical regions. Here, we propose ConnecToMind, a novel decoding framework that employs a region-level fMRI embedding module to preserve distinct functional representations across visual cortical sub-regions, while leveraging functional connectivity (FC) derived from resting-state fMRI. Experiments on the Natural Scenes Dataset (NSD) demonstrate that ConnecToMind outperforms MindEye in both the semantic and perceptual fidelity of reconstructed images, validating the effectiveness of preserving distinct functional representations with an FC prior. Moreover, ConnecToMind shows competitive performance in image retrieval tasks. Ablation analyses further reveal that low-level (e.g., V1–V3) and high-level (e.g., Lateral Occipital, Fusiform) visual regions distinctly contribute to reconstruction quality, highlighting the importance of region-specific embeddings in visual reconstruction. All code for this study is publicly available on GitHub (https://github.com/aimed-gist/ConneToMind).
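
To make the abstract's core idea concrete, below is a minimal PyTorch sketch of a region-level fMRI embedding module mixed by an FC prior. It is an illustration of the general technique only, not the authors' implementation: the class name, tensor shapes, row-normalized mixing scheme, and region choices are all assumptions; consult the linked repository for the actual code.

# Minimal sketch (PyTorch) of region-level fMRI embedding with an FC prior.
# All names, shapes, and the mixing scheme are illustrative assumptions,
# not ConnecToMind's actual implementation (see the GitHub link above).
import torch
import torch.nn as nn

class RegionEmbedding(nn.Module):
    """Embed each visual cortical sub-region separately, then mix the
    region embeddings with a functional-connectivity (FC) matrix."""

    def __init__(self, voxels_per_region, embed_dim, fc_matrix):
        super().__init__()
        # One linear projection per region keeps regional signals distinct
        # instead of flattening all voxels into a single vector.
        self.proj = nn.ModuleList(
            nn.Linear(n_vox, embed_dim) for n_vox in voxels_per_region
        )
        # FC prior from resting-state fMRI: (n_regions, n_regions),
        # row-normalized so each region aggregates its functional neighbors.
        fc = fc_matrix / fc_matrix.sum(dim=1, keepdim=True).clamp(min=1e-8)
        self.register_buffer("fc", fc)

    def forward(self, region_signals):
        # region_signals: list of (batch, n_vox_r) tensors, one per region.
        emb = torch.stack(
            [proj(x) for proj, x in zip(self.proj, region_signals)], dim=1
        )  # (batch, n_regions, embed_dim)
        # Mix embeddings across regions according to the FC prior.
        return self.fc @ emb  # (batch, n_regions, embed_dim)

if __name__ == "__main__":
    voxels = [128, 96, 64]                 # e.g., V1, V2, V3 voxel counts (made up)
    fc = torch.rand(3, 3)                  # stand-in FC matrix
    model = RegionEmbedding(voxels, embed_dim=32, fc_matrix=fc)
    signals = [torch.randn(4, n) for n in voxels]
    print(model(signals).shape)            # torch.Size([4, 3, 32])
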
Publisher
Springer Nature Switzerland
Conference Place
Daejeon, Korea
URI
https://scholar.gist.ac.kr/handle/local/33460
Access and License
  • Access type: Open
File List
  • No related files are available.

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.