
Domain Aware Multi-task Pretraining of 3D Swin Transformer for T1-Weighted Brain MRI

Abstract
The scarcity of annotated medical images is a major bottleneck in developing learning models for medical image analysis. Hence, recent studies have focused on pretrained models with fewer annotation requirements that can be fine-tuned for various downstream tasks. However, existing approaches are mainly 3D adaptations of 2D methods and are ill-suited for 3D medical imaging data. Motivated by this gap, we propose novel domain-aware multi-task learning tasks to pretrain a 3D Swin Transformer for brain magnetic resonance imaging (MRI). Our method incorporates domain knowledge of brain anatomy and morphology, together with standard pretext tasks adapted for 3D imaging, in a contrastive learning setting. We pretrain our model on large-scale brain MRI data of 13,687 samples spanning several large-scale databases. Our method outperforms existing supervised and self-supervised methods on three downstream tasks: Alzheimer’s disease classification, Parkinson’s disease classification, and age prediction. An ablation study demonstrates the effectiveness of the proposed pretext tasks. Our code is available at github.com/jongdory/DAMT.
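To make the pretraining recipe concrete, below is a minimal PyTorch sketch of the general idea described in the abstract: a 3D encoder trained with a contrastive objective on two augmented views plus an auxiliary domain-aware regression head. It is not the authors' implementation (see the linked repository for that); the small 3D CNN stands in for the 3D Swin Transformer backbone, and the morphology target and loss weight are illustrative assumptions, not values from the paper.

# Minimal sketch (not the authors' implementation): multi-task pretraining of a 3D
# encoder with a contrastive objective plus an auxiliary domain-aware head.
# Assumptions: Encoder3D is a small placeholder for the 3D Swin Transformer, and the
# "morphology" scalar target is a hypothetical stand-in for the anatomy/morphology
# pretext tasks; the 0.5 loss weight is likewise assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder3D(nn.Module):
    """Tiny 3D CNN standing in for the 3D Swin Transformer backbone."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(32, feat_dim)     # projection head for the contrastive loss
        self.morph_head = nn.Linear(32, 1)      # hypothetical morphology regression head

    def forward(self, x):
        h = self.features(x)
        return F.normalize(self.proj(h), dim=1), self.morph_head(h)


def nt_xent(z1, z2, temperature=0.1):
    """Standard NT-Xent contrastive loss over two augmented views."""
    z = torch.cat([z1, z2], dim=0)                        # (2N, D), already L2-normalized
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))            # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    model = Encoder3D()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    # Two augmented views of the same toy T1-weighted volumes plus a morphology target.
    view1 = torch.randn(4, 1, 32, 32, 32)
    view2 = torch.randn(4, 1, 32, 32, 32)
    morph_target = torch.randn(4, 1)                      # e.g., normalized brain volume (assumed)
    z1, m1 = model(view1)
    z2, _ = model(view2)
    loss = nt_xent(z1, z2) + 0.5 * F.mse_loss(m1, morph_target)
    loss.backward()
    opt.step()
    print(f"pretraining loss: {loss.item():.3f}")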
Author(s)
Kim, Jonghun; Kim, Mansu; Park, Hyunjin
Issued Date
2024-12-08
Type
Conference Paper
DOI
10.1007/978-981-96-0901-7_8
URI
https://scholar.gist.ac.kr/handle/local/8078
Publisher
Springer Nature Singapore
Citation
Asian Conference on Computer Vision, pp. 121-141
Conference Place
Hanoi, Vietnam
Appears in Collections:
Department of AI Convergence > 2. Conference Papers
Access and License
  • Access status: Public
File List
