Deep metric loss for multimodal learning

Abstract
Multimodal learning often outperforms its unimodal counterparts by exploiting unimodal contributions and cross-modal interactions. However, focusing only on integrating multimodal features into a unified, comprehensive representation overlooks unimodal characteristics. In real data, the contributions of modalities can vary from instance to instance, and they often reinforce or conflict with each other. In this study, we introduce a novel MultiModal loss paradigm for multimodal learning, which subgroups instances according to their unimodal contributions. The MultiModal loss can prevent inefficient learning caused by overfitting and efficiently optimize multimodal models. On synthetic data, the MultiModal loss demonstrates improved classification performance by subgrouping difficult instances within certain modalities. On four real multimodal datasets, our loss is empirically shown to improve the performance of recent models. Ablation studies verify the effectiveness of our loss. Additionally, we show that our loss generates a reliable prediction score for each modality, which is essential for subgrouping. Our MultiModal loss is a novel loss function that subgroups instances according to the contribution of modalities in multimodal learning and is applicable to a variety of multimodal models with unimodal decisions. Our code is available at https://github.com/DMCB-GIST/MultiModalLoss.
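For intuition only, the sketch below illustrates one way per-modality prediction scores could be used to subgroup instances and reweight a multimodal classification objective. It is a minimal toy example, not the paper's MultiModal loss: the threshold, the weighting rule, and all function and variable names are assumptions made here for illustration (the authors' implementation is in the repository linked above).

# Toy illustration (not the paper's implementation): subgroup instances by
# per-modality confidence and reweight the multimodal classification loss.
# The thresholding and weighting rules below are illustrative assumptions.
import torch
import torch.nn.functional as F

def per_modality_scores(unimodal_logits, labels):
    # Probability each unimodal head assigns to the true class.
    # unimodal_logits: list of (batch, num_classes) tensors, one per modality
    # labels: (batch,) integer class labels
    # returns: (batch, num_modalities) tensor of true-class probabilities
    scores = [F.softmax(logits, dim=1).gather(1, labels.unsqueeze(1))
              for logits in unimodal_logits]
    return torch.cat(scores, dim=1)

def subgrouped_loss(fusion_logits, unimodal_logits, labels, threshold=0.5):
    # Upweight instances that at least one modality finds difficult
    # (i.e., its true-class probability falls below the threshold).
    scores = per_modality_scores(unimodal_logits, labels)   # (B, M)
    hard_mask = (scores < threshold).any(dim=1).float()     # (B,)
    weights = 1.0 + hard_mask                                # hard instances count twice
    ce = F.cross_entropy(fusion_logits, labels, reduction="none")
    return (weights * ce).mean()

# Example usage with two modalities and three classes:
if __name__ == "__main__":
    torch.manual_seed(0)
    labels = torch.randint(0, 3, (8,))
    fusion_logits = torch.randn(8, 3)
    unimodal_logits = [torch.randn(8, 3), torch.randn(8, 3)]
    print(subgrouped_loss(fusion_logits, unimodal_logits, labels))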
Author(s)
Moon, Sehwan; Lee, Hyunju
Issued Date
2025-01
Type
Article
DOI
10.1007/s10994-024-06709-6
URI
https://scholar.gist.ac.kr/handle/local/9090
Publisher
SPRINGER
Citation
MACHINE LEARNING, v.114, no.1
ISSN
0885-6125
Appears in Collections:
Department of AI Convergence > 1. Journal Articles
Access & License
  • Access type: Open
File List
  • No related files are available.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.