Multimodal Emotion Recognition Using Modality-Wise Knowledge Distillation

Author(s)
Lee, Seonggyu; Ahn, Youngdo; Shin, Jong-won
Type
Article
Citation
Sensors, v.25, no.20
Issued Date
2025-10
Abstract
Multimodal emotion recognition (MER) aims to estimate emotional states by utilizing multiple sensors simultaneously. Most previous MER models extract unimodal representations via modality-wise encoders and combine them into a multimodal representation to classify the emotion, and these models are trained with an objective defined on the final output of the MER system. If the encoder for one modality is optimized faster than the others at some point in the training procedure, the parameters of the remaining encoders may not be updated sufficiently to reach optimal performance. In this paper, we propose an MER method using modality-wise knowledge distillation, which adapts the unimodal encoders using pre-trained unimodal emotion recognition models. Experimental results on the CREMA-D and IEMOCAP databases demonstrate that the proposed method outperforms previous approaches designed to overcome this optimization imbalance phenomenon and can also be combined with those approaches effectively.
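
As a rough illustration of the idea in the abstract (not the authors' implementation): below is a minimal PyTorch sketch of a training loss that combines the multimodal classification objective with per-modality distillation terms pulling each unimodal encoder toward a frozen pre-trained unimodal teacher. The embedding-level MSE distillation, the weighting factor lambda_kd, and all names here are assumptions for illustration only; the paper may distill logits or use a different distillation objective.

import torch
import torch.nn.functional as F

def modality_wise_kd_loss(fused_logits, labels,
                          audio_emb, visual_emb,
                          teacher_audio_emb, teacher_visual_emb,
                          lambda_kd=0.5):
    # Task loss on the fused multimodal prediction.
    task_loss = F.cross_entropy(fused_logits, labels)
    # Per-modality distillation: pull each unimodal embedding toward the
    # frozen pre-trained unimodal teacher's embedding (detached, no gradient
    # flows into the teacher). MSE is an assumed choice of objective here.
    kd_audio = F.mse_loss(audio_emb, teacher_audio_emb.detach())
    kd_visual = F.mse_loss(visual_emb, teacher_visual_emb.detach())
    return task_loss + lambda_kd * (kd_audio + kd_visual)

# Toy usage with random tensors (batch of 8, 4 emotion classes, 128-dim embeddings).
if __name__ == "__main__":
    B, C, D = 8, 4, 128
    loss = modality_wise_kd_loss(
        fused_logits=torch.randn(B, C),
        labels=torch.randint(0, C, (B,)),
        audio_emb=torch.randn(B, D),
        visual_emb=torch.randn(B, D),
        teacher_audio_emb=torch.randn(B, D),
        teacher_visual_emb=torch.randn(B, D),
    )
    print(loss.item())

Because the distillation terms act on each encoder separately, an encoder whose modality lags behind still receives a direct training signal from its teacher even when the fused objective is dominated by the stronger modality, which is the imbalance the abstract describes.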
Publisher
Multidisciplinary Digital Publishing Institute (MDPI)
DOI
10.3390/s25206341
URI
https://scholar.gist.ac.kr/handle/local/32303
Access & License
  • Access type: Open
File List
  • No related files are available.

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.