Shuffling Augmented Decoupled Features for Multimodal Emotion Recognition
Citations
Web of Science: 1; Scopus: 1

Abstract

Multimodal emotion recognition (MER) aims to identify human emotions using data from multiple modalities. Despite promising advances in previous MER methods, their performance remains limited due to the small size of available datasets, a result of the challenges in collecting multimodal data. While data augmentation can address this issue, generating augmented multimodal data without altering the underlying emotional meaning remains particularly challenging. To tackle this problem, we introduce a decoupled feature augmentation method that automatically learns multimodal feature variations in a decoupled feature space for MER. Specifically, we decompose multimodal features into modality-invariant and modality-specific components and then augment each component within the decoupled feature space across multiple modalities. Unlike existing unimodal augmentation approaches, our method preserves cross-modal semantic consistency by jointly augmenting the decoupled components. To enhance model generalization and stability, we propose a learning strategy that gradually incorporates more diverse information by using a combined set of original and augmented decoupled features. Comprehensive experiments on two MER benchmarks demonstrate that our method outperforms or is comparable to several baseline methods.
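The abstract describes the method only at a high level. Below is a minimal, hypothetical sketch (in PyTorch) of how shuffle-based augmentation in a decoupled feature space could look. All names (Decoupler, shuffle_augment), the feature dimensions, and the additive recombination of components are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class Decoupler(nn.Module):
    """Splits one modality's features into a modality-invariant and a
    modality-specific component via two learned projections (sketch)."""
    def __init__(self, dim: int):
        super().__init__()
        self.invariant = nn.Linear(dim, dim)  # shared (emotion-bearing) subspace
        self.specific = nn.Linear(dim, dim)   # modality-private subspace

    def forward(self, x: torch.Tensor):
        return self.invariant(x), self.specific(x)

def shuffle_augment(decoupled: dict) -> dict:
    """Keeps each sample's modality-invariant component and swaps in
    modality-specific components from other samples in the batch. The same
    permutation is used for every modality, so cross-modal correspondence
    (and the underlying emotion label) is preserved."""
    batch_size = next(iter(decoupled.values()))[0].size(0)
    perm = torch.randperm(batch_size)  # one permutation shared across modalities
    # Additive recombination of the two components is an assumption of this sketch.
    return {m: inv + spec[perm] for m, (inv, spec) in decoupled.items()}

# Toy usage with three modalities and 128-d features.
modalities = ("text", "audio", "video")
decouplers = {m: Decoupler(128) for m in modalities}
features = {m: torch.randn(32, 128) for m in modalities}
decoupled = {m: decouplers[m](x) for m, x in features.items()}
augmented = shuffle_augment(decoupled)  # carries the same emotion labels as the originals
```

The property this sketch illustrates is that a single shared batch permutation is applied to every modality, so the augmented samples remain cross-modally consistent while the fixed modality-invariant components preserve the emotional semantics.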

Keywords

Data augmentation; Emotion recognition; Training; Semantics; Acoustics; Representation learning; Redundancy; Speech processing; Interpolation; Correlation; Feature augmentation; multimodal emotion recognition; multimodal learning
Title
Shuffling Augmented Decoupled Features for Multimodal Emotion Recognition
Author
Cho, Sunyoung
DOI
10.1109/ACCESS.2025.3572925
Publication Date
2025-05
Type
Article
Journal
IEEE Access
Volume
13
Pages
91290–91300