Exploring Multimodal Multiscale Features for Sentiment Analysis Using Fuzzy-Deep Neural Network Learning
  • Wang, Xin
  • Lyu, Jianhui
  • Kim, Byung-Gyu
  • Parameshachari, B.D.
  • Li, Keqin
  • Li, Qing
Citations
  • Web of Science: 21
  • Scopus: 25

Abstract

Sentiment analysis, the task of understanding human emotions expressed through diverse modalities, remains challenging and motivates innovative solutions. Multimodal data often contain important complementary information, so effective fusion and extraction of multimodal features are key issues in sentiment analysis. In this paper, we introduce a novel sentiment analysis model that integrates multimodal multiscale features based on a fuzzy-deep neural network. First, we combine multimodal data, namely text, audio, and images, to extract intrinsic feature representations. Second, our model incorporates a fuzzy-deep neural network learning module, infused with fuzzy logic principles to enhance adaptability to the inherent vagueness of sentiment expressions. Furthermore, we integrate a dual attention mechanism that dynamically focuses on pivotal aspects within multimodal data, refining feature extraction for heightened context awareness. Rigorous validation across three datasets, the Multimodal Corpus of Sentiment Intensity dataset, the Multimodal Opinion Sentiment and Emotion Intensity dataset, and the Chinese Single and Multimodal Sentiment dataset, demonstrates the model's superior performance in capturing the intricacies of human emotions.
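To make the abstract's pipeline concrete, below is a minimal, hypothetical sketch of the two ideas it names: a dual attention step that weights modalities and features during fusion, and a fuzzification layer that maps the fused crisp features to fuzzy membership degrees before classification. This is not the authors' implementation; the PyTorch module names, Gaussian membership functions, product t-norm, and all dimensions are illustrative assumptions.

```python
# Minimal, hypothetical sketch (assuming PyTorch); not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuzzyMembershipLayer(nn.Module):
    """Fuzzification: maps crisp features to rule-activation degrees via
    learnable Gaussian membership functions and a product t-norm."""
    def __init__(self, in_dim: int, n_rules: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_rules, in_dim))
        self.log_sigma = nn.Parameter(torch.zeros(n_rules, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> per-rule memberships: (batch, n_rules, in_dim)
        diff = x.unsqueeze(1) - self.centers
        member = torch.exp(-0.5 * (diff / self.log_sigma.exp()) ** 2)
        return member.prod(dim=-1)  # rule firing strengths: (batch, n_rules)

class DualAttentionFusion(nn.Module):
    """Toy 'dual attention': a softmax weight per modality plus a sigmoid
    gate per feature, applied before summing modalities into one vector."""
    def __init__(self, dim: int):
        super().__init__()
        self.mod_score = nn.Linear(dim, 1)
        self.feat_gate = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_modalities, dim)
        w = F.softmax(self.mod_score(feats), dim=1)  # modality attention
        g = torch.sigmoid(self.feat_gate(feats))     # feature attention
        return (w * g * feats).sum(dim=1)            # fused: (batch, dim)

class FuzzySentimentModel(nn.Module):
    """Fuse text/audio/image features, fuzzify, then classify sentiment."""
    def __init__(self, dim=64, n_rules=16, n_classes=3):
        super().__init__()
        self.fusion = DualAttentionFusion(dim)
        self.fuzzy = FuzzyMembershipLayer(dim, n_rules)
        self.head = nn.Linear(n_rules, n_classes)

    def forward(self, text, audio, image):
        feats = torch.stack([text, audio, image], dim=1)
        return self.head(self.fuzzy(self.fusion(feats)))

# Usage with random placeholder features (each modality pre-encoded to dim=64):
model = FuzzySentimentModel()
t, a, v = torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64)
print(model(t, a, v).shape)  # torch.Size([8, 3])
```

Gaussian memberships combined with a product t-norm are one standard fuzzification choice; the paper may use a different rule base, defuzzification scheme, or attention design.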

Keywords

Adaptation models; Analytical models; Data models; Feature extraction; Fuzzy logic; Fuzzy-deep neural network; multimodal data; multiscale feature; sentiment analysis; Uncertainty
Title
Exploring Multimodal Multiscale Features for Sentiment Analysis Using Fuzzy-Deep Neural Network Learning
Authors
Wang, Xin; Lyu, Jianhui; Kim, Byung-Gyu; Parameshachari, B.D.; Li, Keqin; Li, Qing
DOI
10.1109/TFUZZ.2024.3419140
Publication Date
2025-01
Type
Article
Journal
IEEE Transactions on Fuzzy Systems
Volume
33
Issue
1
Pages
28–42