Modality translation-based multimodal sentiment analysis under uncertain modalities

Zhizhong Liu, Bin Zhou, Dianhui Chu, Yuhang Sun, Lingqiang Meng

Information Fusion (2024)

Abstract
Multimodal sentiment analysis (MSA) with uncertain missing modalities poses a new challenge in sentiment analysis. To address this problem, efficient MSA models that account for missing modalities have been proposed. However, existing studies adopt only a concatenation operation for feature fusion, ignoring the deep interactions between different modalities. Moreover, they fail to exploit the text modality, which achieves better accuracy in sentiment analysis. To tackle these issues, we propose a modality translation-based MSA model (MTMSA) that is robust to uncertain missing modalities. First, for multimodal data (text, visual, and audio) with uncertain missing data, the visual and audio modalities are translated into the text modality by a modality translation module, and the translated visual features, translated audio features, and encoded text are fused into missing joint features (MJFs). Next, the MJFs are encoded by the transformer encoder module under the supervision of a pre-trained model (the transformer-based modality translation network, TMTN), so that the transformer encoder module produces joint features for uncertain missing modalities that approximate those of complete modalities. The encoded MJFs are then input into the transformer decoder module to learn long-term dependencies between different modalities. Finally, sentiment classification is performed based on the outputs of the transformer encoder module. Extensive experiments on two popular benchmark datasets (CMU-MOSI and IEMOCAP) demonstrate that MTMSA outperforms eight representative baseline models.
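
To make the described pipeline concrete, below is a minimal PyTorch sketch of the MTMSA flow as outlined in the abstract. All class names, layer counts, feature dimensions, the skip-handling of missing modalities, and the pooled linear classifier are illustrative assumptions, not the authors' implementation; the TMTN supervision used during training is also not shown.

```python
# Hypothetical sketch of the MTMSA pipeline (translate -> fuse -> encode ->
# decode -> classify). Assumes PyTorch; all names/dims are illustrative.
import torch
import torch.nn as nn


class ModalityTranslator(nn.Module):
    """Translates a visual or audio sequence into the text feature space."""

    def __init__(self, src_dim: int, text_dim: int, nhead: int = 4):
        super().__init__()
        self.proj = nn.Linear(src_dim, text_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=text_dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                     # x: (batch, seq, src_dim)
        return self.encoder(self.proj(x))     # (batch, seq, text_dim)


class MTMSASketch(nn.Module):
    def __init__(self, text_dim=128, visual_dim=64, audio_dim=32,
                 num_classes=2, nhead=4):
        super().__init__()
        self.v2t = ModalityTranslator(visual_dim, text_dim, nhead)
        self.a2t = ModalityTranslator(audio_dim, text_dim, nhead)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=text_dim, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        dec_layer = nn.TransformerDecoderLayer(
            d_model=text_dim, nhead=nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.classifier = nn.Linear(text_dim, num_classes)

    def forward(self, text, visual=None, audio=None):
        # A missing modality is simply omitted from the fusion here; this is
        # one plausible choice, the paper's handling may differ.
        parts = [text]
        if visual is not None:
            parts.append(self.v2t(visual))
        if audio is not None:
            parts.append(self.a2t(audio))
        mjf = torch.cat(parts, dim=1)         # missing joint features (MJFs)
        enc = self.encoder(mjf)               # trained under TMTN supervision
                                              # (supervision loss not shown)
        # The decoder learns long-term cross-modal dependencies; per the
        # abstract, classification still uses the encoder outputs.
        _ = self.decoder(enc, enc)
        return self.classifier(enc.mean(dim=1))


model = MTMSASketch()
text = torch.randn(2, 10, 128)                # pre-encoded text features
visual = torch.randn(2, 16, 64)
logits = model(text, visual=visual, audio=None)   # audio modality missing
print(logits.shape)                           # torch.Size([2, 2])
```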
Keywords
Multimodal sentiment analysis, Uncertain missing modalities, Modality translation, Transformer