Toward Robust Multimodal Learning using Multimodal Foundational Models
CoRR (2024)
Abstract
Existing multimodal sentiment analysis tasks rely heavily on the
assumption that the training and test sets contain complete multimodal data, yet
this assumption is often difficult to satisfy: multimodal data are frequently
incomplete in real-world scenarios. A model that remains robust when
modalities are randomly missing is therefore highly desirable. Recently,
CLIP-based multimodal foundational models have demonstrated impressive
performance on numerous multimodal tasks by learning the aligned cross-modal
semantics of image and text pairs, but these foundational models are
likewise unable to directly address scenarios involving modality absence. To
alleviate this issue, we propose a simple and effective framework, namely TRML,
Toward Robust Multimodal Learning using Multimodal Foundational Models. TRML
employs generated virtual modalities to replace missing modalities, and aligns
the semantic spaces between the generated and missing modalities. Concretely,
we design a missing modality inference module to generate virtual modalities
and replace the missing ones. We also design a semantic matching learning
module to align the semantic spaces of the generated and missing modalities.
Prompted by the complete modality, our model captures the semantics of missing
modalities by leveraging the aligned cross-modal semantic space. Experiments
demonstrate the superiority of our approach on three multimodal sentiment
analysis benchmark datasets, CMU-MOSI, CMU-MOSEI, and MELD.
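
The abstract does not give implementation details, so the following is only a minimal sketch of the idea it describes, not the authors' code. It assumes a CLIP-style encoder pair supplies fixed-size image/text embeddings, a small generator plays the role of the missing modality inference module, and a cosine-similarity loss stands in for the semantic matching learning module; all module names and shapes here are hypothetical.

```python
# Hedged sketch of TRML's two components as described in the abstract (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MissingModalityInference(nn.Module):
    """Hypothetical generator: predicts a virtual embedding for the missing
    modality from the embedding of the available modality."""
    def __init__(self, dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, available_emb):
        return self.net(available_emb)

def semantic_matching_loss(virtual_emb, real_emb):
    """Align the generated (virtual) and real modality embeddings via cosine similarity."""
    virtual_emb = F.normalize(virtual_emb, dim=-1)
    real_emb = F.normalize(real_emb, dim=-1)
    return (1.0 - (virtual_emb * real_emb).sum(dim=-1)).mean()

# Training step on complete pairs: learn to infer one modality's embedding from the other.
generator = MissingModalityInference(dim=512)
text_emb = torch.randn(8, 512)   # stand-in for CLIP text embeddings
image_emb = torch.randn(8, 512)  # stand-in for CLIP image embeddings
virtual_image = generator(text_emb)
loss = semantic_matching_loss(virtual_image, image_emb)
loss.backward()

# Inference when the image modality is missing: substitute the virtual embedding.
with torch.no_grad():
    replacement_image_emb = generator(text_emb)  # used in place of the absent image embedding
```

Under these assumptions, the downstream sentiment classifier never sees an empty modality slot: it always receives either the real embedding or its generated replacement drawn from the aligned semantic space.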