Context-based Adaptive Multimodal Fusion Network for Continuous Frame-level Sentiment Prediction

IEEE/ACM Transactions on Audio, Speech, and Language Processing (2023)

Abstract
Recently, video sentiment computing has become a research focus because of its benefits in many applications, such as digital marketing, education, and healthcare. The difficulty of video sentiment prediction lies mainly in regression accuracy over long sequences and in how to integrate different modalities; in particular, different modalities may express different emotions. To maintain the continuity of long time-series sentiments and mitigate multimodal conflicts, this paper proposes a novel Context-Based Adaptive Multimodal Fusion Network (CAMFNet) for continuous frame-level sentiment prediction. A Context-Based Transformer (CBT) module is designed to embed clip features into continuous frame features, improving the consistency of prediction results. Moreover, to resolve conflicts between modalities, this paper proposes an Adaptive Multimodal Fusion (AMF) method based on the self-attention mechanism. It dynamically determines the degree of shared semantics across modalities, enabling the model to flexibly adapt its fusion strategy. Through adaptive fusion of multimodal features, the AMF method effectively resolves potential conflicts arising from diverse modalities, ultimately enhancing overall model performance. The proposed CAMFNet thus ensures the continuity of long time-series sentiments. Extensive experiments demonstrate the superiority of the proposed method, especially on videos with multimodal conflicts.
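The abstract gives no implementation details, but the CBT idea of conditioning per-frame features on clip-level context can be illustrated with a minimal PyTorch sketch. The class name ContextBasedTransformer, the context-token design, and all dimensions below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: inject a clip-level feature into per-frame features
# via a transformer encoder, so frames share clip context and predictions
# stay temporally consistent. Not the paper's actual CBT module.
import torch
import torch.nn as nn

class ContextBasedTransformer(nn.Module):
    def __init__(self, dim=256, num_heads=4, num_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, clip_feat, frame_feats):
        # clip_feat: (batch, dim) — one feature summarizing the clip
        # frame_feats: (batch, frames, dim) — per-frame features
        # Prepend the clip feature as a context token so every frame
        # can attend to it through self-attention.
        tokens = torch.cat([clip_feat.unsqueeze(1), frame_feats], dim=1)
        encoded = self.encoder(tokens)
        return encoded[:, 1:]  # contextualized frame features

# Usage: 4 clips, 16 frames each, 256-d features.
x_clip = torch.randn(4, 256)
x_frames = torch.randn(4, 16, 256)
print(ContextBasedTransformer()(x_clip, x_frames).shape)  # torch.Size([4, 16, 256])
```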
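Similarly, the AMF description (self-attention that weighs how much semantics the modalities share before fusing them) admits a simple sketch. The gating scheme, module name AdaptiveMultimodalFusion, and dimensions below are assumptions, not the published method.

```python
# Hypothetical sketch: self-attention over one token per modality, plus a
# learned gate that blends each modality's own feature with the attended
# (shared) representation — an adaptive fusion, not the paper's exact AMF.
import torch
import torch.nn as nn

class AdaptiveMultimodalFusion(nn.Module):
    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        # Self-attention across modalities lets each one attend to the others.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gate decides, per modality and per frame, how much shared
        # semantics to mix in; near 0 keeps the modality's own view,
        # which softens conflicts between disagreeing modalities.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.out = nn.Linear(dim, dim)

    def forward(self, feats):
        # feats: (batch * frames, num_modalities, dim)
        shared, _ = self.attn(feats, feats, feats)         # cross-modal mixing
        g = self.gate(torch.cat([feats, shared], dim=-1))  # adaptive gate
        fused = g * shared + (1 - g) * feats               # blended features
        return self.out(fused.mean(dim=1))                 # pooled fused feature

# Usage: fuse visual/audio/text features for a batch of 8 frames.
x = torch.randn(8, 3, 256)
print(AdaptiveMultimodalFusion()(x).shape)  # torch.Size([8, 256])
```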
Keywords
adaptive multimodal fusion network, context-based, frame-level