ER-MRL: Emotion Recognition based on Multimodal Representation Learning

Xiaoding Guo, Yadi Wang, Zhijun Miao, Xiaojin Yang, Jinkai Guo, Xianhong Hou, Feifei Zao

2022 12th International Conference on Information Science and Technology (ICIST)

Abstract
In recent years, emotion recognition technology has been widely used in perceiving emotional change and diagnosing mental illness. Previous methods are mainly based on single-task learning strategies, which cannot fuse multimodal features or remove redundant information. This paper proposes ER-MRL, an emotion recognition model based on multimodal representation learning. ER-MRL vectorizes the multimodal emotion data through neural-network-based encoders, and a gate mechanism is used for multimodal feature selection. On this basis, ER-MRL computes modality-specific and modality-invariant representations for each emotion category. A Transformer model with a multi-head self-attention layer is applied to multimodal feature fusion, and ER-MRL produces the final prediction through a tower layer built from fully connected neural networks. Experimental results on the CMU-MOSI dataset show that ER-MRL outperforms previous methods on emotion recognition.
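As a rough illustration of the pipeline described in the abstract (per-modality encoders, a gate mechanism for feature selection, Transformer-based multi-head self-attention fusion, and a fully connected tower layer), the PyTorch sketch below may help. The class name ERMRLSketch, all dimensions, the sigmoid form of the gates, and the CMU-MOSI-like stand-in feature sizes are assumptions for illustration only; the modality-specific/modality-invariant decomposition per emotion category is omitted, since the abstract does not detail how it is computed.

# Minimal sketch of the ER-MRL pipeline described in the abstract.
# All layer sizes and the exact gating/fusion details are assumed, not from the paper.
import torch
import torch.nn as nn


class ERMRLSketch(nn.Module):
    def __init__(self, input_dims, hidden_dim=128, num_classes=2, num_heads=4):
        super().__init__()
        # Per-modality encoders vectorize each modality (e.g. text/audio/vision).
        self.encoders = nn.ModuleList(
            [nn.Linear(d, hidden_dim) for d in input_dims]
        )
        # Gate mechanism for multimodal feature selection (sigmoid gates, assumed form).
        self.gates = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Sigmoid())
             for _ in input_dims]
        )
        # Transformer encoder layer with multi-head self-attention for feature fusion.
        self.fusion = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True
        )
        # Tower layer: fully connected network producing the emotion prediction.
        self.tower = nn.Sequential(
            nn.Linear(hidden_dim * len(input_dims), hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, modalities):
        # modalities: list of tensors, one per modality, each (batch, input_dim_m)
        gated = []
        for x, enc, gate in zip(modalities, self.encoders, self.gates):
            h = torch.relu(enc(x))      # encode modality into a shared space
            gated.append(gate(h) * h)   # gate suppresses redundant features
        # Treat the modalities as a length-M sequence and fuse with self-attention.
        fused = self.fusion(torch.stack(gated, dim=1))  # (batch, M, hidden_dim)
        return self.tower(fused.flatten(start_dim=1))   # prediction logits


# Usage with random stand-ins for text/audio/vision features (dimensions assumed).
model = ERMRLSketch(input_dims=[300, 74, 35])
logits = model([torch.randn(8, 300), torch.randn(8, 74), torch.randn(8, 35)])
print(logits.shape)  # torch.Size([8, 2])

The sketch only shows the data flow; training objectives and how the modality-specific and modality-invariant representations interact per emotion category would follow the paper itself.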
Keywords
multimodal representation, feature fusion, gate mechanism, emotion recognition