The SYSU System for CCPR 2016 Multimodal Emotion Recognition Challenge.

Communications in Computer and Information Science (2016)

Abstract
In this paper, we propose a multimodal emotion recognition system that combines information from facial, text, and speech data. First, we propose a residual network architecture within the convolutional neural network (CNN) framework to improve facial expression recognition performance, and perform video frame selection to fine-tune our pre-trained model. Second, whereas text emotion recognition conventionally assumes clean, well-formed text, here we adopt an automatic speech recognition (ASR) engine to transcribe the speech into text and then apply a Support Vector Machine (SVM) on top of bag-of-words (BoW) features to predict the emotion labels. Third, we extract openSMILE-based utterance-level features and MFCC-GMM-based zero-order statistics features for subsequent SVM modeling in the speech-based subsystem. Finally, score-level fusion is used to combine the multimodal information. Experiments were carried out on the CCPR 2016 Multimodal Emotion Recognition Challenge database; our proposed multimodal system achieved 36% macro average precision on the test set, outperforming the baseline by 6% absolute.
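The score-level fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the emotion classes, subsystem scores, and fusion weights are all hypothetical placeholders for the per-modality outputs the abstract mentions.

```python
# Hypothetical sketch of score-level fusion: each subsystem (face, text,
# speech) outputs per-class scores, combined here by a weighted sum.
# Class names, scores, and weights are illustrative only.

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def fuse_scores(subsystem_scores, weights):
    """Weighted sum of per-class score dicts, one dict per modality."""
    fused = {e: 0.0 for e in EMOTIONS}
    for scores, w in zip(subsystem_scores, weights):
        for e in EMOTIONS:
            fused[e] += w * scores[e]
    return fused

def predict(subsystem_scores, weights):
    """Pick the class with the highest fused score."""
    fused = fuse_scores(subsystem_scores, weights)
    return max(fused, key=fused.get)

# Example per-modality scores for one utterance (made up for illustration).
face   = {"happy": 0.6, "sad": 0.1, "angry": 0.1, "neutral": 0.2}
text   = {"happy": 0.3, "sad": 0.4, "angry": 0.1, "neutral": 0.2}
speech = {"happy": 0.5, "sad": 0.2, "angry": 0.2, "neutral": 0.1}

label = predict([face, text, speech], weights=[0.4, 0.3, 0.3])
# → "happy"
```

In practice the fusion weights would be tuned on a development set; the paper does not specify its weighting scheme, so a uniform or validated weighting is a reasonable starting assumption.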
Keywords
Multimodal emotion recognition, Residual network, Speech recognition, Text emotion recognition