A Novel Emotion-Aware Method Based on the Fusion of Textual Description of Speech, Body Movements, and Facial Expressions

IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT (2022)

Abstract
Emotion computing is a necessary part of advanced human-computer interaction. An appropriate description of a character's facial expressions, body language, and speaking style in a novel enables readers to infer the character's emotions. Moreover, information from multiple modalities is complementary. Fusing information from multiple modalities into the textual modality yields better fusion results and overcomes the bias inherent in interpreting any single modality. Inspired by these facts, we develop a novel emotion-aware method based on the fusion of textual descriptions of speech, body movements, and facial expressions, which reduces the dimensionality of speech, body movements, and facial expressions by unifying the three types of information into a single representation. Specifically, to fuse multimodal features for emotion recognition, we propose a two-stage neural network. First, a bidirectional long short-term memory-conditional random field (Bi-LSTM-CRF) network and a back-propagation neural network (BPNN) are used to analyze the extracted vocal and visual features of facial expressions, body movements, and speech, in order to obtain textual descriptions of the different features. Second, the textual descriptions of the features are fused through a neural network with a self-organizing map (SOM) layer and are fed to compensation layers trained on a web-based corpus. The advantages of this method are that it utilizes depth information to track facial and body movements and employs an explainable textual intermediate representation to fuse the features. We experimentally tested the emotion-aware system in real-world applications, and the results indicate that our system can quickly and stably recognize human emotions. Compared with other unimodal and multimodal-fusion algorithms, our method is more accurate, improving accuracy by up to 30% over the unimodal methods.
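The abstract does not include implementation details, so the sketch below is only a rough illustration of the two-stage idea in PyTorch/NumPy: a Bi-LSTM tagger standing in for the Bi-LSTM-CRF (the CRF/Viterbi decoder is omitted), a plain MLP as the BPNN, and a small Kohonen SOM for the fusion stage. All class names, layer sizes, tag counts, and the SOM grid are hypothetical assumptions, not the paper's actual architecture.

```python
# Illustrative sketch of the two-stage pipeline described in the abstract.
# All dimensions and label sets below are hypothetical placeholders.
import numpy as np
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Stage 1a: Bi-LSTM emission scores over textual-descriptor tags.
    In the paper a CRF decoder sits on top; it is omitted here."""
    def __init__(self, feat_dim=40, hidden=64, n_tags=20):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.emit = nn.Linear(2 * hidden, n_tags)

    def forward(self, frames):                  # frames: (B, T, feat_dim)
        h, _ = self.lstm(frames)
        return self.emit(h)                     # (B, T, n_tags) tag scores

class BPNN(nn.Module):
    """Stage 1b: back-propagation neural network (a plain MLP) mapping a
    pooled feature vector to a small set of textual descriptors."""
    def __init__(self, in_dim=34, hidden=32, n_descriptors=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_descriptors))

    def forward(self, x):
        return self.net(x)

class SOM:
    """Stage 2: a tiny self-organizing map that clusters fused embeddings
    of the textual descriptions; emotion labels would be read off the
    winning unit after training."""
    def __init__(self, rows=5, cols=5, dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(rows, cols, dim))

    def bmu(self, v):
        # Best-matching unit: grid cell whose weight is closest to v.
        d = np.linalg.norm(self.w - v, axis=2)
        return np.unravel_index(d.argmin(), d.shape)

    def train(self, data, epochs=10, lr=0.5, radius=2.0):
        for _ in range(epochs):
            for v in data:
                r, c = self.bmu(v)
                for i in range(self.w.shape[0]):
                    for j in range(self.w.shape[1]):
                        dist2 = (i - r) ** 2 + (j - c) ** 2
                        h = np.exp(-dist2 / (2 * radius ** 2))
                        self.w[i, j] += lr * h * (v - self.w[i, j])

if __name__ == "__main__":
    tagger = BiLSTMTagger()
    scores = tagger(torch.randn(2, 50, 40))     # two 50-frame sequences
    print("tag score shape:", tuple(scores.shape))

    # Stand-in for fused embeddings of the textual descriptions.
    fused = np.random.default_rng(1).normal(size=(100, 16))
    som = SOM()
    som.train(fused)
    print("winning unit for sample 0:", som.bmu(fused[0]))
```

In a full system, the tag sequences from stage 1 would be rendered as text, embedded, and only then clustered by the SOM; the web-corpus compensation layers mentioned in the abstract are not reproducible from the abstract alone and are left out.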
Keywords
Emotion recognition, Feature extraction, Speech recognition, Neural networks, Fuses, Physiology, Face recognition, Body movements, facial expressions, multimodal emotion recognition, psychological problem, text-level feature fusion