Multi-Head Attention For Speech Emotion Recognition With Auxiliary Learning Of Gender Recognition

2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020)

Abstract
The paper presents a Multi-Head Attention deep learning network for Speech Emotion Recognition (SER) that takes Log mel-Filter Bank Energies (LFBE) spectral features as input. The multi-head attention, together with a position embedding, jointly attends to information from different representations of the same LFBE input sequence. The position embedding helps the model attend to the dominant emotion features by identifying their positions in the sequence. In addition to multi-head attention and position embedding, we apply multi-task learning with gender recognition as an auxiliary task. The auxiliary task helps in learning the gender-specific features that influence the emotion characteristics of speech, and improves the accuracy of Speech Emotion Recognition, the primary task. We conducted all our experiments on the IEMOCAP dataset. We achieve an overall accuracy of 76.4% and an average class accuracy of 70.1%, which are 5.3% and 6.2% higher, respectively, than the state-of-the-art SER models for four emotion classes.
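The architecture described in the abstract can be sketched in a few lines of PyTorch. This is not the authors' code; it is a minimal illustration of the three stated ingredients (learned position embedding added to LFBE frames, multi-head self-attention over the frame sequence, and two classification heads for the primary emotion task and the auxiliary gender task). All dimensions below (64-band LFBE, 4 attention heads, a 200-frame maximum) are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SERMultiTask(nn.Module):
    """Sketch of a multi-head attention SER model with an auxiliary gender head.
    Hyperparameters are assumptions for illustration only."""
    def __init__(self, n_mels=64, n_heads=4, max_len=200,
                 n_emotions=4, n_genders=2):
        super().__init__()
        # Learned position embedding, added to each LFBE frame so attention
        # can exploit the position of dominant emotion features.
        self.pos_emb = nn.Embedding(max_len, n_mels)
        self.attn = nn.MultiheadAttention(n_mels, n_heads, batch_first=True)
        self.emotion_head = nn.Linear(n_mels, n_emotions)   # primary task
        self.gender_head = nn.Linear(n_mels, n_genders)     # auxiliary task

    def forward(self, lfbe):
        # lfbe: (batch, frames, n_mels) LFBE spectral features
        t = lfbe.size(1)
        pos = self.pos_emb(torch.arange(t, device=lfbe.device))
        x = lfbe + pos                      # inject position information
        x, _ = self.attn(x, x, x)           # self-attention over frames
        x = x.mean(dim=1)                   # temporal average pooling
        return self.emotion_head(x), self.gender_head(x)

model = SERMultiTask()
emo_logits, gen_logits = model(torch.randn(2, 100, 64))  # 2 utterances, 100 frames
```

In multi-task training, the two heads would typically be optimized jointly, e.g. `loss = ce(emo_logits, emo_labels) + lam * ce(gen_logits, gen_labels)`, where the weight `lam` on the auxiliary gender loss is a tunable hyperparameter.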
Keywords
Speech emotion recognition, Multi-Head Attention, multi-task learning, position embedding