Multimodal Speech Emotion Recognition Using Modality-specific Self-Supervised Frameworks

CoRR (2023)

Abstract
Emotion recognition is a topic of significant interest in assistive robotics due to the need to equip robots with the ability to comprehend human behavior, facilitating their effective interaction in our society. Consequently, efficient and dependable emotion recognition systems supporting optimal human-machine communication are required. Multiple modalities (including speech, audio, text, images, and videos) are typically exploited in emotion recognition tasks. Much relevant research merges several data modalities and trains deep learning models on low-level data representations. However, most existing emotion databases are not large (or complex) enough for machine learning approaches to learn detailed representations. This paper explores modality-specific pre-trained transformer frameworks for self-supervised learning of speech and text representations, enabling data-efficient emotion recognition while achieving state-of-the-art performance. The model applies feature-level fusion with nonverbal cues from motion capture to provide multimodal speech emotion recognition. Trained on the publicly available IEMOCAP dataset, it achieves an overall accuracy of 77.58% across four emotions, outperforming state-of-the-art approaches.
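
As a rough illustration of the feature-level fusion described in the abstract, the sketch below concatenates pooled embeddings from pre-trained speech and text encoders with motion-capture features before a small classification head. The specific checkpoints (facebook/wav2vec2-base, bert-base-uncased), the MoCap feature dimensionality, and the head layout are assumptions for illustration; the paper's actual encoders and dimensions are not named in the abstract and may differ.

```python
# Minimal sketch of feature-level fusion for multimodal speech emotion recognition.
# Encoder checkpoints and mocap_dim are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, BertModel

class FusionEmotionClassifier(nn.Module):
    def __init__(self, mocap_dim=189, hidden_dim=256, num_emotions=4):
        super().__init__()
        # Modality-specific self-supervised encoders (assumed checkpoints).
        self.speech_encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        fused_dim = (self.speech_encoder.config.hidden_size
                     + self.text_encoder.config.hidden_size
                     + mocap_dim)
        # Classification head: dense layer, dropout, then logits over four emotions.
        self.head = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden_dim, num_emotions),
        )

    def forward(self, waveform, input_ids, attention_mask, mocap_feats):
        # Mean-pool frame/token embeddings to one vector per modality.
        speech_emb = self.speech_encoder(waveform).last_hidden_state.mean(dim=1)
        text_emb = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state.mean(dim=1)
        # Feature-level fusion: concatenate modality vectors with MoCap cues.
        fused = torch.cat([speech_emb, text_emb, mocap_feats], dim=-1)
        return self.head(fused)  # raw logits; apply softmax / cross-entropy outside
```

In this sketch the softmax is applied by the loss function (e.g., cross-entropy) rather than inside the model, which is the usual PyTorch convention; the abstract's mention of a softmax activation and Adam optimizer would correspond to the training loop around this module.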
Keywords
Emotion Recognition, Speech Emotion Recognition, Multimodal Emotion Recognition, Multimodal Speech, Deep Learning, Deep Learning Models, Multiple Modalities, Motion Capture, Self-supervised Learning, Emotion Recognition Task, Feature-level Fusion, Contralateral, Convolutional Layers, Feature Values, Adam Optimizer, Confusion Matrix, Dense Layer, Speech Recognition, Stochastic Gradient Descent, Softmax Function, Multimodal Model, Emotion Categories, Self-attention Layer, Text Modality, Fusion Approach, Softmax Activation, Dropout Layer, Classification Head, Multimodal System, Bidirectional Long Short-term Memory