MDEmoNet: A Multimodal Driver Emotion Recognition Network for Smart Cockpit

Chenhao Hu, Shenyu Gu, Mengjie Yang, Gang Han, Chun Sing Lai, Mingyu Gao, Zhexun Yang, Guojin Ma

IEEE International Conference on Consumer Electronics (2024)

Abstract
The automotive smart cockpit is an intelligent, connected in-vehicle consumer electronics product that provides a safe, efficient, comfortable, and enjoyable human-machine interaction experience. Emotion recognition technology can help the smart cockpit better understand the driver's needs and state, improving both the driving experience and safety. Driver emotion recognition currently faces challenges such as low accuracy and high latency. In this paper, we propose a multimodal driver emotion recognition model. To the best of our knowledge, this is the first work to improve the accuracy of driver emotion recognition by using facial video and driving behavior (brake pedal force, vehicle Y-axis position, and Z-axis position) as inputs and by employing a multi-task training approach. For verification, the proposed scheme is compared with mainstream state-of-the-art methods on the publicly available multimodal driver emotion dataset PPB-Emo.
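The pipeline sketched in the abstract (per-modality encoders, feature fusion, multi-task heads) can be illustrated as follows. This is a minimal NumPy sketch, not the paper's implementation: all layer sizes, the 7-class emotion head, and the auxiliary task head are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    # Toy encoder: a stack of random weight matrices (illustration only).
    return [rng.standard_normal((i, o)) * 0.1 for i, o in zip(dims, dims[1:])]

def forward(layers, x):
    for w in layers:
        x = np.maximum(x @ w, 0.0)  # linear layer + ReLU
    return x

# Assumed dimensions: a 128-d facial-video feature and a 3-d driving-behavior
# vector (brake pedal force, vehicle Y-axis position, Z-axis position).
face_enc  = mlp([128, 64, 32])
behav_enc = mlp([3, 16, 32])
shared    = mlp([64, 32])                      # shared trunk after fusion
emotion_w = rng.standard_normal((32, 7)) * 0.1  # assumed 7 emotion classes
aux_w     = rng.standard_normal((32, 4)) * 0.1  # hypothetical auxiliary task

face  = rng.standard_normal((1, 128))
behav = rng.standard_normal((1, 3))

# Late fusion: concatenate the two modality embeddings, then share features
# between the emotion head and the auxiliary head (multi-task training).
fused = np.concatenate([forward(face_enc, face),
                        forward(behav_enc, behav)], axis=1)
h = forward(shared, fused)
emotion_logits = h @ emotion_w  # shape (1, 7)
aux_out        = h @ aux_w      # shape (1, 4)
```

In multi-task training, losses from both heads would be summed so the shared trunk learns features useful for emotion recognition and the auxiliary objective alike.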
Keywords
smart cockpit,driver emotion recognition,deep learning,multimodal fusion