GazMon: Eye Gazing Enabled Driving Behavior Monitoring and Prediction

2018 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)

Abstract
Automobiles have become one of the necessities of modern life, but have also introduced numerous traffic accidents that threaten drivers and other road users. Most state-of-the-art safety systems are passively triggered, reacting to dangerous road conditions or driving maneuvers only after they happen and are observed, which greatly limits the last chance for collision avoidance. Timely tracking and prediction of driving maneuvers call for a more direct interface beyond the traditional steering wheel, brake, and gas pedal. In this paper, we argue that a driver's eyes are that interface, as they are the first and essential window that gathers external information during driving. Our experiments suggest that a driver's gaze patterns appear prior to, and correlate with, driving maneuvers, making them suitable for maneuver prediction. We accordingly present GazMon, an active driving maneuver monitoring and prediction framework for driving assistance applications. GazMon extracts gaze information from a front camera and analyzes facial features, including facial landmarks, head pose, and iris centers, through a carefully constructed deep learning architecture. Both our on-road experiments and driving-simulator-based evaluations demonstrate the superiority of GazMon in predicting driving maneuvers as well as other distracted behaviors. It is readily deployable using RGB cameras and allows reuse of existing smartphones towards safer driving.
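The abstract describes a pipeline that derives facial landmarks, head pose, and iris centers from a front-facing camera and feeds those gaze features to a deep model that predicts upcoming maneuvers. The paper's exact network is not given on this page, so the following is only a minimal PyTorch sketch of such a multi-head design; the layer sizes, head dimensions, and the assumed five-class maneuver output are illustrative choices, not GazMon's published architecture.

```python
# Illustrative sketch only: layer sizes, head names, and the maneuver label set
# are assumptions for demonstration, not the architecture reported in the paper.
import torch
import torch.nn as nn

class GazeFeatureNet(nn.Module):
    """Shared CNN backbone with separate heads for facial landmarks, head pose,
    and iris centers, mirroring the facial features the abstract lists."""
    def __init__(self, num_landmarks=68, num_maneuvers=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.landmarks = nn.Linear(128, num_landmarks * 2)  # (x, y) per landmark
        self.head_pose = nn.Linear(128, 3)                  # yaw, pitch, roll
        self.iris_centers = nn.Linear(128, 4)               # left/right iris (x, y)
        # Maneuver predictor consumes the concatenated gaze features.
        self.maneuver = nn.Sequential(
            nn.Linear(num_landmarks * 2 + 3 + 4, 64), nn.ReLU(),
            nn.Linear(64, num_maneuvers),
        )

    def forward(self, face_crop):
        feat = self.backbone(face_crop)
        lm = self.landmarks(feat)
        pose = self.head_pose(feat)
        iris = self.iris_centers(feat)
        gaze_features = torch.cat([lm, pose, iris], dim=1)
        return self.maneuver(gaze_features)

# Example: one 128x128 RGB face crop from the front camera.
logits = GazeFeatureNet()(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 5]) -> scores over the assumed maneuver classes
```

In practice the per-frame gaze features would be accumulated over a short time window before classification, since the abstract emphasizes that gaze patterns appear prior to the maneuver itself.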
Keywords
Gaze, driving assistant, mobile computing, deep learning