Event-Based Vision Enhanced: A Joint Detection Framework In Autonomous Driving

2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME)(2019)

Cited by 36 | Views 40
Abstract
Due to motion blur at high speeds and a limited dynamic range, conventional frame-based cameras face a significant challenge in object detection, especially in autonomous driving. Event-based cameras, which offer high temporal resolution and high dynamic range, provide a new perspective for addressing this challenge. Motivated by this, this paper proposes a joint framework combining event-based and frame-based vision for vehicle detection. Specifically, separate event-based and frame-based streams are incorporated into a convolutional neural network (CNN). In addition, to accommodate the asynchronous events produced by event-based cameras, a convolutional spiking neural network (SNN) is used to generate visual attention maps so that the two streams can be synchronized. Moreover, Dempster-Shafer theory is introduced to merge the two CNN outputs in a joint decision model. Experimental results show that the proposed approach outperforms state-of-the-art methods that use only frame-based information, especially under fast motion and challenging illumination conditions.
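The fusion step in the joint decision model relies on Dempster's rule of combination, which merges two belief (mass) functions while redistributing conflicting mass. The following is a minimal sketch of that rule for a two-class frame (vehicle / non-vehicle); the frame, mass values, and function names are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of Dempster's rule of combination, as used conceptually
# in the paper's joint decision model. Mass functions map frozenset-valued
# focal elements (subsets of the frame of discernment) to belief mass.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions with Dempster's rule."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to contradictory pairs
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb
    # Normalize by the non-conflicting mass (1 - K)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

V = frozenset({"vehicle"})
N = frozenset({"non-vehicle"})
U = V | N  # the whole frame, representing ignorance

# Illustrative outputs: frame-based stream fairly confident, event-based less so
m_frame = {V: 0.7, N: 0.1, U: 0.2}
m_event = {V: 0.5, N: 0.2, U: 0.3}

fused = combine(m_frame, m_event)
print(fused[V])  # agreement between streams raises belief in "vehicle"
```

Because both streams favor the vehicle hypothesis, the combined mass on "vehicle" exceeds either input's, while conflicting evidence (one stream's "vehicle" paired with the other's "non-vehicle") is discarded and the remainder renormalized.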
Keywords
Event-based Vision, Neuromorphic Cameras, Convolutional Neural Networks, Spiking Neural Networks, Dempster-Shafer Theory