Radar-Lidar Fusion for Classification of Traffic Signaling Motion in Automotive Applications

2023 IEEE International Radar Conference (RADAR)

Abstract
Advanced driver-assistance systems (ADAS) use multiple sensors such as radar, lidar, and cameras to build a perception of the vehicle's surroundings that is robust to challenging weather conditions and individual sensor failures. In typical conditions, lidar and cameras perceive the surroundings much better than radar, whereas under low light or extreme weather (fog, rain, snow) radar outperforms both because it operates independently of a light source. These sensors help minimize driving errors by providing necessary information to the driver or by triggering automatic actions based on what the system perceives. However, in some unstructured environments without operational traffic lights, a person directs traffic through appropriate gestures, and recognizing human body language and gestures in such traffic-directing scenarios is significantly difficult for autonomous vehicles. To address this challenge, we present a new dataset of traffic signaling motions, based on the US traffic system, collected with millimeter-wave (mmWave) radar, camera, lidar, and a motion-capture system. Initial classification results on radar micro-Doppler ($\mu$-D) signatures and lidar data using a Multimodal Neural Network demonstrate that sensor fusion not only classifies traffic signaling motions very accurately (around 98%) but also outperforms radar-only and lidar-only classification by around 7% and 4%, respectively.
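The abstract describes a Multimodal Neural Network that fuses radar $\mu$-D signatures with lidar data but does not specify the architecture. Below is a minimal PyTorch sketch of one common way such late fusion is implemented: a two-branch CNN whose per-modality features are concatenated before classification. All layer sizes, input shapes, and the class count are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical late-fusion sketch: one CNN branch per modality, features
# concatenated, then a small classifier head. Not the paper's architecture.
import torch
import torch.nn as nn


def conv_branch(in_channels: int) -> nn.Sequential:
    """Small CNN feature extractor, identical in structure for both modalities."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d((4, 4)),  # fixed-size output regardless of input size
        nn.Flatten(),                  # -> 32 * 4 * 4 = 512 features per sample
    )


class RadarLidarFusionNet(nn.Module):
    def __init__(self, num_classes: int = 8):  # number of gesture classes is assumed
        super().__init__()
        self.radar_branch = conv_branch(in_channels=1)  # mu-D spectrogram, 1 channel
        self.lidar_branch = conv_branch(in_channels=1)  # lidar-derived image, 1 channel
        # Late fusion: concatenate per-modality features, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(512 + 512, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, radar: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.radar_branch(radar), self.lidar_branch(lidar)], dim=1)
        return self.classifier(fused)


# Smoke test with random stand-in data (batch of 2).
model = RadarLidarFusionNet()
radar = torch.randn(2, 1, 64, 64)  # placeholder mu-D spectrogram crops
lidar = torch.randn(2, 1, 64, 64)  # placeholder lidar projections
logits = model(radar, lidar)
print(logits.shape)  # torch.Size([2, 8])
```

Late fusion of this kind keeps each modality's feature extractor independent, which is consistent with the abstract's comparison of fused, radar-only, and lidar-only classifiers; feeding a single branch's features to its own head yields the unimodal baselines.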
Keywords
Multimodal Neural Network,autonomy,traffic gesture classification,mmWave,ADAS,CNN