Radar-Enhanced Image Fusion-based Object Detection for Autonomous Driving

Yaqing Gu, Shiyuan Meng, Kun Shi


Accurate and robust object detection is imperative for the implementation of autonomous driving. In real-world scenarios, the effectiveness of image-based detectors is limited by low visibility and harsh conditions. Owing to their immunity to environmental variability, millimeter-wave (mmWave) radar sensors complement camera sensors, opening up the possibility of radar-camera fusion to improve object detection performance. In this paper, we construct a Radar-Enhanced image Fusion Network (REFNet) for 2D object detection in autonomous driving. Specifically, the radar data is projected onto the camera image plane to unify the data format of the heterogeneous sensing modalities. To overcome the sparsity of radar point clouds, we devise an Uncertainty Radar Block (URB) that increases the density of radar points by accounting for the azimuth uncertainty of radar measurements. Additionally, we design an adaptive network architecture that supports multi-level fusion and can determine the optimal fusion level. Moreover, we incorporate a robust attention module within the fusion network to exploit the synergy of radar and camera information. Evaluated on the canonical nuScenes dataset, our proposed method consistently and significantly outperforms the image-only version under all scenarios, especially in nighttime and rainy conditions.
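The projection step described above (mapping radar returns onto the camera image plane to unify the two modalities) can be sketched with a standard pinhole camera model. This is a minimal illustration, not the paper's implementation: the function name, the intrinsic values, and the assumption that the radar points are already expressed in camera coordinates (i.e., extrinsics have been applied) are all hypothetical.

```python
import numpy as np

def project_radar_to_image(points_3d, K):
    """Project radar points (N, 3), given in camera coordinates,
    onto the image plane using a pinhole intrinsic matrix K (3, 3).
    Returns (N, 2) pixel coordinates and the per-point depths."""
    depths = points_3d[:, 2]
    # Homogeneous projection: [u*w, v*w, w] = K @ X for each point.
    uvw = (K @ points_3d.T).T          # (N, 3)
    pixels = uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth
    return pixels, depths

# Example with made-up intrinsics: focal length 1000 px,
# principal point at (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# One radar point 10 m straight ahead of the camera lands
# exactly on the principal point.
pts = np.array([[0.0, 0.0, 10.0]])
pix, d = project_radar_to_image(pts, K)
```

In practice (e.g., with the nuScenes devkit), a rigid-body transform from the radar frame to the camera frame would precede this projection, and points with non-positive depth would be filtered out before the divide.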
Keywords
object detection, radar-enhanced, fusion-based