The fusion of multi-modal perception in autonomous driving plays a pivotal role in vehicle behavior decision"/>

GRC-Net: Fusing GAT-Based 4D Radar and Camera for 3D Object Detection

SAE Technical Paper Series (2023)

Abstract
The fusion of multi-modal perception in autonomous driving plays a pivotal role in vehicle behavior decision-making. However, much of the previous research has focused predominantly on the fusion of Lidar and cameras. Although Lidar offers an ample supply of point cloud data, its high cost and the substantial volume of point cloud data can lead to computational delays. Consequently, investigating perception fusion in the context of 4D millimeter-wave radar is of paramount importance for cost reduction and enhanced safety. Nevertheless, 4D millimeter-wave radar faces challenges including sparse point clouds, limited information content, and a lack of fusion strategies. In this paper, we introduce, for the first time, an approach that leverages Graph Neural Networks to represent features of 4D millimeter-wave radar point clouds. This approach effectively extracts unstructured point cloud features and mitigates the detection losses caused by point cloud sparsity. Additionally, we propose the Multi-Modal Fusion Module (MMFM), which aligns and fuses features from the graph, the radar pseudo-image generated by pillar encoding, and the camera image within a shared geometric space. We validate our model on the View-of-Delft (VoD) dataset. Experimental results demonstrate that the proposed method efficiently fuses camera and 4D radar features, resulting in enhanced 3D detection performance.
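
The paper itself does not include code; the following is a minimal sketch, assuming a PyTorch Geometric GATConv stack over a k-NN graph of radar returns, of how graph attention could produce per-point features from a sparse 4D radar point cloud before fusion with pillar and camera features. The class name RadarGATExtractor, the input channel layout (x, y, z, Doppler, RCS), the hidden sizes, and the neighbourhood size k are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the authors' code): GAT feature extraction over a
# sparse 4D radar point cloud, assuming PyTorch Geometric is installed.
import torch
from torch import nn
from torch_geometric.nn import GATConv, knn_graph


class RadarGATExtractor(nn.Module):
    # Hypothetical per-point feature extractor for 4D radar returns.
    # Each point carries (x, y, z, doppler, rcs) -> 5 input channels.
    # A k-NN graph links nearby returns so attention can aggregate
    # context despite the sparsity of the point cloud.
    def __init__(self, in_dim=5, hidden=64, out_dim=128, k=8):
        super().__init__()
        self.k = k
        self.gat1 = GATConv(in_dim, hidden, heads=4, concat=True)      # -> [N, 4*hidden]
        self.gat2 = GATConv(4 * hidden, out_dim, heads=1, concat=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, points):
        # points: [N, 5] radar returns for one sample
        xyz = points[:, :3]
        edge_index = knn_graph(xyz, k=self.k, loop=False)  # graph over spatial neighbours
        h = self.act(self.gat1(points, edge_index))
        h = self.gat2(h, edge_index)                        # [N, out_dim] per-point features
        return h


if __name__ == "__main__":
    radar = torch.randn(200, 5)           # 200 sparse radar returns
    feats = RadarGATExtractor()(radar)
    print(feats.shape)                    # torch.Size([200, 128])

In the paper's pipeline, per-point graph features of this kind would then be aligned by the MMFM with the pillar pseudo-image and camera image features in a geometric space before 3D detection; that alignment and fusion step is omitted from this sketch.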