RadarCam-Depth: Radar-Camera Fusion for Depth Estimation with Learned Metric Scale
CoRR (2024)
Abstract
We present a novel approach for metric dense depth estimation based on the
fusion of a single-view image and a sparse, noisy Radar point cloud. The direct
fusion of heterogeneous Radar and image data, or their encodings, tends to
yield dense depth maps with significant artifacts, blurred boundaries, and
suboptimal accuracy. To circumvent this issue, we learn to augment versatile
and robust monocular depth prediction with the dense metric scale induced from
sparse and noisy Radar data. We propose a Radar-Camera framework for highly
accurate and fine-detailed dense depth estimation with four stages, including
monocular depth prediction, global scale alignment of monocular depth with
sparse Radar points, quasi-dense scale estimation through learning the
association between Radar points and image patches, and local scale refinement
of dense depth using a scale map learner. Our proposed method significantly
outperforms the state-of-the-art Radar-Camera depth estimation methods by
reducing the mean absolute error (MAE) of depth estimation by 25.6%
on the challenging nuScenes dataset and our self-collected ZJU-4DRadarCam
dataset, respectively.
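
To illustrate the global scale alignment stage described above, the sketch below projects sparse radar points into the image, samples the monocular depth at those pixels, and fits a single metric scale by least squares. This is a minimal sketch under assumed interfaces: the function name, the intrinsics/extrinsics conventions, and the closed-form scale-only fit are illustrative choices, not the paper's exact formulation, which additionally learns quasi-dense and locally refined scale.

```python
import numpy as np

def global_scale_alignment(mono_depth, radar_points, K, T_cam_radar):
    """Align a relative monocular depth map to metric scale using sparse
    radar returns (illustrative sketch, not the paper's implementation).

    mono_depth   : (H, W) monocular depth prediction (arbitrary scale)
    radar_points : (N, 3) radar points in the radar frame
    K            : (3, 3) camera intrinsic matrix
    T_cam_radar  : (4, 4) radar-to-camera extrinsic transform
    """
    H, W = mono_depth.shape

    # Transform radar points into the camera frame.
    pts_h = np.hstack([radar_points, np.ones((radar_points.shape[0], 1))])
    pts_cam = (T_cam_radar @ pts_h.T).T[:, :3]

    # Keep points in front of the camera and project them to pixels.
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Keep projections that land inside the image bounds.
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z_radar = u[valid], v[valid], pts_cam[valid, 2]

    # Sample monocular depth at the radar pixels and fit one global scale s
    # minimizing ||s * d_mono - z_radar||^2 (closed-form least squares).
    d_mono = mono_depth[v, u]
    s = np.dot(d_mono, z_radar) / np.dot(d_mono, d_mono)
    return s * mono_depth, s
```

In the full pipeline, this globally aligned depth would then be refined: a learned association between radar points and image patches spreads the sparse scale into a quasi-dense scale map, and a scale map learner performs the final local refinement.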