MonoLSS: Learnable Sample Selection For Monocular 3D Detection
CoRR (2023)
Abstract
In the field of autonomous driving, monocular 3D detection is a critical task
which estimates 3D properties (depth, dimension, and orientation) of objects in
a single RGB image. Previous works have used features in a heuristic way to
learn 3D properties, without considering that inappropriate features could have
adverse effects. In this paper, we introduce sample selection, so that only
suitable samples are trained to regress the 3D properties. To select
samples adaptively, we propose a Learnable Sample Selection (LSS) module, which
is based on Gumbel-Softmax and a relative-distance sample divider. The LSS
module works under a warm-up strategy leading to an improvement in training
stability. Additionally, since the LSS module for 3D property sample selection
relies on object-level features, we further develop a data augmentation method
named MixUp3D that enriches 3D property samples while conforming to imaging
principles and introducing no ambiguity. As two orthogonal methods,
the LSS module and MixUp3D can be utilized independently or in conjunction.
Extensive experiments show that their combined use produces synergistic
effects, yielding improvements beyond the sum of their individual
contributions. Leveraging the LSS module and MixUp3D, without any extra data,
our method, named MonoLSS, ranks 1st in all three categories (Car, Cyclist,
and Pedestrian) on the KITTI 3D object detection benchmark and achieves
competitive results on both the Waymo dataset and KITTI-nuScenes cross-dataset
evaluation. The code is included in the supplementary material and will be
released to facilitate related academic and industrial studies.
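The abstract states that the LSS module is based on Gumbel-Softmax. As a rough illustration only (not the authors' implementation, whose relative-distance sample divider and warm-up strategy are not detailed here), the sketch below shows plain Gumbel-Softmax sampling over hypothetical per-sample confidence scores, producing a differentiable, near-one-hot selection distribution; the scores and temperature are assumptions for demonstration.

```python
import math
import random

def gumbel_softmax(logits, tau=1.0):
    """Gumbel-Softmax: a differentiable approximation to sampling one item.

    Adds Gumbel(0, 1) noise to each logit and applies a temperature-scaled
    softmax; as tau -> 0 the output approaches a one-hot selection vector.
    """
    # Sample Gumbel(0, 1) noise via the inverse-CDF trick.
    gumbels = [-math.log(-math.log(random.random() + 1e-20) + 1e-20)
               for _ in logits]
    y = [(l + g) / tau for l, g in zip(logits, gumbels)]
    # Numerically stable softmax.
    m = max(y)
    exps = [math.exp(v - m) for v in y]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical per-sample scores for six candidate samples of one object.
scores = [0.1, 2.0, 0.3, 1.5, 0.2, 0.05]
probs = gumbel_softmax(scores, tau=0.5)
selected = probs.index(max(probs))  # index of the sample favored this draw
```

Because the softmax output is smooth in the logits, gradients can flow back into the scoring network during training, while low temperatures make the selection effectively discrete, which is the property a learnable sample-selection module relies on.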