Better Call SAL: Towards Learning to Segment Anything in Lidar
arXiv (2024)
Abstract
We propose SAL (Segment Anything in Lidar), a method consisting of a
text-promptable zero-shot model for
segmenting and classifying any object in Lidar, and a pseudo-labeling engine
that facilitates model training without manual supervision. While the
established paradigm for Lidar Panoptic Segmentation (LPS) relies on
manual supervision for a handful of object classes defined a priori, we utilize
2D vision foundation models to generate 3D supervision "for free". Our
pseudo-labels consist of instance masks and corresponding CLIP tokens, which we
lift to Lidar using calibrated multi-modal data. By training our model on these
labels, we distill the 2D foundation models into our Lidar
model. Even without manual labels, our model achieves 91% of the fully
supervised state-of-the-art in terms of class-agnostic segmentation and 44% in
terms of zero-shot LPS. Furthermore, we outperform several baselines that
do not distill but only lift image features to 3D. More importantly, we
demonstrate that SAL supports arbitrary class prompts, can be easily
extended to new datasets, and shows significant potential to improve with
increasing amounts of self-labeled data.
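The pseudo-labeling step described above hinges on lifting 2D instance masks to Lidar via calibrated multi-modal data. A minimal sketch of that projection, assuming a pinhole camera model with known intrinsics and a Lidar-to-camera extrinsic transform (function name, argument layout, and matrices are illustrative, not SAL's actual API):

```python
import numpy as np

def lift_mask_to_lidar(points, K, T_cam_from_lidar, mask):
    """Assign each Lidar point the instance id of the pixel it projects to.

    points: (N, 3) Lidar points in the sensor frame.
    K: (3, 3) camera intrinsics.
    T_cam_from_lidar: (4, 4) Lidar-to-camera extrinsic transform.
    mask: (H, W) integer instance-id image (0 = background / unlabeled).
    Returns: (N,) per-point instance ids (0 where no label applies).
    """
    N = points.shape[0]
    homog = np.hstack([points, np.ones((N, 1))])           # (N, 4) homogeneous coords
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]            # points in camera frame
    labels = np.zeros(N, dtype=np.int64)
    in_front = cam[:, 2] > 0                               # discard points behind camera
    uvw = (K @ cam[in_front].T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)    # perspective divide -> pixels
    H, W = mask.shape
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = mask[uv[valid, 1], uv[valid, 0]]         # image indexing is (row, col)
    return labels
```

In practice a system like this also has to handle occlusion and mask-boundary noise; the sketch only shows the geometric core of transferring 2D supervision to 3D points.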