SAID-NeRF: Segmentation-AIDed NeRF for Depth Completion of Transparent Objects
CoRR (2024)
Abstract
Acquiring accurate depth information of transparent objects using
off-the-shelf RGB-D cameras is a well-known challenge in Computer Vision and
Robotics. Depth estimation/completion methods are typically employed and
trained on datasets with quality depth labels acquired from simulation,
additional sensors, or specialized data collection setups and known 3D models.
However, acquiring reliable depth information for datasets at scale is not
straightforward, limiting training scalability and generalization. Neural
Radiance Fields (NeRFs) are learning-free approaches and have demonstrated wide
success in novel view synthesis and shape recovery. However, heuristics and
controlled environments (e.g., lighting, backgrounds) are often required to
accurately capture specular surfaces. In this paper, we propose using Visual
Foundation Models (VFMs) for segmentation in a zero-shot, label-free way to
guide the NeRF reconstruction process for these objects via the simultaneous
reconstruction of semantic fields, along with extensions that increase robustness. Our
proposed method, Segmentation-AIDed NeRF (SAID-NeRF), achieves strong
performance on depth completion datasets for transparent objects and on robotic
grasping.