Partial-to-Partial Point Generation Network for Point Cloud Completion

IEEE Robotics and Automation Letters (2022)

Abstract
Point cloud completion aims to predict dense, complete 3D shapes from the sparse, incomplete point clouds captured by 3D sensors or scanners. It plays an essential role in applications such as autonomous driving, 3D reconstruction, augmented reality, and robot navigation. Existing point cloud completion methods follow the encoder-decoder paradigm, recovering the complete point cloud with a coarse-to-fine strategy. However, relying only on a global feature is difficult and leads to blurring of the global structure and distortion of local details. To address this problem, we propose a novel Partial-to-Partial Point Generation Network (P(2)GNet), a learning-based approach for point cloud completion. In P(2)GNet, a feature disentangle encoder extracts the global feature and a missing code, and novel-view partial point clouds are generated conditioned on the view-related missing code. To better aggregate these partial point clouds, an attentive sampling module is proposed that samples the multiple partial point clouds into the final complete result. Extensive experiments on several public benchmarks demonstrate that our P(2)GNet outperforms state-of-the-art point cloud completion methods.
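The aggregation step described above can be sketched in code. The following is a minimal illustration of the idea behind an attentive sampling module: candidate points from several partial clouds are scored and the top-scored points form the completed shape. All specifics here (the scoring network, its size, the selection rule) are assumptions for illustration; the abstract does not give the module's actual architecture, and in P(2)GNet the scoring would be learned end-to-end rather than use random weights.

```python
import numpy as np

def attentive_sampling(partial_clouds, n_out, rng=None):
    """Aggregate several partial point clouds into one cloud of n_out
    points by scoring every candidate point and keeping the top-scored
    ones. The scoring MLP below uses random placeholder weights; a real
    module would learn them (hypothetical detail, not from the paper)."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Stack all candidate points from the partial clouds: shape (M, 3).
    points = np.concatenate(partial_clouds, axis=0)
    # Tiny two-layer scoring network (stand-in for the learned scorer).
    w1 = rng.standard_normal((3, 16))
    w2 = rng.standard_normal((16, 1))
    scores = (np.tanh(points @ w1) @ w2).squeeze(-1)  # one score per point
    # Keep the n_out highest-scoring points as the completed cloud.
    idx = np.argsort(-scores)[:n_out]
    return points[idx]

# Two partial clouds of 256 points each, merged into a 384-point result.
a = np.random.default_rng(1).standard_normal((256, 3))
b = np.random.default_rng(2).standard_normal((256, 3))
complete = attentive_sampling([a, b], n_out=384)
print(complete.shape)  # (384, 3)
```

A selection-based aggregator like this keeps only points that already exist in some partial cloud, which matches the partial-to-partial idea of composing the complete shape from generated partial views rather than decoding it from a single global feature.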
Keywords
Computer Vision for Automation, Deep Learning for Visual Perception, Point Cloud Completion