While many works focus on 3D reconstruction from images, in this paper we focus on 3D shape reconstruction and completion from a variety of 3D inputs that are deficient in some respect: low- and high-resolution voxels, sparse and dense point clouds, and complete or incomplete data.
Research on local descriptors for pairwise registration of 3D point clouds centers on deep learning approaches, which capture and encode evidence hidden from hand-engineered descriptors.
We presented PointAugment, to our knowledge the first auto-augmentation framework for 3D point clouds, which jointly considers the capability of the classification network and the complexity of the training samples.
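The following is an illustrative sketch, not the authors' formulation, of this idea: an augmentor is rewarded when its augmented sample is harder for the current classifier than the original, but penalized when the gap exceeds a margin, so sample difficulty adapts to the classifier's capability. The `classifier` interface, the `margin` value, and the specific loss form are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def augmentor_loss(classifier, original, augmented, labels, margin=1.0):
    """Encourage augmented samples to be harder than the originals, but only within a margin."""
    loss_orig = F.cross_entropy(classifier(original), labels)
    loss_aug = F.cross_entropy(classifier(augmented), labels)
    # Augmentor wants loss_aug > loss_orig (a harder sample), but not by more than `margin`.
    return torch.relu(loss_orig - loss_aug) + torch.relu(loss_aug - loss_orig - margin)

def classifier_loss(classifier, original, augmented, labels):
    """The classifier trains on both the original and the augmented sample."""
    return (F.cross_entropy(classifier(original), labels)
            + F.cross_entropy(classifier(augmented), labels))
```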
We propose a feature-metric framework for point cloud registration that can be trained in a semi-supervised or unsupervised manner.
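As a minimal sketch of the feature-metric idea (not the authors' code), a shared encoder maps each cloud to a global feature, and registration minimizes the feature difference between the aligned source and the target instead of a point-to-point geometric error. The `PointEncoder` architecture, `apply_transform` helper, and loss form below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Per-point MLP followed by max pooling -> one global feature per cloud."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )

    def forward(self, pts):                 # pts: (B, N, 3)
        f = self.mlp(pts.transpose(1, 2))   # (B, feat_dim, N)
        return f.max(dim=2).values          # (B, feat_dim)

def apply_transform(pts, R, t):
    """Rigidly transform point clouds: pts (B, N, 3), R (B, 3, 3), t (B, 3)."""
    return pts @ R.transpose(1, 2) + t.unsqueeze(1)

def feature_metric_loss(encoder, source, target, R, t):
    """Unsupervised objective: features of the aligned source should match the target's."""
    f_src = encoder(apply_transform(source, R, t))
    f_tgt = encoder(target)
    return (f_src - f_tgt).abs().mean()
```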
The single-resolution multi-layer perceptron (MLP) consists of a five-layer MLP and two linear layers, each followed by batch normalization and a ReLU activation.
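A sketch of this stack is given below: a five-layer point-wise MLP plus two linear layers, each followed by batch normalization and ReLU. The channel widths and the global max pooling step are illustrative assumptions, not values stated in the text.

```python
import torch.nn as nn

def mlp_block(in_ch, out_ch):
    """Shared point-wise layer (1x1 conv) + BatchNorm + ReLU."""
    return nn.Sequential(nn.Conv1d(in_ch, out_ch, 1), nn.BatchNorm1d(out_ch), nn.ReLU())

def linear_block(in_ch, out_ch):
    """Fully connected layer + BatchNorm + ReLU."""
    return nn.Sequential(nn.Linear(in_ch, out_ch), nn.BatchNorm1d(out_ch), nn.ReLU())

class SingleResolutionMLP(nn.Module):
    def __init__(self):
        super().__init__()
        # Five-layer MLP applied to every point of the (B, 3, N) input.
        self.point_mlp = nn.Sequential(
            mlp_block(3, 64), mlp_block(64, 64), mlp_block(64, 128),
            mlp_block(128, 256), mlp_block(256, 1024),
        )
        # Two linear layers applied after global max pooling.
        self.fc = nn.Sequential(linear_block(1024, 512), linear_block(512, 256))

    def forward(self, pts):                              # pts: (B, 3, N)
        global_feat = self.point_mlp(pts).max(dim=2).values
        return self.fc(global_feat)                      # (B, 256)
```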
The Deep Point Cloud Distance method estimates the distance from each point of one cloud to the continuous surface underlying the other point cloud.
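To make the point-to-surface (rather than point-to-point) idea concrete, here is a hand-crafted stand-in rather than the learned network: each query point is projected onto a tangent plane fitted to its k nearest neighbors in the other cloud. The function name and neighborhood size are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_surface_distance(query, reference, k=8):
    """For each point in `query` (M, 3), estimate its distance to the surface
    underlying `reference` (N, 3) via a local plane fit."""
    tree = cKDTree(reference)
    _, idx = tree.query(query, k=k)            # (M, k) neighbor indices
    dists = np.empty(len(query))
    for i, (p, nbr_idx) in enumerate(zip(query, idx)):
        nbrs = reference[nbr_idx]              # (k, 3) local patch
        centroid = nbrs.mean(axis=0)
        # Plane normal = direction of least variance of the local patch.
        _, _, vt = np.linalg.svd(nbrs - centroid)
        normal = vt[-1]
        dists[i] = abs(np.dot(p - centroid, normal))
    return dists
```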
Bidirectional optical flow explicitly guides consecutive sparse depth maps to generate an intermediate depth map, which is further refined by a warping layer.
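A minimal sketch of the warping step under stated assumptions: each depth map is backward-warped toward the intermediate time t with grid_sample, approximating the intermediate-to-source flow as a linearly scaled frame-to-frame flow; the linear scaling, the (x, y) flow channel order, and the blending weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def warp_depth(depth, flow):
    """Backward-warp `depth` (B, 1, H, W) by `flow` (B, 2, H, W), flow given in pixels (x, y)."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(depth.device)   # (2, H, W) pixel coordinates
    coords = grid.unsqueeze(0) + flow                               # sample locations per pixel
    # Normalize to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)           # (B, H, W, 2)
    return F.grid_sample(depth, norm_grid, align_corners=True)

def intermediate_depth(d0, d1, flow_01, flow_10, t=0.5):
    """Blend the two warped depth maps to approximate the depth at time t (assumed linear motion)."""
    d0_to_t = warp_depth(d0, -t * flow_01)           # approximate flow from time t back to frame 0
    d1_to_t = warp_depth(d1, -(1 - t) * flow_10)     # approximate flow from time t back to frame 1
    return (1 - t) * d0_to_t + t * d1_to_t
```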
This is the first dataset collected from an autonomous vehicle (AV) approved for testing on public roads that contains the full 360° sensor suite. nuScenes has the largest collection of 3D box annotations of any previously released dataset.