Patch-based Progressive 3D Point Set Upsampling

CVPR 2019, pages 5958-5967. arXiv:1811.11286.

Keywords: high-resolution point set, multilayer perceptrons, point set, neural network, 3D point set

Abstract:

We present a detail-driven deep neural network for point set upsampling. A high-resolution point set is essential for point-based rendering and surface reconstruction. Inspired by the recent success of neural image super-resolution techniques, we progressively train a cascade of patch-based upsampling networks on different levels of detail…

Introduction
  • The success of neural super-resolution techniques in image space encourages the development of upsampling methods for 3D point sets.
  • Dealing with 3D point sets is challenging since, unlike images, the data is unstructured and irregular [3,17,19,34,55].
  • Upsampling techniques are important, and yet the adaptation of image-space techniques to point sets is far from straightforward.
Highlights
  • The success of neural super-resolution techniques in image space encourages the development of upsampling methods for 3D point sets
  • We present a patch-based progressive upsampling network for point sets
  • We focus on point cloud upsampling and propose intra-level and inter-level point-based skip-connections
  • Our experiments show that the proposed feature expansion method results in a well distributed point set without using an additional loss
  • We propose a series of architectural improvements, including novel dense connections for point-wise feature extraction, code assignment for feature expansion, as well as bilateral feature interpolation for inter-level feature propagation. These improvements contribute to further performance boost and significantly improved parameter efficiency
  • We propose a progressive point set upsampling network that reveals detailed geometric structures from sparse and noisy inputs
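The feature expansion by "code assignment" mentioned above can be stated concretely: each point feature is duplicated r times and a distinct code is appended to each copy, so that subsequent shared layers can regress different outputs from identical features. The sketch below is only an illustration of this idea; the function name and the 1D code values in [-1, 1] are assumptions, not the paper's trained setting.

```python
def expand_features(features, ratio):
    """Feature expansion by code assignment (illustrative sketch).

    Duplicates every C-dim feature `ratio` times and appends a distinct
    scalar code in [-1, 1] to each copy, yielding ratio * N features of
    dimension C + 1.
    """
    if ratio > 1:
        codes = [-1.0 + 2.0 * r / (ratio - 1) for r in range(ratio)]
    else:
        codes = [0.0]
    return [list(f) + [c] for f in features for c in codes]

feats = [[0.3, 0.7], [0.1, 0.9]]        # N = 2 point features, C = 2
expanded = expand_features(feats, 2)
print(len(expanded), len(expanded[0]))  # 4 3
```

Because the appended codes already distinguish the copies, no additional repulsion loss is needed to spread the generated points, which matches the claim above about a well-distributed output.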
Methods
  • Given an unordered set of 3D points, the network generates a denser point set that lies on the underlying surface.
  • This problem is challenging when the point set is relatively sparse, or when the underlying surface has complex geometric and topological structures.
  • The authors propose an end-to-end progressive learning technique for point set upsampling.
  • Upsampling from 625 and 5000 input points is tested on the Sketchfab dataset.
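The progressive scheme trains a cascade of 2x upsampling units, each specializing in one level of detail. The sketch below shows only this cascade control flow; the learned unit is replaced by a naive midpoint-insertion stand-in (`upsample_unit` is an assumption for illustration, not the paper's network).

```python
import math

def upsample_unit(points):
    """Toy stand-in for one learned 2x upsampling unit.

    The paper's unit predicts new points from learned multi-level
    features; this sketch merely doubles the set by inserting the
    midpoint between each point and its nearest neighbour.
    """
    out = []
    for i, p in enumerate(points):
        # brute-force nearest neighbour (fine for a small sketch)
        nn = min((q for j, q in enumerate(points) if j != i),
                 key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))
        out.append(p)
        out.append(tuple((a + b) / 2 for a, b in zip(p, nn)))
    return out

def progressive_upsample(points, ratio):
    """Run log2(ratio) cascaded 2x steps, as in progressive training."""
    for _ in range(int(math.log2(ratio))):
        points = upsample_unit(points)
    return points

sparse = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
dense = progressive_upsample(sparse, 4)  # 4x = two cascaded 2x steps
print(len(dense))  # 12
```

In the actual method each step also operates on local patches whose spatial span shrinks with the receptive field, which is what lets the later steps focus on fine detail.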
Results
  • Qualitative and quantitative experiments show that the method significantly outperforms state-of-the-art learning-based [58, 59] and optimization-based [23] approaches.
  • The authors propose a series of architectural improvements, including novel dense connections for point-wise feature extraction, code assignment for feature expansion, as well as bilateral feature interpolation for inter-level feature propagation
  • These improvements contribute to further performance boost and significantly improved parameter efficiency
Conclusion
  • The authors propose a progressive point set upsampling network that reveals detailed geometric structures from sparse and noisy inputs.
  • The authors train the network step by step, where each step specializes in a certain level of detail.
  • The network reveals finer geometric details by reducing the spatial span as the scope of the receptive field shrinks.
  • Such adaptive patch-based architecture enables them to train on high-resolution point sets in an end-to-end fashion.
  • Extensive experiments and studies demonstrate the superiority of the method compared with state-of-the-art techniques.
Objectives
  • In the feature expansion unit, the authors aim to transform the extracted features.
Tables
  • Table 1: Quantitative comparison with state-of-the-art approaches for 16× upsampling
  • Table 2: Quantitative comparison with state-of-the-art approaches on the ModelNet10 dataset for 16× upsampling from 625 input points
  • Table 3: Ablation study with a 16× upsampling factor tested on the Sketchfab dataset using 625 points as input. We evaluate the contribution of each proposed component quantitatively with Chamfer distance (CD), Hausdorff distance (HD), and mean point-to-surface distance (P2F), and also report the number of parameters in the rightmost column
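The Chamfer and Hausdorff metrics in Table 3 can be stated concretely. The sketch below is one common definition; papers differ on details (squared vs. unsquared distances, summing vs. averaging the two directions), so this is not necessarily the paper's exact evaluation code.

```python
import math

def _nn_dist(p, points):
    """Distance from point p to its nearest neighbour in `points`."""
    return min(math.dist(p, q) for q in points)

def chamfer_distance(a, b):
    """Average nearest-neighbour distance, summed over both directions."""
    return (sum(_nn_dist(p, b) for p in a) / len(a) +
            sum(_nn_dist(q, a) for q in b) / len(b))

def hausdorff_distance(a, b):
    """Worst-case nearest-neighbour distance over both directions."""
    return max(max(_nn_dist(p, b) for p in a),
               max(_nn_dist(q, a) for q in b))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (2.0, 0.0)]
print(chamfer_distance(A, B), hausdorff_distance(A, B))  # 1.0 1.0
```

CD measures average fit between the upsampled set and the ground truth, while HD is sensitive to outliers, which is why both are reported together.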
Related work
  • Optimization-based approaches. Early optimization-based point set upsampling methods resort to shape priors. Alexa et al. [2] insert new points at the vertices of the Voronoi diagram, which is computed on the moving least squares (MLS) surface, assuming the underlying surface is smooth. Aiming to preserve sharp edges, Huang et al. [23] employ an anisotropic locally optimal projection (LOP) operator [22,36] to consolidate and push points away from the edges, followed by a progressive edge-aware upsampling procedure. Wu et al. [53] fill points in large areas of missing data by jointly optimizing both the surface and the inner points, using the extracted meso-skeleton to guide the surface point set resampling. These methods rely on the fitting of local geometry, e.g., normal estimation, and struggle with multi-scale structure preservation.
Funding
  • We thank the anonymous reviewers for their constructive comments and the SketchFab community for sharing their models. This work was supported in part by grant 200021 162958, ISF grant 2366/16, NSFC (61761146002), LHTD (20170003), and the National Engineering Laboratory for Big Data System Computing Technology.
  • Figure 11: 16× upsampling results from 625 input points (left) and reconstructed mesh (right).
  • Figure 12: 16× upsampling results from 5000 input points (left) and reconstructed mesh (right).
References
  • P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas. Learning representations and generative models for 3D point clouds. Proc. Int. Conf. on Machine Learning, 2018. 2
  • M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C. T. Silva. Computing and rendering point set surfaces. IEEE Trans. Visualization & Computer Graphics, 9(1):3–15, 2003. 2
  • M. Atzmon, H. Maron, and Y. Lipman. Point convolutional neural networks by extension operators. ACM Trans. on Graphics (Proc. of SIGGRAPH), 2018. 1
  • M. Berger, J. A. Levine, L. G. Nonato, G. Taubin, and C. T. Silva. A benchmark for surface reconstruction. ACM Trans. on Graphics, 32(2):20, 2013. 5
  • J.-D. Boissonnat, O. Devillers, S. Pion, M. Teillaud, and M. Yvinec. Triangulations in CGAL. Computational Geometry, 22:5–19, 2002. 5
  • P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, and G. Ranzuglia. MeshLab: an open-source mesh processing tool. In Eurographics Italian Chapter Conference, 2008. 5
  • M. Corsini, P. Cignoni, and R. Scopigno. Efficient and flexible sampling with blue noise properties of triangular meshes. IEEE Trans. Visualization & Computer Graphics, 18(6):914–924, 2012. 5
  • H. Deng, T. Birdal, and S. Ilic. PPF-FoldNet: Unsupervised learning of rotation invariant 3D local descriptors. arXiv preprint arXiv:1808.10322, 2018.
  • C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Analysis & Machine Intelligence, 38(2):295–307, 2016. 1
  • F. Engelmann, T. Kontogianni, J. Schult, and B. Leibe. Know what your neighbors do: 3D semantic segmentation of point clouds. arXiv preprint arXiv:1810.01151, 2018. 2
  • Y. Fan, H. Shi, J. Yu, D. Liu, W. Han, H. Yu, Z. Wang, X. Wang, and T. S. Huang. Balanced two-stage residual networks for image super-resolution. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition Workshops, pages 1157–1164. IEEE, 2017. 1, 2
  • M. Gadelha, R. Wang, and S. Maji. Multiresolution tree networks for 3D point cloud processing. arXiv preprint arXiv:1807.03520, 2018. 2
  • T. Groueix, M. Fisher, V. G. Kim, B. Russell, and M. Aubry. AtlasNet: A papier-mâché approach to learning 3D surface generation. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2018. 2, 4
  • P. Guerrero, Y. Kleiman, M. Ovsjanikov, and N. J. Mitra. PCPNet: learning local shape properties from raw point clouds. Computer Graphics Forum, 37(2):75–85, 2018. 2
  • S. Gurumurthy and S. Agrawal. High fidelity semantic shape completion for point clouds using latent optimization. arXiv preprint arXiv:1807.03407, 2018. 2
  • K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 770–778, 2016. 2, 3
  • P. Hermosilla, T. Ritschel, P.-P. Vazquez, A. Vinacua, and T. Ropinski. Monte Carlo convolution for learning on non-uniformly sampled point clouds. ACM Trans. on Graphics (Proc. of SIGGRAPH Asia), 37(6), 2018. 1
  • H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle. Surface reconstruction from unorganized points. Proc. of SIGGRAPH, pages 71–78, 1992. 6
  • B.-S. Hua, M.-K. Tran, and S.-K. Yeung. Pointwise convolutional neural networks. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 984–993, 2018. 1
  • G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2017. 2, 3
  • G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic depth. In Proc. Euro. Conf. on Computer Vision. Springer, 2016. 3
  • H. Huang, D. Li, H. Zhang, U. Ascher, and D. Cohen-Or. Consolidation of unorganized point clouds for surface reconstruction. ACM Trans. on Graphics (Proc. of SIGGRAPH Asia), 28(5):176:1–176:7, 2009. 2, 5, 7
  • H. Huang, S. Wu, M. Gong, D. Cohen-Or, U. Ascher, and H. Zhang. Edge-aware point set resampling. ACM Trans. on Graphics, 32(1):9:1–9:12, 2013. 1, 2, 5
  • M. Jiang, Y. Wu, and C. Lu. PointSIFT: A SIFT-like network module for 3D point cloud semantic segmentation. arXiv preprint arXiv:1807.00652, 2018. 2
  • T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. Proc. Int. Conf. on Learning Representations, 2018. 2, 3
  • M. Kazhdan and H. Hoppe. Screened Poisson surface reconstruction. ACM Trans. on Graphics, 32(1):29:1–29:13, 2013.
  • J. Kim, J. Kwon Lee, and K. Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 1646–1654, 2016. 1
  • R. Klokov and V. Lempitsky. Escape from cells: Deep kd-networks for the recognition of 3D point cloud models. In Proc. Int. Conf. on Computer Vision, pages 863–872. IEEE, 2017. 2
  • A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012. 2
  • W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang. Deep Laplacian pyramid networks for fast and accurate super-resolution. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2017. 1, 2
  • Y. LeCun and C. Cortes. MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/, 2010. 5
  • C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2017. 1
  • J. Li, B. M. Chen, and G. H. Lee. SO-Net: Self-organizing network for point cloud analysis. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 9397–9406, 2018. 3
  • Y. Li, R. Bu, M. Sun, and B. Chen. PointCNN. arXiv preprint arXiv:1801.07791, 2018. 1
  • D. Lin, Y. Ji, D. Lischinski, D. Cohen-Or, and H. Huang. Multi-scale context intertwining for semantic segmentation. In Proc. Euro. Conf. on Computer Vision, pages 603–619, 2018. 3
  • Y. Lipman, D. Cohen-Or, D. Levin, and H. Tal-Ezer. Parameterization-free projection for geometry reconstruction. ACM Trans. on Graphics (Proc. of SIGGRAPH), 26(3):22:1–22:6, 2007. 2
  • X. Liu, Z. Han, Y.-S. Liu, and M. Zwicker. Point2Sequence: Learning the shape representation of 3D point clouds with an attention-based sequence to sequence network. arXiv preprint arXiv:1811.02565, 2018. 2
  • Master's thesis, University of Utah, Department of Mathematics, 1987. 5
  • M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014. 4
  • C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas. Frustum PointNets for 3D object detection from RGB-D data. arXiv preprint arXiv:1711.08488, 2017. 2
  • C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2017. 1, 2
  • C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems (NIPS), pages 5099–5108, 2017. 1, 3
  • D. Rethage, J. Wald, J. Sturm, N. Navab, and F. Tombari. Fully-convolutional point networks for large-scale point clouds. arXiv preprint arXiv:1808.06840, 2018. 2
  • O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015. 2
  • R. Roveri, A. C. Öztireli, I. Pandele, and M. Gross. PointProNets: Consolidation of point clouds with convolutional neural networks. Computer Graphics Forum, 37(2):87–99, 2018. 2
  • Y. Shen, C. Feng, Y. Yang, and D. Tian. Mining point cloud local structures by kernel correlation and graph pooling. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2018. 3
  • W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 1874–1883, 2016.
  • Sketchfab. https://sketchfab.com. 5
  • ACM Trans. on Graphics (Proc. of SIGGRAPH Asia), 2018. 2
  • T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro. High-resolution image synthesis and semantic manipulation with conditional GANs. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2018. 2
  • Y. Wang, F. Perazzi, B. McWilliams, A. Sorkine-Hornung, O. Sorkine-Hornung, and C. Schroers. A fully progressive approach to single-image super-resolution. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition Workshops, June 2018. 2, 3, 4
  • Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graph CNN for learning on point clouds. arXiv preprint arXiv:1801.07829, 2018. 3
  • S. Wu, H. Huang, M. Gong, M. Zwicker, and D. Cohen-Or. Deep points consolidation. ACM Trans. on Graphics, 34(6):176, 2015. 2
  • Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 1912–1920, 2015. 5
  • Y. Xu, T. Fan, M. Xu, L. Zeng, and Y. Qiao. SpiderCNN: Deep learning on point sets with parameterized convolutional filters. Proc. Euro. Conf. on Computer Vision, 2018. 1
  • Y. Yang, C. Feng, Y. Shen, and D. Tian. FoldingNet: Point cloud auto-encoder via deep grid deformation. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, 2018. 2, 4
  • K. Yin, H. Huang, D. Cohen-Or, and H. Zhang. P2P-Net: bidirectional point displacement net for shape transform. ACM Trans. on Graphics (Proc. of SIGGRAPH), 37(4):152, 2018. 2
  • L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng. EC-Net: an edge-aware point set consolidation network. Proc. Euro. Conf. on Computer Vision, 2018. 1, 2, 5
  • L. Yu, X. Li, C.-W. Fu, D. Cohen-Or, and P.-A. Heng. PU-Net: Point cloud upsampling network. In Proc. IEEE Conf. on Computer Vision & Pattern Recognition, pages 2790–2799, 2018. 1, 2, 3, 4, 5
  • W. Yuan, T. Khot, D. Held, C. Mertz, and M. Hebert. PCN: Point completion network. In Proc. Int. Conf. on 3D Vision, pages 728–737. IEEE, 2018. 2
  • W. Zhang, H. Jiang, Z. Yang, S. Yamakawa, K. Shimada, and L. B. Kara. Data-driven upsampling of point clouds. arXiv preprint arXiv:1807.02740, 2018. 2
  • Y. Zhao, G. Li, W. Xie, W. Jia, H. Min, and X. Liu. GUN: Gradual upsampling network for single image super-resolution. IEEE Access, 6:39363–39374, 2018. 1, 2