Neural Cages for Detail-Preserving 3D Deformations

CVPR, pp. 72-80, 2019.

Keywords:
non-rigid registration, 3D shape, 3D model, deformation transfer, 3D mesh model

Abstract:

We propose a novel learnable representation for detail-preserving shape deformation. The goal of our method is to warp a source shape to match the general structure of a target shape, while preserving the surface details of the source. Our method extends a traditional cage-based deformation technique, where the source shape is enclosed ...

Introduction
  • Deformation of 3D shapes is a ubiquitous task, arising in many vision and graphics applications.
  • The second objective is adhering to quality metrics, such as distortion minimization and the preservation of local geometric features (e.g., the features of a human face).
  • These two objectives are contradictory, since a perfect alignment of a deformed source shape to the target precludes preserving the original details of the source.
Highlights
  • Deformation of 3D shapes is a ubiquitous task, arising in many vision and graphics applications
  • We use cage-based deformations as our representation, where the source shape is enclosed by a coarse cage mesh, and all surface points are written as linear combinations of the cage vertices, i.e., generalized barycentric coordinates
  • Effect of the negative mean value coordinates penalty, LMVC: in Fig. 12 we show the effect of penalizing negative mean value coordinates.
  • We show that classical cage-based deformation provides a low-dimensional, detail-preserving deformation space.
  • We implement cage weight computation and cage-based deformation as differentiable network layers, which could be used in other architectures (a minimal sketch follows this list).
  • Our method succeeds in generating feature-preserving deformations for synthesizing shape variations and deformation transfer, and better preserves salient geometric features than competing methods.
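The core representation can be captured in a few lines. Below is a minimal sketch of such a differentiable cage-deformation layer (not the authors' implementation), assuming PyTorch and precomputed generalized barycentric weights for the source points; the variable names are illustrative.

```python
# Minimal sketch of a differentiable cage-deformation layer (illustrative,
# not the authors' code). Assumes PyTorch; `weights` holds precomputed
# generalized barycentric coordinates of the source points w.r.t. the cage.
import torch


def cage_deform(weights: torch.Tensor, deformed_cage: torch.Tensor) -> torch.Tensor:
    """Deform source points by moving the cage.

    weights:        (N, P) barycentric weights, each row summing to 1
    deformed_cage:  (P, 3) predicted positions of the cage vertices
    returns:        (N, 3) deformed source points
    """
    # Every deformed point is a fixed linear combination of the cage vertices,
    # so gradients from any downstream loss reach the cage via one matmul.
    return weights @ deformed_cage


# Usage: translate the cage and every enclosed point follows smoothly.
weights = torch.tensor([[0.50, 0.25, 0.25],
                        [0.20, 0.40, 0.40]])   # 2 source points, 3 cage vertices
cage = torch.tensor([[0., 0., 0.],
                     [1., 0., 0.],
                     [0., 1., 0.]], requires_grad=True)
deformed = cage_deform(weights, cage + 0.1)    # shift the whole cage
deformed.sum().backward()                      # gradients reach the cage vertices
```

Since the weights are computed once from the source shape and its cage, the network only has to predict a small number of cage vertex offsets.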
Methods
  • Research on detail-preserving deformations in the geometry processing community spans several decades and has contributed various formulations and optimization techniques [24]
  • These methods usually rely on a sparse set of control points whose transformations are interpolated to all remaining points of the shape; the challenge lies in defining this interpolation in a way that preserves details.
  • Many designs have been proposed for these coordinate functions such that shape structure and details are preserved under interpolation [2, 14, 15, 18, 21, 27, 31]; their common form is sketched below.
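The shared structure behind these schemes can be written generically as follows (a sketch of the standard form; mean value, harmonic, and Green coordinates differ in how the weights are defined, and Green coordinates additionally involve cage face terms):

```latex
% Generic control-point interpolation with generalized barycentric coordinates.
\[
  p' \;=\; \sum_{j=1}^{P} \phi_j(p)\, c'_j,
  \qquad
  \sum_{j=1}^{P} \phi_j(p) = 1,
  \qquad
  \sum_{j=1}^{P} \phi_j(p)\, c_j = p
\]
```

Here $c_j$ and $c'_j$ denote the rest and deformed control (cage) vertices; the weights $\phi_j(p)$ depend only on the rest configuration, so surface details are carried along by the smooth, low-frequency motion of the control mesh.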
Results
  • The authors study the effects and necessity of the most relevant components of their method.
  • The authors train the architecture on 300 vase shapes from COSEG [30], while varying the weight αMVC ∈ {0, 1, 10}
  • Increasing this term brings the cages closer to the shapes’ convex hulls, leading to more conservative deformations.
  • Quantitative results in Table 1a suggest that increasing the weight αMVC favors shape preservation over alignment accuracy.
  • Eliminating this term hurts convergence (a hedged sketch of the loss weighting follows this list).
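As a hedged illustration of how such a penalty can be traded off against alignment (function and variable names here are hypothetical, not the paper's code):

```python
# Hypothetical sketch of weighting a negative mean-value-coordinate penalty
# against an alignment loss; names are illustrative, not the paper's code.
import torch


def negative_mvc_penalty(mvc_weights: torch.Tensor) -> torch.Tensor:
    """Mean magnitude of the negative entries of an (N, P) MVC weight matrix."""
    return torch.relu(-mvc_weights).mean()


def combine_losses(l_align: torch.Tensor, mvc_weights: torch.Tensor,
                   alpha_mvc: float = 1.0) -> torch.Tensor:
    # Larger alpha_mvc favors shape preservation over alignment accuracy;
    # alpha_mvc = 0 disables the penalty entirely.
    return l_align + alpha_mvc * negative_mvc_penalty(mvc_weights)
```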
Conclusion
  • The authors show that classical cage-based deformation provides a low-dimensional, detail-preserving deformation space.
  • The authors' method succeeds in generating feature-preserving deformations for synthesizing shape variations and deformation transfer, and better preserves salient geometric features than competing methods.
  • A limitation of the approach is that the authors focus on the deformation quality produced by the predicted cages: the cage geometry itself is not designed to be comparable to professionally created cages used by 3D artists.
  • For certain types of deformations, other parameterizations, such as skeleton-based deformation for articulations, might be a more natural choice; the idea presented in this paper can be adopted for such cases as well.
Tables
  • Table 1: We evaluate the effect of different losses (LMVC, Lshape) and components (Nc) of our pipeline with respect to chamfer distance (CD, scaled by 10^2) and cotangent Laplacian (scaled by 10^3); a minimal sketch of the CD metric follows this list.
  • Table 2: The deformation results for the chair category.
  • Table 3: The deformation results for the table category.
  • Table 4: The deformation results for the car category.
  • Table 5: Additional results for humanoid deformation and deformation transfer to new characters. The target poses are shown on the left in green; the template source (fixed during training) and the novel sources are shown on the top in brown. We show two training results, one using a rest-pose template source and one using a T-pose template source, shown on the left and right halves of the table, respectively.
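For reference, a minimal sketch of the chamfer-distance metric reported in the CD columns (an illustration of the metric, not the authors' evaluation script; some implementations use squared distances instead). The cotangent-Laplacian measure requires mesh connectivity and is omitted here.

```python
# Minimal chamfer-distance sketch for the CD columns (illustrative only).
import torch


def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)                      # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


# Usage: compare a sampling of the deformed source against the target sampling.
deformed = torch.rand(1024, 3)
target = torch.rand(1024, 3)
print(chamfer_distance(deformed, target) * 1e2)  # tables report CD scaled by 10^2
```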
Related work
  • We now review prior work on learning deformations, traditional methods for shape deformation, and applications.

    Learning 3D deformations. Many recent works in learning 3D geometry have focused on generative tasks, such as synthesis [8, 20] and editing [36] of unstructured geometric data. These tasks are especially challenging if one desires high-fidelity content with intricate details. A common approach to producing intricate shapes is to deform an existing generic [28] or category-specific [7] template. Early approaches represented deformations as a single vector of vertex positions of a template [26], which limited their output to shapes constructible by deforming that specific template, and also made the architecture sensitive to the template tessellation. An alternative is to predict a free-form deformation field over 3D voxels [9, 13, 34]; however, this makes the deformation's resolution dependent on the voxel resolution, and thus limits the ability to adapt to specific shape categories and source shapes.
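To make the resolution limitation of voxel-based deformation fields concrete, here is a hedged sketch of that family of approaches (hypothetical grid size and tensor shapes, not any cited paper's implementation):

```python
# Hedged sketch of a free-form deformation field stored on a voxel grid,
# illustrating why the grid resolution caps the deformation detail.
import torch
import torch.nn.functional as F

grid_res = 8                                                # coarse 8^3 offset grid
offsets = torch.zeros(1, 3, grid_res, grid_res, grid_res)   # per-voxel displacement

points = torch.rand(1, 2048, 3) * 2 - 1                     # query points in [-1, 1]^3
# grid_sample expects sampling locations shaped (N, D_out, H_out, W_out, 3).
coords = points.view(1, -1, 1, 1, 3)
sampled = F.grid_sample(offsets, coords, align_corners=True)  # (1, 3, 2048, 1, 1)
displaced = points + sampled.view(1, 3, -1).permute(0, 2, 1)
# Any displacement detail finer than the 8^3 grid is simply not representable.
```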
Funding
  • This work was supported in part by gifts from Adobe, Facebook and Snap
References
  • Federica Bogo, Javier Romero, Matthew Loper, and Michael J Black. FAUST: Dataset and evaluation for 3D mesh registration. In CVPR, pages 3794–3801, 2014.
  • Stephane Calderon and Tamy Boubekeur. Bounding proxies for shape approximation. ACM Trans. Graph., 36(4):57, 2017.
  • Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3D model repository. Technical Report arXiv:1512.03012 [cs.GR], 2015.
  • Lin Gao, Jie Yang, Yi-Ling Qiao, Yu-Kun Lai, Paul L Rosin, Weiwei Xu, and Shihong Xia. Automatic unpaired shape deformation transfer. In SIGGRAPH Asia, 2018.
  • Lin Gao, Jie Yang, Tong Wu, Yu-Jie Yuan, Hongbo Fu, Yu-Kun Lai, and Hao (Richard) Zhang. SDM-NET: Deep generative network for structured deformable mesh. ACM Trans. Graph., 38(6):243:1–243:15, 2019.
  • Thibault Groueix, Matthew Fisher, Vladimir Kim, Bryan Russell, and Mathieu Aubry. Unsupervised cycle-consistent deformation for shape matching. In SGP, 2019.
  • Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. 3D-CODED: 3D correspondences by deep deformation. In ECCV, 2018.
  • Thibault Groueix, Matthew Fisher, Vladimir G Kim, Bryan C Russell, and Mathieu Aubry. AtlasNet: A papier-mâché approach to learning 3D surface generation. In CVPR, 2018.
  • Rana Hanocka, Noa Fish, Zhenhua Wang, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or. ALIGNet: partial-shape agnostic alignment via unsupervised learning. ACM Trans. Graph., 38(1):1, 2018.
  • Haibin Huang, Evangelos Kalogerakis, Siddhartha Chaudhuri, Duygu Ceylan, Vladimir G Kim, and Ersin Yumer. Learning local shape descriptors from part correspondences with multiview convolutional networks. ACM Trans. Graph., 37(1), 2018.
  • Qixing Huang, Hai Wang, and Vladlen Koltun. Single-view reconstruction via joint analysis of image and shape collections. ACM Trans. Graph., 34(4):87:1–87:10, 2015.
  • Qi-Xing Huang, Bart Adams, Martin Wicke, and Leonidas J. Guibas. Non-rigid registration under isometric deformations. In SGP, 2008.
  • Dominic Jack, Jhony K Pontes, Sridha Sridharan, Clinton Fookes, Sareh Shirazi, Frederic Maire, and Anders Eriksson. Learning free-form deformations for 3D object reconstruction. In ACCV, 2018.
  • Pushkar Joshi, Mark Meyer, Tony DeRose, Brian Green, and Tom Sanocki. Harmonic coordinates for character articulation. ACM Trans. Graph., 26(3), 2007.
  • Tao Ju, Scott Schaefer, and Joe Warren. Mean value coordinates for closed triangular meshes. ACM Trans. Graph., 24(3):561–566, 2005.
  • Hao Li, Linjie Luo, Daniel Vlasic, Pieter Peers, Jovan Popovic, Mark Pauly, and Szymon Rusinkiewicz. Temporally coherent completion of dynamic shapes. ACM Trans. Graph., 31(1), 2012.
  • Hao Li, Robert W. Sumner, and Mark Pauly. Global correspondence optimization for non-rigid registration of depth scans. In SGP, 2008.
  • Yaron Lipman, David Levin, and Daniel Cohen-Or. Green coordinates. ACM Trans. Graph., 27(3), 2008.
  • Shichen Liu, Tianye Li, Weikai Chen, and Hao Li. Soft Rasterizer: A differentiable renderer for image-based 3D reasoning. In ICCV, 2019.
  • Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3D reconstruction in function space. In CVPR, 2019.
  • Leonardo Sacht, Etienne Vouga, and Alec Jacobson. Nested cages. ACM Trans. Graph., 34(6):170:1–170:14, 2015.
  • Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli. SinGAN: Learning a generative model from a single natural image. In ICCV, 2019.
  • Olga Sorkine and Marc Alexa. As-rigid-as-possible surface modeling. In SGP, 2007.
  • Olga Sorkine and Mario Botsch. Interactive shape modeling and deformation. In EUROGRAPHICS Tutorials, 2009.
  • Robert W. Sumner and Jovan Popovic. Deformation transfer for triangle meshes. ACM Trans. Graph., 23(3):399–405, 2004.
  • Qingyang Tan, Lin Gao, Yu-Kun Lai, and Shihong Xia. Variational autoencoders for deforming 3D mesh models. In CVPR, 2018.
  • Jean-Marc Thiery, Julien Tierny, and Tamy Boubekeur. CageR: Cage-based reverse engineering of animated 3D shapes. Computer Graphics Forum, 31(8):2303–2316, 2012.
  • Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2Mesh: Generating 3D mesh models from single RGB images. In ECCV, 2018.
  • Weiyue Wang, Duygu Ceylan, Radomir Mech, and Ulrich Neumann. 3DN: 3D deformation network. In CVPR, 2019.
  • Zizhao Wu, Ruyang Shou, Yunhai Wang, and Xinguo Liu. Interactive shape co-segmentation via label propagation. Computers & Graphics, 38:248–254, 2014.
  • Chuhua Xian, Hongwei Lin, and Shuming Gao. Automatic cage generation by improved OBBs for mesh deformation. The Visual Computer, 28(1):21–33, 2012.
  • Kai Xu, Honghua Li, Hao Zhang, Daniel Cohen-Or, Yueshan Xiong, and Zhi-Quan Cheng. Style-content separation by anisotropic part scales. ACM Trans. Graph., 29(6):184:1– 184:10, 2010.
  • Kangxue Yin, Zhiqin Chen, Hui Huang, Daniel Cohen-Or, and Hao Zhang. LOGAN: Unpaired shape transform in latent overcomplete space. ACM Trans. Graph., 38(6):198:1– 198:13, 2019.
  • M. E. Yumer and N. J. Mitra. Learning semantic deformation flows with 3D convolutional networks. In ECCV, 2016.
  • Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
  • Chenyang Zhu, Kai Xu, Siddhartha Chaudhuri, Renjiao Yi, and Hao Zhang. SCORES: Shape composition with recursive substructure priors. ACM Trans. Graph., 37(6), 2018.