Learning Implicit Representation for Reconstructing Articulated Objects
CoRR (2024)

Abstract
3D reconstruction of moving articulated objects without additional
information about object structure is a challenging problem. Current methods
overcome such challenges by employing category-specific skeletal models.
Consequently, they do not generalize well to articulated objects in the wild.
We treat an articulated object as an unknown, semi-rigid skeletal structure
surrounded by nonrigid material (e.g., skin). Our method simultaneously
estimates the visible (explicit) representation (3D shapes, colors, camera
parameters) and the implicit skeletal representation, from motion cues in the
object video without 3D supervision. Our implicit representation consists of
four parts. (1) Skeleton, which specifies how semi-rigid parts are connected.
(2) Skinning Weights, which associate each surface vertex with semi-rigid
parts probabilistically. (3) Rigidity Coefficients, which specify the
articulation of the local surface. (4) Time-Varying Transformations, which
specify the skeletal motion and surface deformation parameters. We introduce an
algorithm that uses physical constraints as regularization terms and
iteratively estimates both implicit and explicit representations. Our method is
category-agnostic, eliminating the need for category-specific skeletons, and we
show that it outperforms the state-of-the-art across standard video datasets.
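To make the role of parts (2) and (4) concrete, the sketch below shows how skinning weights and per-part time-varying transformations combine to deform a surface, using standard linear blend skinning as an assumed deformation model; the paper's exact formulation (and any per-vertex rigidity weighting) may differ, and all names here are illustrative.

```python
import numpy as np

def blend_skinning(vertices, weights, rotations, translations):
    """Deform rest-pose vertices by a weighted blend of per-part rigid
    transforms (standard linear blend skinning; illustrative only).

    vertices:     (N, 3) rest-pose surface points
    weights:      (N, B) skinning weights; row n gives vertex n's
                  association probabilities over the B semi-rigid parts
    rotations:    (B, 3, 3) per-part rotations at one time step
    translations: (B, 3) per-part translations at the same time step
    """
    # Apply each part's rigid transform to every vertex -> (B, N, 3)
    per_part = np.einsum('bij,nj->bni', rotations, vertices) \
               + translations[:, None, :]
    # Blend the per-part positions with the skinning weights -> (N, 3)
    return np.einsum('nb,bni->ni', weights, per_part)

# Toy example: 2 vertices, 2 parts, identity rotations.
V = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
W = np.array([[1.0, 0.0], [0.5, 0.5]])            # vertex 1 split 50/50
R = np.stack([np.eye(3), np.eye(3)])
t = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # part 1 moves up by 1
print(blend_skinning(V, W, R, t))
```

A vertex owned entirely by a static part stays put, while a vertex shared between parts moves partway with each, which is how soft vertex-to-part association yields smooth surface deformation around joints.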
Keywords
3D reconstruction from videos, articulated objects