Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Cited by 27 | Viewed 48
Abstract
Rendering articulated objects while controlling their poses is critical to applications such as virtual reality or animation for movies. Manipulating the pose of an object, however, requires an understanding of its underlying structure, that is, its joints and how they interact with each other. Unfortunately, assuming the structure to be known, as existing methods do, precludes the ability to work on new object categories. We propose to learn both the appearance and the structure of previously unseen articulated objects by observing them move from multiple views, with no joint annotation supervision or information about the structure. We observe that 3D points that are static relative to one another should belong to the same part, and that adjacent parts that move relative to each other must be connected by a joint. To leverage this insight, we model the object parts in 3D as ellipsoids, which allows us to identify joints. We combine this explicit representation with an implicit one that compensates for the approximation introduced. We show that our method works for different structures, from quadrupeds, to single-arm robots, to humans. The code is available at https://github.com/NVlabs/watch-it-move and a version of this manuscript that uses animations is at https://arxiv.org/abs/2112.11347
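The core cue stated in the abstract, that 3D points remaining static relative to one another belong to the same rigid part, can be sketched as follows. This is a toy illustration of the rigidity idea, not the paper's actual method (which fits ellipsoids end-to-end); the function name, threshold, and clustering-by-connected-components strategy are assumptions for the sketch.

```python
import numpy as np

def rigid_part_labels(traj, tol=1e-3):
    """Group tracked 3D points into rigid parts.

    traj: (T, N, 3) array of N point positions over T frames.
    Two points are placed on the same part if their pairwise
    distance stays (nearly) constant across all frames.
    """
    T, N, _ = traj.shape
    # Pairwise distances per frame: shape (T, N, N).
    d = np.linalg.norm(traj[:, :, None, :] - traj[:, None, :, :], axis=-1)
    # Points are rigidly linked if their distance barely varies over time.
    rigid = d.std(axis=0) < tol
    # Connected components over the rigidity graph give part labels.
    labels = -np.ones(N, dtype=int)
    part = 0
    for i in range(N):
        if labels[i] >= 0:
            continue
        stack = [i]
        labels[i] = part
        while stack:
            j = stack.pop()
            for k in np.nonzero(rigid[j])[0]:
                if labels[k] < 0:
                    labels[k] = part
                    stack.append(k)
        part += 1
    return labels
```

For example, two points translating together while two others stay fixed would be grouped into two parts; a joint would then be hypothesized between adjacent parts whose relative motion is nonzero.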
Keywords
Image and video synthesis and generation, Computational photography, Motion and tracking