Articulated Motion-Aware NeRF for 3D Dynamic Appearance and Geometry Reconstruction by Implicit Motion States

Yahao Shi, Ye Tao, Mingjia Yang, Yun Liu, Li Yi, Bin Zhou

IEEE Transactions on Visualization and Computer Graphics (2024)

Abstract
We propose a self-supervised approach for 3D dynamic reconstruction of articulated motions based on Generative Adversarial Networks and Neural Radiance Fields. Our method reconstructs articulated objects and recovers their continuous motions and attributes from an unordered, discontinuous image set. Notably, we treat motion states as time-independent, recognizing that articulated objects can exhibit identical motions at different times. The key insight of our approach is to use generative adversarial networks to create a continuous implicit motion state space. Initially, we employ a motion network to extract discrete motion states from images as anchors. These anchors are then expanded across the latent space using generative adversarial networks. Subsequently, motion state latent codes are fed into motion-aware neural radiance fields for dynamic appearance and geometry reconstruction. To deduce motion attributes from the continuously generated motions, we adopt a cluster-based strategy. We thoroughly evaluate and validate our method on both synthesized and real data, demonstrating superior fidelity in the appearances, geometries, and motion attributes of articulated objects compared to state-of-the-art methods.
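The pipeline sketched in the abstract (discrete anchors from a motion network → expansion into a continuous latent space → clustering of generated motion states into attributes) can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: the convex-combination sampler is a stand-in for the adversarial generator, the tiny k-means plays the role of the cluster-based strategy, and all array shapes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical anchors: discrete motion-state latent codes that the
# motion network would extract from individual images (8 images, 4-D codes).
anchors = rng.normal(size=(8, 4))

def sample_motion_codes(anchors, n, rng):
    """Stand-in for the GAN generator: produce continuous motion-state
    codes as convex combinations of the anchor codes. The paper trains
    an adversarial generator; interpolation is only an illustration."""
    weights = rng.dirichlet(np.ones(len(anchors)), size=n)  # (n, 8)
    return weights @ anchors                                # (n, 4)

def kmeans(x, k, rng, iters=20):
    """Tiny k-means, standing in for the cluster-based strategy that
    groups generated motion states into motion attributes."""
    centers = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = x[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels, centers

# Expand the discrete anchors into a continuous set of motion states,
# then cluster them to deduce (toy) motion attributes.
codes = sample_motion_codes(anchors, n=200, rng=rng)
labels, centers = kmeans(codes, k=3, rng=rng)
```

In the actual method, each sampled latent code would condition a motion-aware neural radiance field that renders the object's appearance and geometry in that motion state; here the codes are only sampled and clustered to show the data flow.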
Keywords
Articulated motion, image-based rendering, object reconstruction