Robust and Accurate 3D Self-Portraits in Seconds

IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)

Abstract
In this paper, we propose an efficient method for robust and accurate 3D self-portraits using a single RGBD camera. Our method generates detailed and realistic 3D self-portraits in seconds and can handle subjects wearing extremely loose clothes. To achieve highly efficient and robust reconstruction, we propose PIFusion, which combines learning-based 3D recovery with volumetric non-rigid fusion to generate accurate sparse partial scans of the subject. Meanwhile, a non-rigid volumetric deformation method is proposed to continuously refine the learned shape prior. Moreover, a lightweight bundle adjustment algorithm is proposed to guarantee that all the partial scans not only "loop" with each other but also remain consistent with the selected live key observations. Finally, to generate even more realistic portraits, we propose non-rigid texture optimization to improve the texture quality. Additionally, we contribute a benchmark for single-view 3D self-portrait reconstruction: an evaluation dataset containing 10 single-view RGBD sequences of a self-rotating performer wearing various clothes, together with the corresponding ground-truth 3D model for the first frame of each sequence. The results and experiments on this dataset show that the proposed method outperforms state-of-the-art methods in accuracy, efficiency, and generality.
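The volumetric fusion underlying PIFusion builds on the standard truncated signed distance function (TSDF) integration scheme, in which each depth map is fused into a voxel grid by a running weighted average. The following is a minimal sketch of that classical update, not the paper's actual pipeline; the function name, the identity world-to-camera pose, and the per-frame weight of 1 are illustrative assumptions:

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, voxel_size, origin, trunc=0.1):
    """Fuse one depth map into a TSDF voxel grid via a weighted-average update.

    Assumes the camera frame coincides with the world frame (identity pose).
    tsdf, weights : (X, Y, Z) arrays holding the current field and weights
    depth         : (H, W) metric depth map
    K             : 3x3 pinhole intrinsics
    origin        : world-space position of voxel (0, 0, 0)
    """
    X, Y, Z = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    # World-space centers of all voxels.
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size + origin

    # Project voxel centers into the depth image with the pinhole model.
    uvw = pts @ K.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    H, W = depth.shape
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    # Signed distance of each voxel to the observed surface along the view ray
    # (approximated by the depth difference), truncated to [-1, 1].
    d = np.zeros(len(pts))
    d[valid] = depth[v[valid], u[valid]]
    sdf = d - pts[:, 2]
    valid &= (d > 0) & (sdf > -trunc)
    tsdf_obs = np.clip(sdf / trunc, -1.0, 1.0)

    # Running weighted average; each new observation carries weight 1.
    t = tsdf.reshape(-1).copy()
    w = weights.reshape(-1).copy()
    t[valid] = (t[valid] * w[valid] + tsdf_obs[valid]) / (w[valid] + 1.0)
    w[valid] += 1.0
    return t.reshape(tsdf.shape), w.reshape(weights.shape)
```

The paper's non-rigid variant additionally warps each voxel by a deformation field before projection, so that observations of a moving subject are accumulated in a canonical reference volume; the averaging step itself is unchanged.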
Keywords
3D human reconstruction, RGBD sensor, volumetric fusion, bundle adjustment, texture optimization