Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
We propose a new method for reconstructing controllable implicit 3D human models from sparse multi-view RGB videos. Our method defines the neural scene representation on the mesh surface points and signed distances from the surface of a human body mesh. We identify an indistinguishability issue that arises when a point in 3D space is mapped to its nearest surface point on a mesh for learning surface-aligned neural scene representation. To address this issue, we propose projecting a point onto a mesh surface using barycentric interpolation with modified vertex normals. Experiments with the ZJU-MoCap and Human3.6M datasets show that our approach achieves higher quality in novel-view and novel-pose synthesis than existing methods. We also demonstrate that our method easily supports the control of body shape and clothes. Project page: https://pfnet-research.github.io/surface-aligned-nerf/
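To make the surface-aligned representation concrete, the sketch below illustrates the basic coordinate mapping the abstract describes: a 3D query point is expressed as barycentric coordinates on a mesh triangle plus a signed distance along an interpolated vertex normal. This is a simplified, single-triangle illustration under stated assumptions, not the paper's implementation; the function names are hypothetical, and the paper additionally searches the nearest face on the full body mesh and uses its modified ("dispersed") vertex normals to resolve the indistinguishability issue, which this plain plane projection does not do.

```python
# Minimal sketch (NumPy) of a surface-aligned coordinate for one triangle:
# barycentric coordinates of the query point's plane projection, plus a
# signed distance along the interpolated vertex normal.
import numpy as np


def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of p w.r.t. triangle (a, b, c).
    The out-of-plane component of p cancels in the dot products,
    so this is equivalent to projecting p onto the triangle's plane first."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])


def surface_aligned_coords(p, verts, vert_normals):
    """Return (barycentric coords, surface point, signed distance) for query p.

    verts:        (3, 3) triangle vertices
    vert_normals: (3, 3) unit vertex normals (the paper uses modified normals;
                  here ordinary ones stand in)
    Assumes p's plane projection falls inside the triangle; a full method would
    first select the relevant face on the body mesh and handle boundary cases.
    """
    a, b, c = verts
    bary = barycentric_coords(p, a, b, c)
    surf_pt = bary @ verts                  # barycentric interpolation of vertices
    n = bary @ vert_normals                 # interpolated vertex normal
    n = n / np.linalg.norm(n)
    signed_dist = float((p - surf_pt) @ n)  # positive outside, negative inside
    return bary, surf_pt, signed_dist


# Toy usage: an upward-facing triangle and a point 0.5 above it.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
normals = np.tile(np.array([0.0, 0.0, 1.0]), (3, 1))
bary, surf_pt, h = surface_aligned_coords(np.array([0.2, 0.3, 0.5]), verts, normals)
print(bary, surf_pt, h)  # -> [0.5 0.2 0.3] [0.2 0.3 0. ] 0.5
```

In the full pipeline, the pair (surface point, signed distance) is what conditions the neural scene representation, so the quality of this mapping near mesh edges is exactly where the modified vertex normals matter.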
Keywords
3D from multi-view and sensors, Image and video synthesis and generation, Machine learning, Motion and tracking, Pose estimation and tracking