Multiface: A Dataset for Neural Face Rendering

Cheng-hsin Wuu, Ningyuan Zheng, Scott Ardisson, Rohan Bali, Danielle Belko, Eric Brockmeyer, Lucas Evans, Timothy Godisart, Hyowon Ha, Alexander Hypes, Taylor Koska, Steven Krenn, Stephen Lombardi, Xiaomin Luo, Kevyn McPhail, Laura Millerschoen, Michal Perdoch, Mark Pitts, Alexander Richard, Jason Saragih, Junko Saragih, Takaaki Shiratori, Tomas Simon, Matt Stewart, Autumn Trimble, Xinshuo Weng, David Whitewolf, Chenglei Wu, Shoou-I Yu, Yaser Sheikh


Photorealistic avatars of human faces have come a long way in recent years, yet research in this area is limited by a lack of publicly available, high-quality datasets covering both dense multi-view camera captures and rich facial expressions of the captured subjects. In this work, we present Multiface, a new multi-view, high-resolution human face dataset collected from 13 identities at Reality Labs Research for neural face rendering. We introduce Mugsy, a large-scale multi-camera apparatus that captures high-resolution synchronized videos of a facial performance. The goal of Multiface is to close the gap in accessibility to high-quality data in the academic community and to enable research in VR telepresence. Along with the release of the dataset, we conduct ablation studies on the influence of different model architectures on the model's ability to interpolate to novel viewpoints and expressions. With a conditional VAE model serving as our baseline, we found that adding spatial bias, a texture warp field, and residual connections improves performance on novel view synthesis. Our code and data are available at:
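The conditional-VAE baseline described above can be sketched as follows. This is a minimal illustrative forward pass, not the paper's actual architecture: all dimensions, the single dense layer per stage, and the way the viewpoint vector is concatenated to the latent code are assumptions for demonstration, and the weights are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
D_IN, D_Z, D_VIEW, D_OUT = 64, 8, 3, 64

# Random weights stand in for trained encoder/decoder parameters.
We, be = rng.normal(size=(D_IN, 2 * D_Z)), np.zeros(2 * D_Z)
Wd, bd = rng.normal(size=(D_Z + D_VIEW, D_OUT)), np.zeros(D_OUT)

def encode(x):
    # Encoder maps the input to the mean and log-variance of q(z|x).
    h = x @ We + be
    return h[:D_Z], h[D_Z:]

def reparameterize(mu, logvar):
    # Sample z = mu + sigma * eps so gradients can flow through mu, sigma.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, view):
    # Viewpoint conditioning: the camera direction is concatenated with
    # the latent code so the decoder can produce view-dependent output.
    return np.tanh(np.concatenate([z, view]) @ Wd + bd)

x = rng.normal(size=D_IN)          # stand-in for an encoded expression
view = np.array([0.0, 0.0, 1.0])   # stand-in camera direction
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
out = decode(z, view)
print(out.shape)  # (64,)
```

At render time, sweeping `view` while holding `z` fixed is what "interpolation to novel viewpoints" means for such a model; the ablations in the paper vary the decoder architecture (spatial bias, texture warp field, residual connections) rather than this basic conditioning scheme.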