PKU-DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling
CVPR 2024
Abstract
High-quality human reconstruction and photo-realistic rendering of dynamic
scenes are long-standing problems in computer vision and graphics. Despite
considerable effort invested in developing various capture systems and
reconstruction algorithms, recent methods still struggle with loose or
oversized clothing and overly complex poses. This is due in part to the
difficulty of acquiring high-quality human datasets. To facilitate
development in these fields, we present PKU-DyMVHumans, a versatile
human-centric dataset for high-fidelity reconstruction and rendering of
dynamic human scenarios from dense multi-view videos. It comprises 8.2
million frames captured by more than 56 synchronized cameras across diverse
scenarios. The sequences cover 32 human subjects across 45 different
scenarios, each with highly detailed appearance and realistic human motion.
Inspired by recent advances in neural radiance field (NeRF)-based scene
representations, we provide an off-the-shelf framework that makes it easy to
run and benchmark state-of-the-art NeRF-based implementations on the
PKU-DyMVHumans dataset, paving the way for applications such as fine-grained
foreground/background decomposition, high-quality human reconstruction, and
photo-realistic novel view synthesis of dynamic scenes. Extensive studies on
the benchmark reveal new observations and challenges that emerge when
working with such high-fidelity dynamic data. The dataset is available at:
https://pku-dymvhumans.github.io.