Human Gaussian Splatting: Real-time Rendering of Animatable Avatars
CVPR 2024
Abstract
This work addresses the problem of real-time rendering of photorealistic
human body avatars learned from multi-view videos. While the classical
approaches to model and render virtual humans generally use a textured mesh,
recent research has developed neural body representations that achieve
impressive visual quality. However, these models are difficult to render in
real time, and their quality degrades when the character is animated with body
poses different from those seen during training. We propose the first animatable
human model based on 3D Gaussian Splatting, which has recently emerged as a very
efficient alternative to neural radiance fields. Our body is represented by a
set of Gaussian primitives in a canonical space which are deformed in a
coarse-to-fine approach that combines forward skinning and local non-rigid refinement.
We describe how to learn our Human Gaussian Splatting (HuGS) model in an
end-to-end fashion from multi-view observations, and evaluate it against the
state-of-the-art approaches for novel-pose synthesis of clothed bodies. Our
method achieves a PSNR 1.5 dB higher than the state of the art on the THuman4
dataset while rendering at 20 fps or more.
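The coarse stage of the deformation described above is forward skinning: each canonical Gaussian center is transported to the posed space by a weighted blend of per-joint rigid transforms (linear blend skinning), before any local non-rigid refinement is applied. The sketch below is a minimal, hypothetical illustration of that step on Gaussian means only; the function name, shapes, and toy pose are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of the coarse deformation step: linear blend
# skinning (forward skinning) applied to canonical Gaussian centers.
# Shapes and names are illustrative, not taken from the paper's code.
import numpy as np

def forward_skin(centers, weights, rotations, translations):
    """Deform canonical Gaussian centers to a posed space via LBS.

    centers:      (N, 3)    canonical Gaussian means
    weights:      (N, J)    per-Gaussian skinning weights (rows sum to 1)
    rotations:    (J, 3, 3) per-joint rotation matrices
    translations: (J, 3)    per-joint translations
    """
    # Transform every center by every joint transform: (J, N, 3)
    per_joint = (np.einsum('jab,nb->jna', rotations, centers)
                 + translations[:, None, :])
    # Blend the per-joint results with the skinning weights: (N, 3)
    return np.einsum('nj,jna->na', weights, per_joint)

# Toy usage: one Gaussian, two joints (identity and a 90-degree
# rotation about z), blended with equal weights.
centers = np.array([[1.0, 0.0, 0.0]])
weights = np.array([[0.5, 0.5]])
R = np.stack([np.eye(3),
              np.array([[0.0, -1.0, 0.0],
                        [1.0,  0.0, 0.0],
                        [0.0,  0.0, 1.0]])])
t = np.zeros((2, 3))
posed = forward_skin(centers, weights, R, t)  # blend of (1,0,0) and (0,1,0)
```

In a full system the same blended transform would also rotate each Gaussian's covariance, and the non-rigid refinement would then add a learned, pose-dependent offset on top of this coarse result.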