Towards Variable and Coordinated Holistic Co-Speech Motion Generation
CVPR 2024
Abstract
This paper addresses the problem of generating lifelike holistic co-speech
motions for 3D avatars, focusing on two key aspects: variability and
coordination. Variability allows the avatar to exhibit a wide range of motions
even with similar speech content, while coordination ensures a harmonious
alignment among facial expressions, hand gestures, and body poses. We aim to
achieve both with ProbTalk, a unified probabilistic framework designed to
jointly model facial, hand, and body movements in speech. ProbTalk builds on
the variational autoencoder (VAE) architecture and incorporates three core
designs. First, we introduce product quantization (PQ) to the VAE, which
enriches the representation of complex holistic motion. Second, we devise a
novel non-autoregressive model that embeds 2D positional encoding into the
product-quantized representation, thereby preserving essential structural
information of the PQ codes. Last, we employ a secondary stage to refine the
preliminary prediction, further sharpening the high-frequency details. Coupling
these three designs enables ProbTalk to generate natural and diverse holistic
co-speech motions, outperforming several state-of-the-art methods in
qualitative and quantitative evaluations, particularly in terms of realism. Our
code and model will be released for research purposes at
https://feifeifeiliu.github.io/probtalk/.