ScanTalk: 3D Talking Heads from Unregistered Scans
arXiv (2024)
Abstract
Speech-driven 3D talking head generation has emerged as a significant area
of interest among researchers, presenting numerous challenges. Existing methods
are constrained to animating faces with fixed topologies, wherein point-wise
correspondence is established and the number and order of points remain
consistent across all identities the model can animate. In this work, we
present ScanTalk, a novel framework capable of animating 3D faces in arbitrary
topologies including scanned data. Our approach relies on the DiffusionNet
architecture to overcome the fixed topology constraint, offering promising
avenues for more flexible and realistic 3D animations. By leveraging the power
of DiffusionNet, ScanTalk not only adapts to diverse facial structures but also
maintains fidelity when dealing with scanned data, thereby enhancing the
authenticity and versatility of generated 3D talking heads. Through
comprehensive comparisons with state-of-the-art methods, we validate the
efficacy of our approach, demonstrating its capacity to generate realistic
talking heads comparable to existing techniques. While all state-of-the-art
methodologies are bound by topological constraints, our primary objective is
to develop a generic method free from such limitations. Code for reproducing
our results and the pre-trained model will be made available.
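The key property the abstract claims, animating meshes of arbitrary topology, comes from operating on vertices with shared, topology-agnostic weights rather than a fixed-size decoder. The following is a minimal illustrative sketch of that idea only: a shared per-vertex network driven by an audio code, applied unchanged to meshes with different vertex counts. It is not the actual DiffusionNet or ScanTalk architecture, and the feature sizes (`16`-unit hidden layer, 4-dim audio code) are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weights: a tiny per-vertex MLP mapping (xyz + audio code) -> displacement.
# Hypothetical sizes: 3 coordinates + 4-dim audio code -> 16 hidden -> 3-dim offset.
W1 = rng.standard_normal((3 + 4, 16)) * 0.1
W2 = rng.standard_normal((16, 3)) * 0.1

def animate(vertices, audio_code):
    """Predict displaced vertices; works for any number of vertices,
    since the same weights are applied independently at each vertex."""
    feats = np.concatenate(
        [vertices, np.repeat(audio_code[None, :], len(vertices), axis=0)], axis=1
    )
    hidden = np.tanh(feats @ W1)
    return vertices + hidden @ W2  # output has the same shape as the input mesh

# Two "identities" with different topologies (different vertex counts):
mesh_a = rng.standard_normal((100, 3))   # e.g. a registered template
mesh_b = rng.standard_normal((2537, 3))  # e.g. a raw scan
audio = rng.standard_normal(4)

out_a = animate(mesh_a, audio)
out_b = animate(mesh_b, audio)
```

The same weights animate both meshes, so nothing in the model depends on a fixed number or ordering of points; DiffusionNet additionally incorporates surface geometry (diffusion over the mesh), which this per-vertex sketch omits.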