DREAM-Talk: Diffusion-based Realistic Emotional Audio-driven Method for Single Image Talking Face Generation
CoRR (2023)
Abstract
The generation of emotional talking faces from a single portrait image remains a significant challenge. Achieving expressive emotional talking and accurate lip-sync simultaneously is particularly difficult, as expressiveness is often compromised for the sake of lip-sync accuracy. Moreover, the LSTM networks widely adopted by prior works often fail to capture the subtleties and variations of emotional expressions. To address these challenges, we introduce DREAM-Talk, a two-stage diffusion-based audio-driven framework tailored for generating diverse expressions and accurate lip-sync concurrently. In the first stage, we propose EmoDiff, a novel diffusion module that generates diverse, highly dynamic emotional expressions and head poses in accordance with the audio and a reference emotion style. Given the strong correlation between lip motion and audio, we then refine the dynamics for enhanced lip-sync accuracy using audio features and the emotion style. Finally, we deploy a video-to-video rendering module to transfer the expressions and lip motions from our proxy 3D avatar to an arbitrary portrait. Both quantitatively and qualitatively, DREAM-Talk outperforms state-of-the-art methods in terms of expressiveness, lip-sync accuracy, and perceptual quality.
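
To make the two-stage design concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a diffusion module producing expression and head-pose dynamics conditioned on audio and an emotion-style embedding, followed by an audio-driven lip refinement step. All module names (`EmoDiff`, `LipRefiner`), tensor shapes, parameter dimensions, and the plain DDPM sampler are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the two-stage pipeline; shapes and names are assumed.
import torch
import torch.nn as nn

class EmoDiff(nn.Module):
    """Stage 1 (assumed): denoiser predicting noise on a sequence of
    expression + head-pose parameters, conditioned on audio and emotion style."""
    def __init__(self, audio_dim=512, style_dim=128, param_dim=70,
                 hidden=256, num_steps=1000):
        super().__init__()
        self.in_proj = nn.Linear(param_dim, hidden)
        self.cond_proj = nn.Linear(audio_dim + style_dim, hidden)
        self.t_emb = nn.Embedding(num_steps, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8,
                                           batch_first=True)
        self.denoiser = nn.TransformerEncoder(layer, num_layers=4)
        self.out_proj = nn.Linear(hidden, param_dim)

    def forward(self, noisy_params, audio_feats, style_emb, t):
        # Broadcast the per-clip emotion-style embedding to every frame.
        style = style_emb.unsqueeze(1).expand(-1, audio_feats.size(1), -1)
        cond = self.cond_proj(torch.cat([audio_feats, style], dim=-1))
        h = self.in_proj(noisy_params) + cond + self.t_emb(t).unsqueeze(1)
        return self.out_proj(self.denoiser(h))  # predicted noise

class LipRefiner(nn.Module):
    """Stage 2 (assumed): residual correction of the dynamics from audio and
    style, exploiting the tight audio/lip-motion correlation noted above."""
    def __init__(self, audio_dim=512, style_dim=128, param_dim=70, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(param_dim + audio_dim + style_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, param_dim),
        )

    def forward(self, params, audio_feats, style_emb):
        style = style_emb.unsqueeze(1).expand(-1, audio_feats.size(1), -1)
        residual = self.net(torch.cat([params, audio_feats, style], dim=-1))
        return params + residual  # refined dynamics, passed to the renderer

@torch.no_grad()
def sample_dynamics(model, audio_feats, style_emb, param_dim=70, num_steps=50):
    """Plain DDPM ancestral sampling over the parameter sequence (assumed)."""
    b, n = audio_feats.shape[:2]
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(b, n, param_dim)  # start from Gaussian noise
    for t in reversed(range(num_steps)):
        t_batch = torch.full((b,), t, dtype=torch.long)
        eps = model(x, audio_feats, style_emb, t_batch)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Usage with dummy inputs: one clip of 100 frames.
audio = torch.randn(1, 100, 512)   # e.g. features from a pretrained speech model
style = torch.randn(1, 128)        # emotion-style embedding from a reference clip
dynamics = sample_dynamics(EmoDiff(), audio, style)
refined = LipRefiner()(dynamics, audio, style)
print(refined.shape)  # torch.Size([1, 100, 70])
```

In this reading, the video-to-video rendering module would consume `refined` as the driving signal for the proxy 3D avatar before transferring the motion to an arbitrary portrait; that rendering stage is omitted here.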