Wav2Lip-HR: Synthesising clear high-resolution talking head in the wild

Chao Liang, Qinghua Wang, Yunlin Chen, Minjie Tang

COMPUTER ANIMATION AND VIRTUAL WORLDS (2024)

Abstract
Talking head generation aims to synthesize a photo-realistic speaking video with accurate lip motion. While this field has attracted growing attention in recent audio-visual research, most existing methods do not improve lip synchronization and visual quality simultaneously. In this paper, we propose Wav2Lip-HR, a neural audio-driven method for high-resolution talking head generation. With our technique, all that is required to generate a clear, high-resolution lip-synced talking video is an image/video of the target face and an audio clip of any speech. The primary benefit of our method is that it generates clear high-resolution videos with sufficient facial detail, rather than videos that are merely large-sized but lack clarity. We first analyze the key factors that limit the clarity of generated videos and then propose several solutions to address them, including data augmentation, model structure improvements, and a more effective loss function. Finally, we employ several efficient metrics to evaluate the clarity of images generated by our approach, as well as several widely used metrics to evaluate lip-sync performance. Extensive experiments demonstrate that our method outperforms existing schemes on both visual quality and lip synchronization. Our proposed Wav2Lip-HR produces clear, high-resolution talking videos in real time: all that is required is a portrait and a speech clip, and the generated video is fully matched to the input audio.
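The abstract says the authors evaluate image clarity with "several efficient metrics" but does not name them here. As a hedged illustration only, PSNR (peak signal-to-noise ratio) is one standard full-reference clarity metric used in talking-head evaluation; the function below is a generic sketch, not the paper's actual evaluation code:

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference and a generated image.

    Higher values indicate the generated frame is closer to the reference.
    Note: this is a common clarity metric, not necessarily the one used
    in Wav2Lip-HR; the paper does not specify its metrics in this abstract.
    """
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val**2 / mse)
```

In practice such a metric would be averaged over all frames of a generated video against ground-truth frames.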
Keywords
audio-driven, cross-modal, talking-head generation, visual quality