Face Animation with an Attribute-Guided Diffusion Model

CoRR(2023)

Cited by 4 | 82 views
Abstract
Face animation has achieved much progress in computer vision. However, prevailing GAN-based methods suffer from unnatural distortions and artifacts due to sophisticated motion deformation. In this paper, we propose a Face Animation framework with an attribute-guided Diffusion Model (FADM), which is the first work to exploit the superior modeling capacity of diffusion models for photo-realistic talking-head generation. To mitigate the uncontrollable synthesis effect of the diffusion model, we design an Attribute-Guided Conditioning Network (AGCN) to adaptively combine the coarse animation features and 3D face reconstruction results, which incorporates appearance and motion conditions into the diffusion process. These specific designs help FADM rectify unnatural artifacts and distortions, and also enrich high-fidelity facial details through iterative diffusion refinements with accurate animation attributes. FADM can flexibly and effectively improve existing animation videos. Extensive experiments on widely used talking-head benchmarks validate the effectiveness of FADM over prior arts. The source code is available at https://github.com/zengbohan0217/FADM.
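The abstract describes an Attribute-Guided Conditioning Network that adaptively combines coarse animation features with 3D face reconstruction attributes to condition the diffusion process. The following is a minimal NumPy sketch of that idea under stated assumptions: the gating function, dimensions, and the simple learned-gate combination (`agcn_condition`) are hypothetical illustrations, not the paper's actual architecture, and the denoising step shown is a generic DDPM posterior-mean update rather than FADM's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def agcn_condition(coarse_feat, attr_3d, w_gate):
    """Hypothetical AGCN-style combiner: a sigmoid gate, computed from both
    inputs, adaptively mixes coarse animation features with 3D reconstruction
    attributes into one conditioning vector (illustrative only)."""
    gate = 1.0 / (1.0 + np.exp(-(np.concatenate([coarse_feat, attr_3d]) @ w_gate)))
    return gate * coarse_feat + (1.0 - gate) * attr_3d

def ddpm_step(x_t, eps_pred, alpha_t, alpha_bar_t):
    """One generic DDPM denoising step (posterior mean only, noise term
    omitted); eps_pred would come from a network conditioned on the AGCN output."""
    return (x_t - (1.0 - alpha_t) / np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_t)

d = 8  # toy feature dimension (assumption)
coarse = rng.standard_normal(d)          # stand-in coarse animation feature
attrs = rng.standard_normal(d)           # stand-in 3D reconstruction attributes
w_gate = rng.standard_normal((2 * d, d)) * 0.1

cond = agcn_condition(coarse, attrs, w_gate)

x_t = rng.standard_normal(d)             # noisy sample at step t
eps_pred = 0.05 * cond                   # stand-in for a conditioned noise predictor
x_prev = ddpm_step(x_t, eps_pred, alpha_t=0.98, alpha_bar_t=0.5)
print(cond.shape, x_prev.shape)
```

Iterating the conditioned denoising step over many timesteps is what the abstract refers to as "iterative diffusion refinements" that progressively restore facial detail.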
Keywords
3D face reconstruction results, accurate animation attributes, Attribute-Guided Conditioning Network, Attribute-Guided Diffusion Model, coarse animation features, diffusion process, existing animation videos, Face Animation framework, FADM, GAN-based methods, iterative diffusion refinements, motion conditions, photo-realistic talking-head generation, sophisticated motion deformation, superior modeling capacity