Dubbing for Everyone: Data-Efficient Visual Dubbing using Neural Rendering Priors
CoRR (2024)
Abstract
Visual dubbing is the process of generating lip motions of an actor in a
video to synchronise with given audio. Recent advances have made progress
towards this goal but have not been able to produce an approach suitable for
mass adoption. Existing methods are split into either person-generic or
person-specific models. Person-specific models produce results almost
indistinguishable from reality but rely on long training times using large
single-person datasets. Person-generic works have allowed for the visual
dubbing of any video to any audio without further training, but these fail to
capture the person-specific nuances and often suffer from visual artefacts. Our
method, based on data-efficient neural rendering priors, overcomes the
limitations of existing approaches. Our pipeline consists of learning a
deferred neural rendering prior network and actor-specific adaptation using
neural textures. This method allows for high-quality visual dubbing with just
a few seconds of data, enabling video dubbing for any actor, from A-list
celebrities to background actors. We show that we achieve
state-of-the-art results in visual quality and recognisability, both
quantitatively and qualitatively, through two user studies. Our prior
learning and adaptation method generalises better to limited data and is
more scalable than existing
person-specific models. Our experiments on real-world, limited data scenarios
find that our model is preferred over all others. The project page may be found
at https://dubbingforeveryone.github.io/