DiffusionAtlas: High-Fidelity Consistent Diffusion Video Editing
CoRR (2023)
Abstract
We present a diffusion-based video editing framework, namely DiffusionAtlas,
which can achieve both frame consistency and high fidelity in editing video
object appearance. Despite the success in image editing, diffusion models still
encounter significant hindrances when it comes to video editing due to the
challenge of maintaining spatiotemporal consistency in the object's appearance
across frames. On the other hand, atlas-based techniques allow propagating
edits on the layered representations consistently back to frames. However, they
often struggle to create editing effects that adhere correctly to the
user-provided textual or visual conditions due to the limitation of editing the
texture atlas on a fixed UV mapping field. Our method leverages a
visual-textual diffusion model to edit objects directly on the diffusion
atlases, ensuring coherent object identity across frames. We design a loss term
with atlas-based constraints and use a pretrained text-driven diffusion model
as pixel-wise guidance to refine shape distortions and correct texture
deviations. Qualitative and quantitative experiments show that our method
outperforms state-of-the-art methods in achieving consistent high-fidelity
video-object editing.