EffiVED: Efficient Video Editing via Text-instruction Diffusion Models
arXiv (2024)
Abstract
Large-scale text-to-video models have shown remarkable abilities, but their
direct application in video editing remains challenging due to limited
available datasets. Current video editing methods commonly require per-video
fine-tuning of diffusion models or specific inversion optimization to ensure
high-fidelity edits. In this paper, we introduce EffiVED, an efficient
diffusion-based model that directly supports instruction-guided video editing.
To achieve this, we present two efficient workflows to gather video editing
pairs, utilizing augmentation and fundamental vision-language techniques. These
workflows transform vast image editing datasets and open-world videos into a
high-quality dataset for training EffiVED. Experimental results show that
EffiVED not only generates high-quality edited videos but also runs
rapidly. Finally, we demonstrate that our data collection method significantly
improves editing performance and can potentially tackle the scarcity of video
editing data. The datasets will be made publicly available upon publication.