Unified Diffusion-Based Rigid and Non-Rigid Editing with Text and Image Guidance
CoRR (2024)
Abstract
Existing text-to-image editing methods tend to excel at either rigid or
non-rigid editing but struggle when combining both, producing outputs
misaligned with the provided text prompts. In addition, integrating
reference images for control remains challenging. To address these issues, we
present a versatile image editing framework capable of executing both rigid and
non-rigid edits, guided by either textual prompts or reference images. We
leverage a dual-path injection scheme to handle diverse editing scenarios and
introduce an integrated self-attention mechanism for fusing appearance and
structural information. To mitigate potential visual artifacts, we further
employ latent fusion techniques to adjust intermediate latents. Compared to
previous work, our approach represents a significant advance in achieving
precise and versatile image editing. Comprehensive experiments validate the
efficacy of our method, showcasing competitive or superior results in
text-based editing and appearance transfer tasks, encompassing both rigid and
non-rigid settings.
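The abstract does not detail the integrated self-attention mechanism, but a common pattern in diffusion-based appearance transfer is KV injection: queries come from the path carrying structural information, while keys and values are taken from the reference (appearance) path. The sketch below is a generic, hypothetical illustration of that pattern with NumPy, not the paper's actual implementation; the function name and argument layout are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def injected_self_attention(q_struct, k_app, v_app):
    """Generic KV-injection attention (illustrative, not the paper's code):
    queries q_struct come from the structure path, while keys/values
    (k_app, v_app) come from the reference appearance path, so the output
    keeps the query tokens' layout but borrows the reference's appearance."""
    d = q_struct.shape[-1]
    attn = softmax(q_struct @ k_app.T / np.sqrt(d))  # (n_q, n_ref)
    return attn @ v_app                              # (n_q, d)

# Toy usage: 4 structure tokens attend over 6 reference tokens.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((6, 8))
v = rng.standard_normal((6, 8))
out = injected_self_attention(q, k, v)
```

In this reading, "fusing appearance and structural information" amounts to letting structure-path tokens aggregate reference-path features through the attention weights.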