Outline-Guided Object Inpainting with Diffusion Models
CoRR (2024)
Abstract
Instance segmentation datasets play a crucial role in training accurate and
robust computer vision models. However, obtaining accurate mask annotations to
produce high-quality segmentation datasets is a costly and labor-intensive
process. In this work, we show how this issue can be mitigated by starting with
small annotated instance segmentation datasets and augmenting them to
effectively obtain a sizeable annotated dataset. We achieve that by creating
variations of the available annotated object instances in a way that preserves
the provided mask annotations, thereby resulting in new image-mask pairs to be
added to the set of annotated images. Specifically, we generate new images
using a diffusion-based inpainting model that fills in the masked area with an
object of the desired class by guiding the diffusion process with the object
outline. We show that the object outline provides a simple yet reliable and
convenient training-free guidance signal for the underlying inpainting model:
it is often sufficient to fill the mask with an object of the correct class
without additional text guidance, while preserving the correspondence between
the generated images and the mask annotations with high precision. Our experimental
results reveal that our method successfully generates realistic variations of
object instances, preserving their shape characteristics while introducing
diversity within the augmented area. We also show that the proposed method can
naturally be combined with text guidance and other image augmentation
techniques.
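As a rough illustration of the guidance signal described above (not the authors' code), an object outline can be derived from a binary instance mask by subtracting an eroded copy of the mask from the mask itself; the resulting contour band can then be rendered into the inpainting input as a shape hint. The function name and erosion scheme below are assumptions for the sketch; the diffusion inpainting call itself is omitted.

```python
import numpy as np

def mask_outline(mask: np.ndarray, thickness: int = 1) -> np.ndarray:
    """Extract the outline of a binary mask by subtracting an eroded copy.

    mask: 2-D boolean array, True inside the object (assumed not to touch
    the image border, since np.roll wraps around).
    Returns a boolean array that is True only on the outline band.
    """
    eroded = mask.copy()
    for _ in range(thickness):
        # 4-neighbour erosion: a pixel survives only if all four
        # neighbours are also inside the mask.
        eroded = eroded & (
            np.roll(eroded, 1, axis=0) & np.roll(eroded, -1, axis=0)
            & np.roll(eroded, 1, axis=1) & np.roll(eroded, -1, axis=1)
        )
    # Outline = mask pixels that the erosion removed.
    return mask & ~eroded

# Toy example: a 5x5 square object inside a 7x7 image.
m = np.zeros((7, 7), dtype=bool)
m[1:6, 1:6] = True
outline = mask_outline(m)  # one-pixel ring around the square's interior
```

In the paper's pipeline this outline would condition a diffusion inpainting model so that the masked region is filled with an object matching the annotated shape; swapping the hand-rolled erosion for `scipy.ndimage.binary_erosion` would be the more idiomatic choice in practice.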