CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing
CoRR (2024)
Abstract
Personalization techniques for large text-to-image (T2I) models allow users
to incorporate new concepts from reference images. However, existing methods
primarily rely on textual descriptions, leading to limited control over
customized images and failing to support fine-grained and local editing (e.g.,
shape, pose, and details). In this paper, we identify sketches as an intuitive
and versatile representation that can facilitate such control, e.g., contour
lines capturing shape information and flow lines representing texture. This
motivates us to explore a novel task of sketch concept extraction: given one or
more sketch-image pairs, we aim to extract a special sketch concept that
bridges the correspondence between the images and sketches, thus enabling
sketch-based image synthesis and editing at a fine-grained level. To accomplish
this, we introduce CustomSketching, a two-stage framework for extracting novel
sketch concepts. Considering that an object can often be depicted by a contour
for general shapes and additional strokes for internal details, we introduce a
dual-sketch representation to reduce the inherent ambiguity in sketch
depiction. We employ a shape loss and a regularization loss to balance fidelity
and editability during optimization. Through extensive experiments, a user
study, and several applications, we show our method is effective and superior
to the adapted baselines.
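The abstract states that optimization balances fidelity against editability via a shape loss plus a regularization loss. A minimal, hypothetical sketch of such a weighted combination (not the authors' code; function names, the MSE choice, and the weighting scheme are assumptions for illustration):

```python
# Hypothetical illustration of a combined objective: a shape loss keeps the
# output faithful to the sketch's shape, while a weighted regularization loss
# keeps the learned embedding close to its initialization to preserve
# editability. All names and formulas here are illustrative assumptions.

def mse(a, b):
    """Mean squared error between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def total_loss(pred_shape, target_shape, embedding, init_embedding, reg_weight=0.01):
    """Shape loss (fidelity) plus weighted regularization (editability)."""
    shape = mse(pred_shape, target_shape)       # fidelity term
    reg = mse(embedding, init_embedding)        # editability term
    return shape + reg_weight * reg

# Example: a perfect shape match with slight embedding drift leaves only
# the regularization term contributing to the total loss.
loss = total_loss([1.0, 0.0], [1.0, 0.0], [0.1, 0.2], [0.0, 0.0], reg_weight=0.5)
```

Increasing `reg_weight` favors editability (the concept stays close to what the pretrained model can manipulate) at the cost of fidelity to the reference sketch-image pair, which matches the trade-off the abstract describes.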