SketchCreator: Text-Guided Diffusion Models for Vectorized Sketch Generation and Editing

Ting Lu, Wangchenhui Wu, Qiang Wang, Haoge Deng, Di Kong, Yonggang Qi

2023 8th IEEE International Conference on Network Intelligence and Digital Content (IC-NIDC), 2023

Abstract
We present SketchCreator, a text-to-sketch generative framework built on diffusion models, which can produce human-like sketches given a text description. Specifically, sketches are represented as sequences of stroke points, and our model directly learns the distribution of these ordered points under the guidance of the prompt, i.e., the text description. Unlike prior work focusing on single-object sketch generation, our model can flexibly generate both single-object sketches and scene sketches conditioned on the prompt. In particular, our model generates a scene sketch as a whole without explicitly determining the layout of the scene, which is typically required by previous works. Consequently, the objects in a generated scene sketch are more reasonably organized and visually appealing. Additionally, our model can be readily applied to text-conditioned sketch editing, which is of great practical use. Experimental results on QuickDraw and FS-COCO validate the effectiveness of our model.
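To make the described setup concrete, below is a minimal, illustrative sketch (not the authors' code) of a text-conditioned diffusion denoiser over stroke-point sequences. It assumes stroke points are (dx, dy, pen_state) triples, that the prompt has already been encoded into a fixed-size embedding (e.g., by an off-the-shelf text encoder), and that a plain Transformer encoder acts as the denoiser; all dimensions and the DDPM-style noise schedule are placeholder choices, not the paper's architecture.

```python
# Hedged sketch of text-conditioned diffusion over stroke-point sequences.
# All hyperparameters, the backbone, and the random stand-in data are assumptions.
import torch
import torch.nn as nn

class StrokeDenoiser(nn.Module):
    def __init__(self, d_model=128, n_layers=4, n_heads=4, text_dim=512):
        super().__init__()
        self.in_proj = nn.Linear(3, d_model)           # (dx, dy, pen_state) -> token
        self.text_proj = nn.Linear(text_dim, d_model)  # prompt embedding -> conditioning token
        self.time_embed = nn.Sequential(
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out_proj = nn.Linear(d_model, 3)          # predict the added noise per point

    def forward(self, noisy_points, t, text_emb):
        # noisy_points: (B, L, 3), t: (B,), text_emb: (B, text_dim)
        h = self.in_proj(noisy_points)
        cond = (self.text_proj(text_emb)
                + self.time_embed(t.float().unsqueeze(-1))).unsqueeze(1)
        h = self.backbone(torch.cat([cond, h], dim=1))
        return self.out_proj(h[:, 1:])                 # drop the conditioning token

# One DDPM-style training step on random stand-in data
# (in practice: QuickDraw / FS-COCO strokes plus real prompt embeddings).
B, L, T = 8, 96, 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

model = StrokeDenoiser()
x0 = torch.randn(B, L, 3)        # placeholder stroke sequences
text_emb = torch.randn(B, 512)   # placeholder prompt embeddings
t = torch.randint(0, T, (B,))
noise = torch.randn_like(x0)
a_bar = alphas_bar[t].view(B, 1, 1)
x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise   # forward diffusion q(x_t | x_0)
loss = nn.functional.mse_loss(model(x_t, t, text_emb), noise)
loss.backward()
print(loss.item())
```

Because conditioning enters only as a prepended token, the same denoiser can in principle be driven by prompts describing a single object or a whole scene, which is the flexibility the abstract highlights; the real system's conditioning and editing mechanisms are not specified here.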
Keywords
Text-to-Sketch, Diffusion Models, Generative Model, Deep Learning