Decoupled Representation Learning for Character Glyph Synthesis

IEEE TRANSACTIONS ON MULTIMEDIA (2022)

Cited by 7 | Views 35
Abstract
Character glyph synthesis remains an open and challenging problem that involves two related aspects: font style transfer and content consistency. In this paper, we propose a novel model named FontGAN, which integrates character structure stylization, de-stylization, and texture transfer into a unified framework. Specifically, we decouple character images into a style representation and a content representation, which offers fine-grained control over these two types of variables and thus improves the quality of the generated results. To effectively capture style information, a style consistency module (SCM) is introduced. Technically, SCM exploits a category-guided Kullback-Leibler divergence to explicitly model the style representation with different prior distributions. In this way, our model is capable of performing transformations between multiple domains within one framework. In addition, we propose a content prior module (CPM) that provides a content prior to guide the content encoding process and alleviate the problem of stroke deficiency during structure de-stylization. Benefiting from this idea of decoupling and regrouping, FontGAN achieves many-to-many translation for glyph structure. Experimental results demonstrate that the proposed FontGAN achieves state-of-the-art performance in character glyph synthesis.
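The abstract does not give the SCM's loss in equation form. As a rough illustration under assumed diagonal-Gaussian priors, the category-guided KL term might pull the encoded style posterior of an image toward a prior distribution assigned to that image's font category. All names and shapes below are hypothetical, not taken from the paper:

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) between two diagonal
    Gaussians, summed over the latent dimensions."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

rng = np.random.default_rng(0)
latent_dim, num_fonts = 8, 3

# Hypothetical per-font style priors: each font category k gets its own
# fixed prior mean with unit variance (an assumption for illustration).
prior_means = rng.normal(size=(num_fonts, latent_dim))
prior_logvar = np.zeros(latent_dim)

# A simulated style posterior produced by an encoder for an image whose
# font label is 1; the category-guided KL pulls it toward prior 1.
font_label = 1
mu_style = prior_means[font_label] + 0.1 * rng.normal(size=latent_dim)
logvar_style = -0.1 * np.ones(latent_dim)

style_kl_loss = gaussian_kl(mu_style, logvar_style,
                            prior_means[font_label], prior_logvar)
```

Because each font category owns a distinct prior, sampling from a target category's prior and regrouping it with a content code is one plausible way such a model could translate a glyph between multiple style domains in a single framework.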
Keywords
Character glyph synthesis, decoupled representation, generative adversarial networks