Segmentation Guided Image-to-Image Translation with Adversarial Networks

2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)

Cited 22 | Views 190
Abstract
Image-to-image translation, which aims to map images in one domain to another specific domain, has recently received increasing attention. Existing methods mainly solve this task via a deep generative model and focus on exploring the relationship between different domains. However, these methods neglect to utilize higher-level and instance-specific information to guide the training process, leading to a great deal of unrealistic generated images of low quality. Existing methods also lack spatial controllability during translation. To address these challenges, we propose a novel Segmentation Guided Generative Adversarial Network (SGGAN), which leverages semantic segmentation to further boost the generation performance and provide spatial mapping. In particular, a segmentor network is designed to impose semantic information on the generated images. Experimental results on a multi-domain face image translation task empirically demonstrate the proposed method's spatial controllability and its superiority in image quality over several state-of-the-art methods.
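The abstract describes a segmentor network that imposes semantic information on the generated images, which in GAN training typically amounts to adding a pixel-wise segmentation loss to the generator's adversarial objective. The paper does not specify its exact formulation here, so the sketch below is only an illustration of that general idea, assuming a non-saturating adversarial term plus a pixel-wise cross-entropy term weighted by a hypothetical coefficient `lam_seg`:

```python
import numpy as np

def pixelwise_cross_entropy(pred_logits, target, eps=1e-12):
    """Pixel-wise cross-entropy between segmentor logits and a target label map.

    pred_logits: (H, W, C) raw class scores from the segmentor on a generated image
    target:      (H, W) integer class labels (the desired segmentation)
    """
    # Softmax over the class channel (shifted for numerical stability).
    e = np.exp(pred_logits - pred_logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    h, w = target.shape
    # Probability assigned to the true class at each pixel.
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], target]
    return float(-np.log(p_true + eps).mean())

def sggan_generator_loss(d_fake, seg_logits_fake, target_seg, lam_seg=1.0):
    """Illustrative generator objective: adversarial term + weighted segmentation term.

    d_fake: discriminator outputs (probabilities) on generated images
    """
    adv = float(-np.log(d_fake + 1e-12).mean())  # non-saturating GAN loss
    seg = pixelwise_cross_entropy(seg_logits_fake, target_seg)
    return adv + lam_seg * seg
```

A generated image whose predicted segmentation matches the target map incurs a smaller segmentation penalty, which is how the segmentor steers the generator toward spatially controllable outputs.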
Keywords
image quality, segmentation guided image-to-image translation, map images, deep generative model, instance-specific information, unrealistic generated images, Segmentation Guided Generative Adversarial Network, semantic segmentation, generation performance, spatial mapping, segmentor network, multi-domain face image translation task