HyProGAN: Breaking the Dimensional Wall from Human to Anime

ICIP (2022)

Cited by 1
Abstract
Image translation from human faces to anime ones offers a low-cost, efficient way to create characters for the animation industry. However, owing to the significant inter-domain gap between anime images and human photos, existing image-to-image translation approaches cannot handle this task well. To resolve this dilemma, we propose HyProGAN, an exemplar-guided image-to-image translation model that requires no paired data. The key contribution of HyProGAN is a novel hybrid and progressive training strategy that expands the unidirectional translation between two domains into bidirectional intra-domain and inter-domain translation. To enhance the consistency between input and output, we further propose a local masking loss that aligns the facial features of the human face with those of the generated anime face. Extensive experiments demonstrate the superiority of HyProGAN over state-of-the-art models.
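The abstract does not detail how the local masking loss is computed, but the general idea of a masked alignment loss can be sketched as a mean absolute difference between source and generated features, restricted to a facial-region mask. The function name, mask source, and L1 formulation below are assumptions for illustration, not the paper's actual definition:

```python
import numpy as np

def local_masking_loss(feat_src, feat_gen, mask, eps=1e-8):
    """Hypothetical masked alignment loss: mean absolute difference
    between source-face and generated-face features, computed only
    inside the facial-region mask (1 = face region, 0 = background)."""
    diff = np.abs(feat_src - feat_gen) * mask
    # Normalize by the mask area so the loss is independent of region size.
    return float(diff.sum() / (mask.sum() + eps))

# Toy usage: a 4x4 "feature map" with a 2x2 facial region.
src = np.ones((4, 4))
gen = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
loss = local_masking_loss(src, gen, mask)  # ≈ 1.0: full mismatch inside the face region
```

In practice such a mask would come from a face parser or landmark detector, and the loss would be added to the usual adversarial objectives; none of those specifics are given in this abstract.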
Keywords
generative adversarial network, image-to-image translation, anime face