Diversified Texture Synthesis with Feed-Forward Networks

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017

Cited by 291 | Views 104
Abstract
Recent progress in deep discriminative and generative modeling has shown promising results on texture synthesis. However, existing feed-forward methods trade generality for efficiency and suffer from several issues: a lack of generality (one network must be built per texture), a lack of diversity (visually identical outputs are always produced), and suboptimality (less satisfying visual effects are generated). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network that enables efficient synthesis of multiple textures within one single network, as well as meaningful interpolation between them. Meanwhile, a suite of important techniques is introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures, and we show its application to stylization.
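The abstract's idea of one network covering multiple textures, with interpolation between them, can be illustrated with a minimal sketch. This is not the paper's architecture: the generator, its weights, and the embedding dimensions below are hypothetical stand-ins; the paper's actual model is a deep convolutional network with learned per-texture codes. The sketch only shows the conditioning mechanism: each texture gets an embedding vector, the generator consumes noise plus an embedding, and linearly mixing two embeddings yields an interpolated texture code.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_TEXTURES = 4   # number of textures the single network handles (assumed)
EMBED_DIM = 8      # size of each texture embedding (assumed)
NOISE_DIM = 16     # input noise dimension (assumed)

# In the real model these embeddings would be learned during training;
# here they are random placeholders.
texture_embeddings = rng.normal(size=(NUM_TEXTURES, EMBED_DIM))
W = rng.normal(size=(EMBED_DIM + NOISE_DIM, 32))  # stand-in generator weights

def generate(texture_code, noise):
    """One feed-forward pass: condition the generator on a texture code
    by concatenating it with the input noise."""
    x = np.concatenate([texture_code, noise])
    return np.tanh(x @ W)  # stand-in for the deep convolutional generator

def interpolate(i, j, alpha):
    """Blend textures i and j by linearly mixing their embeddings."""
    return (1 - alpha) * texture_embeddings[i] + alpha * texture_embeddings[j]

noise = rng.normal(size=NOISE_DIM)
out_a = generate(texture_embeddings[0], noise)     # pure texture 0
out_mix = generate(interpolate(0, 1, 0.5), noise)  # halfway between 0 and 1
print(out_a.shape, out_mix.shape)  # (32,) (32,)
```

Because all textures share one set of generator weights, varying the noise input gives diverse samples of a texture, and moving the embedding along the line between two learned codes produces the meaningful interpolation the abstract describes.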
Keywords
diversified texture synthesis, feed-forward networks, generative modeling, visually identical output, satisfying visual effects, improved texture synthesis, multiple textures, single network, feed-forward based methods