Manifold Learning Benefits GANs

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
In this paper, we improve Generative Adversarial Networks by incorporating a manifold learning step into the discriminator (code: https://github.com/MaxwellYaoNi/LCSAGAN). We consider locality-constrained linear and subspace-based manifolds, as well as locality-constrained non-linear manifolds (the coding spaces considered in this paper are loosely termed manifolds; in most cases they are not manifolds in the strict mathematical sense, but rather topological spaces such as varieties or simplicial complexes, and the word is used only informally). In our design, the manifold learning and coding steps are intertwined with layers of the discriminator, with the goal of attracting intermediate feature representations onto manifolds. We adaptively balance the discrepancy between feature representations and their manifold view, which is a trade-off between denoising on the manifold and refining the manifold. We find that locality-constrained non-linear manifolds outperform linear manifolds due to their non-uniform density and smoothness. We also substantially outperform state-of-the-art baselines.
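As a rough illustration of the idea described in the abstract (not the authors' released LCSA-GAN code; the module name, dictionary size, k, and the sigmoid-gated blend are assumptions made for this sketch), the following PyTorch snippet shows a locality-constrained linear coding step that could sit between discriminator layers: each intermediate feature is reconstructed from its k nearest learnable anchors (LLC-style coding) and then blended back with the raw feature through a learnable weight, mirroring the stated trade-off between denoising on the manifold and refining the manifold.

```python
import torch
import torch.nn as nn

class LocalityCodingLayer(nn.Module):
    """Hypothetical locality-constrained linear coding step for a
    GAN discriminator (illustrative sketch, not the paper's code)."""

    def __init__(self, feat_dim, num_anchors=256, k=5):
        super().__init__()
        # Learnable dictionary of anchor points spanning the manifold.
        self.anchors = nn.Parameter(torch.randn(num_anchors, feat_dim) * 0.02)
        # Learnable blending weight: denoise on the manifold vs. keep raw features.
        self.alpha = nn.Parameter(torch.tensor(0.0))
        self.k = k

    def forward(self, x):
        # x: (B, D) flattened intermediate discriminator features.
        dists = torch.cdist(x, self.anchors)                    # (B, K)
        _, knn_idx = dists.topk(self.k, largest=False)          # (B, k)
        neigh = self.anchors[knn_idx]                           # (B, k, D)

        # LLC-style code: reconstruct x from its k nearest anchors under
        # a sum-to-one constraint (closed form via the local Gram matrix).
        diff = neigh - x.unsqueeze(1)                           # (B, k, D)
        gram = diff @ diff.transpose(1, 2)                      # (B, k, k)
        gram = gram + 1e-4 * torch.eye(self.k, device=x.device)
        ones = torch.ones(x.size(0), self.k, 1, device=x.device)
        codes = torch.linalg.solve(gram, ones).squeeze(-1)      # (B, k)
        codes = codes / codes.sum(dim=1, keepdim=True)

        # Manifold view of the features: weighted combination of anchors.
        x_manifold = (codes.unsqueeze(-1) * neigh).sum(dim=1)   # (B, D)

        # Adaptive balance between raw features and their manifold view.
        a = torch.sigmoid(self.alpha)
        return a * x + (1 - a) * x_manifold
```

In use, such a layer would be inserted after selected discriminator blocks (on flattened or pooled features), so that the adversarial loss both shapes the anchor dictionary and pulls intermediate representations toward the learned manifold.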
Keywords
Image and video synthesis and generation, Computer vision theory, Machine learning, Statistical methods