Generative Modeling using the Sliced Wasserstein Distance

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (2018)

Abstract
Generative Adversarial Nets (GANs) are very successful at modeling distributions from given samples, even in the high-dimensional case. However, their formulation is also known to be hard to optimize and often unstable. While this is particularly true for early GAN formulations, there has been significant empirically motivated and theoretically founded progress to improve stability, for instance, by using the Wasserstein distance rather than the Jensen-Shannon divergence. Here, we consider an alternative formulation for generative modeling based on random projections which, in its simplest form, results in a single objective rather than a saddle-point formulation. By augmenting this approach with a discriminator we improve its accuracy. We found our approach to be significantly more stable than even the improved Wasserstein GAN. Further, unlike the traditional GAN loss, the loss formulated in our method is a good measure of the actual distance between the distributions and, for the first time for GAN training, we are able to show estimates of that distance.
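The random-projection idea at the core of the sliced Wasserstein distance can be sketched as follows: project both sample sets onto random one-dimensional directions, where the Wasserstein distance reduces to comparing sorted projections, and average over directions. This is a minimal NumPy illustration of that general principle, not the paper's implementation; the function name, parameter choices, and the use of the squared (W2-style) cost are our assumptions.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=50, seed=0):
    """Monte-Carlo sketch of the sliced Wasserstein distance (illustrative only).

    X, Y: (n, d) arrays of samples; assumes equal sample counts.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Draw random directions and normalize them to unit length.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project each sample set onto every direction: shape (n, n_projections).
    proj_x = np.sort(X @ theta.T, axis=0)
    proj_y = np.sort(Y @ theta.T, axis=0)
    # In 1-D, optimal transport matches order statistics, so sorting
    # and comparing elementwise gives the per-direction transport cost.
    return np.mean((proj_x - proj_y) ** 2)
```

Because each projection yields a closed-form 1-D transport problem, this quantity can be minimized directly as a single objective, which is what allows the paper's simplest variant to avoid a saddle-point formulation.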
Keywords
generative modeling, sliced Wasserstein distance, Generative Adversarial Nets, high-dimensional case, Jensen-Shannon divergence, alternative formulation, saddle-point formulation, traditional GAN loss, actual distance, GAN training, GAN, GAN formulations, Wasserstein GAN