GAT-GMM: Generative Adversarial Training for Gaussian Mixture Models

arXiv (2023)

Abstract
Generative adversarial networks (GANs) learn the distribution of observed samples through a zero-sum game between two machine players, a generator and a discriminator. While GANs achieve great success in learning the complex distributions of image, sound, and text data, they perform suboptimally on multimodal distribution-learning benchmarks such as Gaussian mixture models (GMMs). In this paper, we propose Generative Adversarial Training for Gaussian Mixture Models (GAT-GMM), a minimax GAN framework for learning GMMs. Motivated by optimal transport theory, we design the zero-sum game in GAT-GMM using a random linear generator and a softmax-based quadratic discriminator architecture, which leads to a nonconvex-concave minimax optimization problem. We show that a gradient descent ascent (GDA) method converges to an approximate stationary minimax point of the GAT-GMM optimization problem. In the benchmark case of a mixture of two symmetric, well-separated Gaussians, we further show that this stationary point recovers the true parameters of the underlying GMM. We discuss the application of the proposed GAT-GMM framework to learning GMMs in the distributed federated learning setting, where the widely used expectation-maximization (EM) algorithm can incur large computational and communication costs. In contrast, we show that GAT-GMM provides a scalable learning approach: a distributed GDA algorithm can solve the GAT-GMM minimax problem without incurring extra computational cost. We numerically support our theoretical results with experiments showing that our minimax framework succeeds in centralized learning tasks and can outperform standard EM-type algorithms in the federated setting.
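The abstract describes a GDA training loop for a minimax game between a linear generator and a quadratic discriminator, but gives no implementation details. Below is a minimal PyTorch sketch of such a loop on the paper's benchmark case (a mixture of two symmetric Gaussians); the one-dimensional generator/discriminator forms, the value of `mu_true`, the step sizes, and the clamping of the discriminator are all illustrative assumptions, not the paper's exact GAT-GMM architecture or objective.

```python
import torch

torch.manual_seed(0)

mu_true = 2.0  # assumed ground-truth component mean for this toy example

def sample_data(n):
    # Real data: mixture of two symmetric 1-D Gaussians at +/- mu_true.
    signs = torch.randn(n).sign()
    return signs * mu_true + 0.3 * torch.randn(n)

mu = torch.tensor([0.5], requires_grad=True)   # generator parameter (min player)
w = torch.zeros(3, requires_grad=True)         # discriminator parameters (max player)

def generate(n):
    # Linear generator: a random sign times the learned mean, plus noise.
    signs = torch.randn(n).sign()
    return signs * mu + 0.3 * torch.randn(n)

def discriminate(x):
    # Quadratic discriminator D(x) = w0*x^2 + w1*x + w2 -- a simplified
    # stand-in for the paper's softmax-based quadratic discriminator.
    return w[0] * x**2 + w[1] * x + w[2]

eta = 1e-2  # assumed common step size for both players
for step in range(3000):
    real, fake = sample_data(256), generate(256)
    # Minimax objective: the discriminator ascends it, the generator descends it.
    loss = discriminate(real).mean() - discriminate(fake).mean()
    g_mu, g_w = torch.autograd.grad(loss, [mu, w])
    with torch.no_grad():
        mu -= eta * g_mu          # descent step (generator)
        w += eta * g_w            # ascent step (discriminator)
        w.clamp_(-1.0, 1.0)       # keep the discriminator class bounded

print(f"recovered |mu| = {mu.abs().item():.3f}  (true mean: {mu_true})")
```

In this simplified setting the simultaneous descent/ascent updates drive |mu| toward the true component mean, mirroring the abstract's claim that a GDA stationary point recovers the underlying GMM parameters in the two-component symmetric case.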
Keywords
minimax learning, Gaussian mixture models, optimal transport, generative adversarial networks, federated learning