
Local Convergence of Gradient Descent-Ascent for Training Generative Adversarial Networks

57th Asilomar Conference on Signals, Systems & Computers, IEEE, 2023

Keywords: Generative Adversarial Networks, Local Convergence, Generative Adversarial Networks Training, Dynamical, Learning Rate, Phase Transition, Convergence Rate, Nonlinear Systems, Minimax Optimization, Loss Function, Eigenvalues, Step Size, Kernel Function, Postural Stability, Equilibrium Point, Kinetic Rate, Hyperparameter Tuning, Point-like, Linear Term, Dirac Delta, Kernel Width, Model Hyperparameters, Spectral Radius, Maximum Mean Discrepancy, Smooth Measure, Small Learning Rate, Training Analysis
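The listing carries no exposition of its own, but the keywords (step size, eigenvalues, equilibrium point, spectral radius) point at the standard local-convergence picture for simultaneous gradient descent-ascent. Below is a minimal, illustrative Python sketch, not code from the paper: it contrasts a bilinear game, where GDA spirals away from the equilibrium for any positive step size, with a regularized game whose linearized dynamics have spectral radius below one for small step sizes. The toy loss functions and the values of eta and mu are assumptions chosen only to make the two regimes visible.

```python
def gda(grad_x, grad_y, x, y, eta, steps):
    """Simultaneous gradient descent-ascent: descend in x, ascend in y."""
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - eta * gx, y + eta * gy
    return x, y

# Bilinear game f(x, y) = x*y: the equilibrium (0, 0) is not locally
# attracting for simultaneous GDA. The linearized update matrix
# [[1, -eta], [eta, 1]] has spectral radius sqrt(1 + eta**2) > 1.
x, y = gda(lambda x, y: y, lambda x, y: x,
           x=0.1, y=0.1, eta=0.05, steps=500)
print("bilinear distance from equilibrium:", abs(x) + abs(y))  # grows

# Regularized game f(x, y) = x*y + (mu/2)*x**2 - (mu/2)*y**2:
# the update matrix [[1 - eta*mu, -eta], [eta, 1 - eta*mu]] has
# eigenvalue modulus sqrt((1 - eta*mu)**2 + eta**2) < 1 for small eta,
# so GDA converges locally to the equilibrium.
mu = 0.5
x, y = gda(lambda x, y: y + mu * x, lambda x, y: x - mu * y,
           x=0.1, y=0.1, eta=0.05, steps=500)
print("regularized distance from equilibrium:", abs(x) + abs(y))  # shrinks
```

The deciding quantity in both cases is the spectral radius of the Jacobian of the update map at the equilibrium, which is exactly the kind of step-size-dependent eigenvalue condition the keyword list alludes to.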