Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-max Optimization
CoRR (2024)
Abstract
Gradient Descent Ascent (GDA) methods for min-max optimization problems
typically produce oscillatory behavior that can lead to instability, e.g., in
bilinear settings. To address this problem, we introduce a dissipation term
into the GDA updates to dampen these oscillations. The proposed Dissipative GDA
(DGDA) method can be seen as performing standard GDA on a state-augmented and
regularized saddle function that does not strictly introduce additional
convexity/concavity. We theoretically show the linear convergence of DGDA in
the bilinear and strongly convex-strongly concave settings and assess its
performance by comparing DGDA with other methods such as GDA, Extra-Gradient
(EG), and Optimistic GDA. Our findings demonstrate that DGDA surpasses these
methods, achieving superior convergence rates. We support our claims with two
numerical examples that showcase DGDA's effectiveness in solving saddle point
problems.
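To make the oscillation-damping idea concrete, the sketch below contrasts plain GDA with a dissipative variant on the scalar bilinear problem f(x, y) = xy, whose unique saddle point is (0, 0). The specific augmented update used here (a friction term rho * (x - xh) coupling each player to a trailing auxiliary state) is an assumed illustrative form, not necessarily the paper's exact DGDA update; the parameter names eta and rho are likewise illustrative.

```python
import math

# Bilinear saddle problem f(x, y) = x * y, saddle point at (0, 0).
# grad_x f = y and grad_y f = x, so plain GDA rotates and spirals outward.

def gda(x, y, eta=0.1, steps=1000):
    """Plain Gradient Descent Ascent: diverges on the bilinear problem."""
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return x, y

def dgda(x, y, eta=0.1, rho=1.0, steps=1000):
    """Dissipative GDA sketch (assumed form): augment the state with a
    trailing copy (xh, yh) and add a friction term rho * (x - xh) that
    damps the rotation around the saddle point."""
    xh, yh = x, y
    for _ in range(steps):
        gx, gy = y, x  # gradients of f(x, y) = x * y
        x_new = x - eta * (gx + rho * (x - xh))
        y_new = y + eta * (gy - rho * (y - yh))
        # The auxiliary state slowly tracks the main iterate.
        xh += eta * rho * (x - xh)
        yh += eta * rho * (y - yh)
        x, y = x_new, y_new
    return x, y

if __name__ == "__main__":
    xg, yg = gda(1.0, 1.0)
    xd, yd = dgda(1.0, 1.0)
    print("GDA  distance to saddle:", math.hypot(xg, yg))
    print("DGDA distance to saddle:", math.hypot(xd, yd))
```

Running this, the plain GDA iterate spirals away from (0, 0) (each step multiplies the norm by sqrt(1 + eta^2)), while the dissipative variant contracts toward the saddle at a linear rate, consistent with the bilinear convergence result claimed in the abstract.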