Extending the Reach of First-Order Algorithms for Nonconvex Min-Max Problems with Cohypomonotonicity

CoRR (2024)

Abstract
We focus on constrained, L-smooth, nonconvex-nonconcave min-max problems that either satisfy ρ-cohypomonotonicity or admit a solution to the ρ-weakly Minty Variational Inequality (MVI), where larger values of the parameter ρ>0 correspond to a greater degree of nonconvexity. These problem classes include examples from two-player reinforcement learning, interaction-dominant min-max problems, and certain synthetic test problems on which classical min-max algorithms fail. It has been conjectured that first-order methods can tolerate values of ρ no larger than 1/L, but existing results in the literature have stagnated at the tighter requirement ρ < 1/(2L). With a simple argument, we obtain optimal or best-known complexity guarantees under cohypomonotonicity or weak MVI conditions for ρ < 1/L. The algorithms we analyze are inexact variants of Halpern and Krasnosel'skiĭ-Mann (KM) iterations. We also provide algorithms and complexity guarantees in the stochastic case with the same range of ρ. Our main insight for the improvements in the convergence analyses is to harness the recently proposed "conic nonexpansiveness" property of operators. As byproducts, we provide a refined analysis for inexact Halpern iteration and propose a stochastic KM iteration with a multilevel Monte Carlo estimator.
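For orientation, below is a minimal sketch of the exact, deterministic Halpern and KM fixed-point templates that the abstract refers to, applied to a generic operator T (for instance, a resolvent of the saddle-point operator). The operator T, the anchoring schedule, and the step parameter are illustrative assumptions; the paper analyzes inexact and stochastic variants of these templates, which are not reproduced here.

```python
import numpy as np

def halpern_iteration(T, x0, num_iters=1000):
    """Halpern iteration for a fixed point of an operator T.

    Update: x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k),
    with the standard anchoring weights beta_k = 1/(k+2).
    """
    x = x0.copy()
    for k in range(num_iters):
        beta = 1.0 / (k + 2)               # anchoring weight toward the initial point x0
        x = beta * x0 + (1.0 - beta) * T(x)
    return x

def km_iteration(T, x0, lam=0.5, num_iters=1000):
    """Krasnosel'skii-Mann iteration for a fixed point of an operator T.

    Update: x_{k+1} = (1 - lam) * x_k + lam * T(x_k),
    i.e., an averaged application of T with relaxation parameter lam in (0, 1).
    """
    x = x0.copy()
    for _ in range(num_iters):
        x = (1.0 - lam) * x + lam * T(x)
    return x
```

In both templates, convergence hinges on structural properties of T (e.g., nonexpansiveness, or the conic nonexpansiveness exploited in the paper); the sketch only illustrates the iteration schemes themselves.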