A Quasi-Newton Subspace Trust Region Algorithm for nonmonotone variational inequalities in adversarial learning over box constraints
arXiv (2023)

Abstract
The first-order optimality condition of convexly constrained nonconvex
nonconcave min-max optimization problems with box constraints formulates a
nonmonotone variational inequality (VI), which is equivalent to a system of
nonsmooth equations. In this paper, we propose a quasi-Newton subspace trust
region (QNSTR) algorithm for the least squares problems defined by the
smoothing approximation of nonsmooth equations. Based on the structure of the
nonmonotone VI, we use an adaptive quasi-Newton formula to approximate the
Hessian matrix and solve a low-dimensional strongly convex quadratic program
with ellipse constraints in a subspace at each step of the QNSTR algorithm
efficiently. We prove the global convergence of the QNSTR algorithm to an
ϵ-first-order stationary point of the min-max optimization problem.
Moreover, we present numerical results based on the QNSTR algorithm with
different subspaces for a mixed generative adversarial network in eye image
segmentation using real data to show the efficiency and effectiveness of the
QNSTR algorithm for solving large-scale min-max optimization problems.
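The core step described above — minimizing a strongly convex quadratic model over an ellipse constraint in a low-dimensional subspace — can be sketched as follows. This is an illustrative assumption of one such subspace trust-region step, not the paper's QNSTR implementation: the function name, the eigenvalue regularization, and the bisection solver for the Lagrange multiplier are all choices made here for clarity, whereas the paper uses an adaptive quasi-Newton formula to build the Hessian approximation.

```python
import numpy as np

def subspace_tr_step(g, B, V, delta, tol=1e-10, max_bisect=100):
    """One illustrative subspace trust-region step (hypothetical sketch).

    Minimizes the quadratic model  m(d) = g.T d + 0.5 * d.T B d
    over steps of the form d = V z, subject to ||V z|| <= delta.
    In the reduced variable z this is an ellipse constraint
    z.T M z <= delta**2 with M = V.T V, as in the abstract.
    """
    M = V.T @ V                      # metric defining the ellipse constraint
    g_r = V.T @ g                    # reduced gradient
    B_r = V.T @ B @ V                # reduced Hessian approximation

    # Shift eigenvalues so the reduced model is strongly convex
    # (the QP solved at each QNSTR step is strongly convex).
    eig_min = np.linalg.eigvalsh(B_r).min()
    if eig_min <= 0:
        B_r = B_r + (1e-8 - eig_min) * np.eye(B_r.shape[0])

    def z_of(lam):
        # KKT step for Lagrange multiplier lam on the ellipse constraint
        return np.linalg.solve(B_r + lam * M, -g_r)

    z = z_of(0.0)
    if z @ M @ z <= delta**2:        # unconstrained minimizer is feasible
        return V @ z

    # Otherwise find lam > 0 so the step lies on the ellipse boundary
    # ||V z|| = delta; ||z(lam)||_M decreases monotonically in lam.
    lo, hi = 0.0, 1.0
    while z_of(hi) @ M @ z_of(hi) > delta**2:
        hi *= 2.0
    for _ in range(max_bisect):
        lam = 0.5 * (lo + hi)
        z = z_of(lam)
        if z @ M @ z > delta**2:
            lo = lam
        else:
            hi = lam
        if hi - lo < tol:
            break
    return V @ z_of(hi)              # feasible (boundary-side) step
```

Because the subspace dimension is small, the dense linear algebra above involves only tiny matrices regardless of the problem size, which is what makes each QNSTR-style step cheap for large-scale min-max problems.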