Efficient Recovery of Low-Rank Matrix via Double Nonconvex Nonsmooth Rank Minimization.

IEEE Transactions on Neural Networks and Learning Systems (2019)

Citations: 36 | Views: 66
Abstract
The efficient recovery of low-rank matrices has recently attracted rapidly growing attention in computer vision and machine learning. The popular convex surrogate for rank minimization is nuclear norm minimization (NNM), which usually yields a biased solution because NNM tends to over-shrink the rank components and treats every rank component equally. To address this issue, several nonconvex nonsmooth rank (NNR) relaxations have been widely exploited. Different from these convex and nonconvex rank substitutes, this paper first introduces a general and flexible rank relaxation function, the weighted NNR relaxation function, which is derived from the initial double NNR (DNNR) relaxations; that is, a DNNR relaxation function acts on the nonconvex singular value function (SVF). To solve the DNNR minimization problem, an iteratively reweighted SVF optimization algorithm with a continuation technique is devised, in which the weighting vector is defined by computing supergradient values; the closed-form solution of each subproblem is obtained efficiently by a general proximal operator, and the elements of the desired weighting vector usually satisfy a nondecreasing order. We then prove that the objective function values decrease monotonically and that any limit point of the generated subsequence is a critical point. Combining the Kurdyka-Łojasiewicz property with some milder assumptions, we further establish a global convergence guarantee. As an application to the matrix completion problem, experimental results on both synthetic and real-world data show that our methods are competitive with several state-of-the-art convex and nonconvex matrix completion methods.
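To make the iteratively reweighted idea concrete, the sketch below shows a generic weighted singular value thresholding step inside a reweighted matrix completion loop. It is an illustration under assumed choices, not the paper's DNNR formulation: the surrogate weights `lam / (sigma + eps)` come from a log-style penalty, and the names `weighted_svt`, `reweighted_matrix_completion`, and the parameters `lam`, `eps` are hypothetical. Because the singular values are sorted in descending order, the resulting weights are nondecreasing, matching the ordering property stated in the abstract.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Proximal step: weighted singular value thresholding.

    Shrinks the i-th singular value by its weight w_i; weights are assumed
    nondecreasing so larger singular values are penalized less.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

def reweighted_matrix_completion(M_obs, mask, lam=1.0, eps=1e-2,
                                 n_iters=100, tol=1e-6):
    """Illustrative iteratively reweighted scheme for matrix completion.

    Each weight is a supergradient lam / (sigma_i + eps) of a log-style
    nonconvex surrogate, recomputed from the current iterate (a generic
    sketch, not the paper's exact DNNR relaxation or continuation scheme).
    """
    X = mask * M_obs
    for _ in range(n_iters):
        sigma = np.linalg.svd(X, compute_uv=False)
        weights = lam / (sigma + eps)           # nondecreasing since sigma is sorted descending
        X_new = weighted_svt(X, weights)
        X_new = mask * M_obs + (1 - mask) * X_new  # keep observed entries fixed
        if np.linalg.norm(X_new - X, 'fro') < tol * max(1.0, np.linalg.norm(X, 'fro')):
            X = X_new
            break
        X = X_new
    return X
```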
Keywords
Minimization, Convergence, Artificial neural networks, Optimization, Matrix decomposition, Learning systems, Computer vision