Global optimization of factor models using alternating minimization.

arXiv: Machine Learning (2016)

Abstract
Learning new representations in machine learning is often tackled using a factorization of the data. For many such problems, including sparse coding and matrix completion, learning these factorizations can be difficult, both in terms of efficiency and in guaranteeing that the solution is a global minimum. Recently, a general class of objectives has been introduced, called induced regularized factor models (RFMs), which have an induced convex form that enables global optimization. Though attractive theoretically, this induced form is impractical, particularly for large or growing datasets. In this work, we investigate practical alternating minimization algorithms for induced RFMs that ensure convergence to global optima. We characterize the stationary points of these models and, using these insights, highlight practical choices for the objectives. We then provide theoretical and empirical evidence that alternating minimization, from a random initialization, converges to global minima for a large subclass of induced RFMs. In particular, we prove that induced RFMs do not have degenerate saddle points and that local minima are in fact global minima. Finally, we provide an extensive investigation into practical optimization choices for alternating minimization on induced RFMs, for both batch and stochastic gradient descent.
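To make the alternating minimization setting concrete, the following is a minimal sketch of the general idea on the simplest kind of regularized factorization: a squared loss with Frobenius-norm (ridge) regularizers on both factors, where each subproblem has a closed-form least-squares solution. The objective, variable names, and dimensions are illustrative assumptions, not the paper's specific induced RFM formulation or algorithm.

```python
# A minimal sketch of alternating minimization for a regularized factor model,
# assuming (hypothetically) the objective
#     min_{U,V}  ||X - U V||_F^2 + lam * (||U||_F^2 + ||V||_F^2),
# which is one simple instance of a regularized factorization, not the
# paper's general induced RFM objective.
import numpy as np

def alternating_minimization(X, k, lam=0.1, iters=100, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    # Random initialization, the setting studied in the convergence analysis.
    U = rng.standard_normal((n, k))
    V = rng.standard_normal((k, d))
    I = np.eye(k)
    for _ in range(iters):
        # Fix V, solve the ridge-regression subproblem for U in closed form.
        U = X @ V.T @ np.linalg.inv(V @ V.T + lam * I)
        # Fix U, solve the ridge-regression subproblem for V in closed form.
        V = np.linalg.inv(U.T @ U + lam * I) @ U.T @ X
    return U, V

if __name__ == "__main__":
    # Example usage on synthetic low-rank data.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
    U, V = alternating_minimization(X, k=3)
    print("reconstruction error:", np.linalg.norm(X - U @ V))
```

Each half-step here is a convex problem solved exactly; the paper's stochastic-gradient variants would instead take gradient steps on each factor in turn.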