Quasi-Newton method for Lp multiple kernel learning

Neurocomputing (2016)

Cited by 23 | Views 14
Abstract
Multiple kernel learning offers advantages over single kernel learning in model interpretability and generalization performance. Existing multiple kernel learning methods usually solve the SVM in the dual, which is equivalent to the primal optimization. Research shows that solving in the primal achieves a faster convergence rate than solving in the dual. This paper presents a novel Lp-norm (p > 1) constrained non-sparse multiple kernel learning method that optimizes the objective function in the primal. A subgradient and quasi-Newton approach is used to solve the standard SVM; it possesses a superlinear convergence property and approximates the inverse Hessian without computing second derivatives, leading to a preferable convergence speed. Alternating optimization is used to solve the SVM and to learn the base kernel weights. Experiments show that the proposed algorithm converges rapidly and that its efficiency compares favorably with other multiple kernel learning algorithms.
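The abstract describes alternating between a primal SVM solve (via a quasi-Newton method) and an update of the base kernel weights under an Lp-norm constraint. Below is a minimal Python sketch of that general scheme, not the paper's implementation: it assumes a squared hinge loss (smooth, so SciPy's L-BFGS applies directly, in place of the paper's subgradient-based quasi-Newton step) and uses the standard closed-form Lp-norm weight update from the MKL literature. All names (lp_mkl, kernels, beta, d) are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def lp_mkl(kernels, y, C=1.0, p=2.0, n_outer=20, tol=1e-4):
    """Alternating optimization for Lp-norm (p > 1) MKL in the primal.

    kernels: list of (n, n) base kernel matrices K_m; y: labels in {-1, +1}.
    Sketch only: squared hinge loss + L-BFGS for the SVM step, closed-form
    Lp-norm update for the kernel weights d.
    """
    M, n = len(kernels), len(y)
    d = np.full(M, M ** (-1.0 / p))     # feasible start: ||d||_p = 1
    beta = np.zeros(n)                  # kernel-expansion coefficients of f

    for _ in range(n_outer):
        K = sum(dm * Km for dm, Km in zip(d, kernels))  # combined kernel

        def obj(b):
            # primal objective: 0.5 b'Kb + C * sum_i max(0, 1 - y_i (Kb)_i)^2
            Kb = K @ b
            margins = np.maximum(0.0, 1.0 - y * Kb)
            value = 0.5 * b @ Kb + C * np.sum(margins ** 2)
            # gradient; inactive points have margin 0, so the mask is implicit
            grad = Kb - 2.0 * C * (K @ (y * margins))
            return value, grad

        # quasi-Newton SVM step, warm-started from the previous beta
        beta = minimize(obj, beta, jac=True, method="L-BFGS-B").x

        # block norms ||w_m||^2 = d_m^2 * beta' K_m beta, then the standard
        # closed-form Lp-norm update, projected back onto ||d||_p = 1
        norms2 = np.array([dm ** 2 * (beta @ Km @ beta)
                           for dm, Km in zip(d, kernels)])
        d_new = norms2 ** (1.0 / (p + 1.0))
        d_new /= np.linalg.norm(d_new, ord=p) + 1e-12
        if np.linalg.norm(d_new - d) < tol:
            return beta, d_new
        d = d_new
    return beta, d
```

Warm-starting beta across outer iterations is what keeps the alternation cheap: each SVM solve begins near the previous solution, so the quasi-Newton step typically needs few iterations once the kernel weights stabilize.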
Keywords
Multiple kernel learning, Quasi-Newton method, Alternating optimization