Laplacian Welsch Regularization For Robust Semi-Supervised Dictionary Learning

Intelligence Science and Big Data Engineering: Big Data and Machine Learning, Pt. II (2019)

Abstract
Semi-supervised dictionary learning aims to find a suitable dictionary by utilizing limited labeled examples and massive unlabeled examples, so that any input can be sparsely reconstructed by the atoms in a proper way. However, existing algorithms suffer from large reconstruction error in the presence of outliers. To enhance the robustness of existing methods, this paper introduces an upper-bounded, smooth, and nonconvex Welsch loss, which constrains the adverse effect brought by outliers. Besides, we adopt the Laplacian regularizer to enforce that similar examples share similar reconstruction coefficients. By combining the Laplacian regularizer and the Welsch loss into a unified framework, we propose a novel semi-supervised dictionary learning algorithm termed "Laplacian Welsch Regularization" (LWR). To handle the model non-convexity caused by the Welsch loss, we adopt the Half-Quadratic (HQ) optimization algorithm to solve the model efficiently. Experimental results on various real-world datasets show that LWR is robust to outliers and achieves top-level results compared with existing algorithms.
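The two ingredients named in the abstract have compact standard forms: the Welsch loss saturates for large residuals (so outliers contribute at most a fixed cost), its Half-Quadratic reformulation yields a per-example weight that turns each step into weighted least squares, and the Laplacian regularizer tr(A L Aᵀ) penalizes coefficient differences between similar examples. The following sketch illustrates these three pieces; the function names, the bandwidth value, and the use of an unnormalized graph Laplacian are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

SIGMA = 1.0  # assumed Welsch kernel bandwidth; the paper may tune this

def welsch_loss(residual, sigma=SIGMA):
    """Welsch loss: smooth, nonconvex, and upper-bounded by sigma**2 / 2,
    so an arbitrarily large residual (an outlier) adds at most the bound."""
    return (sigma ** 2 / 2.0) * (1.0 - np.exp(-residual ** 2 / sigma ** 2))

def hq_weight(residual, sigma=SIGMA):
    """Half-Quadratic auxiliary weight in (0, 1]: with this weight fixed,
    minimizing the HQ surrogate reduces to a weighted least-squares step;
    outliers (large residuals) receive weights near zero."""
    return np.exp(-residual ** 2 / sigma ** 2)

def laplacian_reg(A, W):
    """Laplacian regularizer tr(A L A^T) for a coefficient matrix A whose
    i-th column is the sparse code of example i, and an affinity matrix W.
    Equals 0.5 * sum_ij W_ij * ||A[:, i] - A[:, j]||^2, so similar examples
    (large W_ij) are pushed to share similar reconstruction coefficients."""
    L = np.diag(W.sum(axis=1)) - W  # unnormalized graph Laplacian (assumed)
    return np.trace(A @ L @ A.T)
```

In an HQ iteration, one would alternate between computing `hq_weight` for each example's reconstruction residual and re-solving the resulting weighted sparse-coding and dictionary-update subproblems, which is why the nonconvex model can be optimized efficiently.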
Keywords
Semi-supervised dictionary learning, Welsch loss, Half-Quadratic optimization