Robust unsupervised feature selection by nonnegative sparse subspace learning

Neurocomputing (2019)

Cited by 24 | Views 20
Abstract
Sparse subspace learning has been demonstrated to be effective in data mining and machine learning. In this paper, we cast unsupervised feature selection as a matrix factorization problem from the viewpoint of sparse subspace learning. By minimizing the reconstruction residual, the learned feature weight matrix, subject to the l2,1-norm and non-negativity constraints, not only removes irrelevant features but also captures the underlying low-dimensional structure of the data points. Meanwhile, to enhance the model's robustness, an l1-norm error function is used to resist outliers and sparse noise. An efficient iterative algorithm is introduced to optimize this non-convex and non-smooth objective function, and a proof of its convergence is given. Although there is a subtraction term in our multiplicative update rule, we validate that the updates remain non-negative. The superiority of our model is demonstrated by comparative experiments on various datasets, both clean and maliciously polluted.
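The abstract describes a non-negative, l2,1-regularized factorization optimized by multiplicative updates, with features ranked by the rows of the learned weight matrix. The sketch below is a simplified, illustrative version only: it uses a Frobenius reconstruction loss rather than the paper's robust l1 error, and the factorization form X ≈ WH, the update rules, and all function and parameter names are assumptions, not the authors' algorithm.

```python
import numpy as np

def nmf_l21_feature_select(X, k, lam=0.1, n_iter=200, eps=1e-10, seed=0):
    """Illustrative sketch (not the paper's method): minimize
    ||X - W H||_F^2 + lam * ||W||_{2,1}  s.t.  W >= 0, H >= 0,
    then rank features by the l2 norms of the rows of W."""
    rng = np.random.default_rng(seed)
    d, n = X.shape          # d features, n samples; X assumed non-negative
    W = rng.random((d, k))  # feature weight matrix
    H = rng.random((k, n))  # latent representation
    for _ in range(n_iter):
        # Diagonal reweighting of the l2,1 term: D_ii = 1 / (2 ||W_i||_2).
        d_diag = 1.0 / (2.0 * np.sqrt((W ** 2).sum(axis=1)) + eps)
        # Multiplicative updates keep both factors non-negative,
        # since every factor in the ratio is non-negative.
        W *= (X @ H.T) / (W @ (H @ H.T) + lam * d_diag[:, None] * W + eps)
        H *= (W.T @ X) / ((W.T @ W) @ H + eps)
    scores = np.sqrt((W ** 2).sum(axis=1))  # row norms = feature importance
    return W, H, scores
```

With the l2,1 penalty, whole rows of W are driven toward zero, so low-scoring rows mark features that can be discarded; the paper additionally replaces the Frobenius loss with an l1 error to tolerate outliers and sparse noise.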
Keywords
Subspace learning, Non-negative matrix factorization, Unsupervised feature selection