Dimensionality reduction: An interpretation from manifold regularization perspective

Information Sciences (2014)

Abstract
In this paper, we propose to unify various dimensionality reduction algorithms by interpreting the Manifold Regularization (MR) framework in a new way. Although the MR framework was originally proposed for learning, we use it to give a unified treatment of many dimensionality reduction algorithms, from linear to nonlinear, supervised to unsupervised, and single-class to multi-class approaches. The framework also provides a general platform for designing new dimensionality reduction algorithms. It is expressed as a regularized fitting problem in a Reproducing Kernel Hilbert Space and consists of one error part and two regularization terms: a complexity term and a smoothness term. The error part measures the difference between the estimated (low-dimensional) and true (high-dimensional) data distributions, or between the estimated and targeted low-dimensional representations of the data; the complexity term measures the complexity of the feature mapping used for dimensionality reduction; and the smoothness term reflects the intrinsic structure of the data. Based on the framework, we propose a Manifold Regularized Kernel Least Squares (MR-KLS) method that efficiently learns an explicit feature mapping (in the semi-supervised sense). Experiments show that our approach is effective for out-of-sample extrapolation.
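For reference, the three components named in the abstract (error part, complexity term, smoothness term) correspond to the standard manifold regularization objective of Belkin, Niyogi and Sindhwani; the paper's dimensionality-reduction instantiation may differ in its loss and in using a vector-valued mapping, so the following is only an illustrative form:

$$
f^{*} \;=\; \arg\min_{f \in \mathcal{H}_K}\; \frac{1}{l}\sum_{i=1}^{l} V\!\left(x_i, y_i, f\right) \;+\; \gamma_A \,\lVert f \rVert_K^2 \;+\; \gamma_I \,\mathbf{f}^{\top} L\, \mathbf{f},
$$

where $V$ is the error (fitting) term, $\lVert f \rVert_K^2$ is the RKHS-norm complexity term, $L$ is a graph Laplacian built on all labeled and unlabeled samples, $\mathbf{f} = \bigl(f(x_1), \ldots, f(x_{l+u})\bigr)^{\top}$, and $\gamma_A, \gamma_I$ balance the two regularizers.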
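The paper's MR-KLS algorithm is not reproduced here. As a minimal NumPy sketch of the general idea, assuming an RBF kernel, a k-NN graph Laplacian, and a Laplacian-RLS-style closed-form solve (all function names and parameter choices below are illustrative assumptions, not the authors' method), a manifold-regularized kernel least squares fit yields an explicit mapping that extrapolates to out-of-sample points:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def knn_laplacian(X, k=5, gamma=1.0):
    """Unnormalized graph Laplacian of a symmetrized k-NN graph with RBF weights."""
    W = rbf_kernel(X, X, gamma)
    np.fill_diagonal(W, 0.0)
    idx = np.argsort(W, axis=1)[:, :-k]      # indices of all but the k largest weights per row
    np.put_along_axis(W, idx, 0.0, axis=1)   # keep the k largest, zero the rest
    W = np.maximum(W, W.T)                   # symmetrize
    return np.diag(W.sum(1)) - W

def fit_mr_kls(X, Y_target, n_labeled, gamma_A=1e-2, gamma_I=1e-1, k=5, g=1.0):
    """
    Learn expansion coefficients alpha of an explicit mapping
        f(x) = K(x, X) @ alpha
    by least-squares fitting of target low-dimensional coordinates on the
    first `n_labeled` points, regularized by the RKHS norm (complexity term)
    and by the graph Laplacian over all points (smoothness term).
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, g)                              # n x n Gram matrix
    L = knn_laplacian(X, k, g)                           # manifold smoothness term
    J = np.zeros((n, n)); J[:n_labeled, :n_labeled] = np.eye(n_labeled)
    Y = np.zeros((n, Y_target.shape[1])); Y[:n_labeled] = Y_target
    # Closed-form minimizer of
    #   ||J (K alpha - Y)||^2 + gamma_A tr(alpha^T K alpha)
    #                         + gamma_I tr(alpha^T K L K alpha)
    A = J @ K + gamma_A * np.eye(n) + gamma_I * L @ K
    return np.linalg.solve(A, Y)

def embed(X_new, X_train, alpha, g=1.0):
    """Out-of-sample extrapolation: map unseen points with the learned mapping."""
    return rbf_kernel(X_new, X_train, g) @ alpha
```

Because the learned mapping is explicit (a kernel expansion over the training points), `embed` handles previously unseen points directly, which is the semi-supervised, out-of-sample behavior the abstract highlights.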
Keywords
Dimensionality reduction, Manifold regularization, Feature mapping, Manifold learning, Out-of-sample extrapolation