PCA Using Graph Total Variation

2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) Proceedings (2016)

Cited by 28
Abstract
Mining useful clusters from high dimensional data has received significant attention from the signal processing and machine learning communities in recent years. Linear and non-linear dimensionality reduction have played an important role in overcoming the curse of dimensionality. However, such methods are often accompanied by problems such as high computational complexity (usually associated with nuclear norm minimization), non-convexity (for matrix factorization methods), or susceptibility to gross corruptions in the data. In this paper we propose a convex, robust, scalable and efficient Principal Component Analysis (PCA) based method to approximate the low-rank representation of high dimensional datasets via a two-way graph regularization scheme. Compared to exact recovery methods, our method is approximate, in that it enforces a piecewise constant assumption on the samples using a graph total variation and a piecewise smoothness assumption on the features using a graph Tikhonov regularization. Furthermore, it retrieves the low-rank representation in a time that is linear in the number of data samples. Clustering experiments on 3 benchmark datasets with different types of corruptions show that our proposed model outperforms 7 state-of-the-art dimensionality reduction models.
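Based only on the description above, one plausible form of such an objective is sketched below as a minimal illustration; the data matrix $Y$, the low-rank estimate $U$, the sample graph $(E_s, w_{ij})$, the feature-graph Laplacian $L_f$, and the weights $\gamma_1, \gamma_2$ are assumed notation rather than the paper's own:

$$
\min_{U \in \mathbb{R}^{p \times n}} \; \|Y - U\|_1
\;+\; \gamma_1 \sum_{(i,j)\in E_s} w_{ij}\,\|u_i - u_j\|_1
\;+\; \gamma_2 \,\operatorname{tr}\!\big(U^{\top} L_f\, U\big),
$$

where $Y \in \mathbb{R}^{p \times n}$ stacks $n$ samples of dimension $p$, $u_i$ denotes the $i$-th column (sample) of $U$, $E_s$ and $w_{ij}$ are the edges and weights of a graph between samples, and $L_f$ is the Laplacian of a graph between features. The $\ell_1$ fidelity accounts for gross corruptions, the graph total-variation term promotes piecewise-constant behavior across connected samples, and the Tikhonov trace term promotes smoothness across connected features; an objective of this form can be minimized with proximal splitting methods at a per-iteration cost that is linear in the number of samples.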
Keywords
PCA,graph total variation,low-rank feature extraction,clustering