Latent Subspace Representation For Multiclass Classification

PRICAI 2018: Trends in Artificial Intelligence, Part I (2018)

Abstract
Self-representation-based subspace representation has proven effective in clustering tasks, where the key assumption is that the data lie in a union of multiple subspaces and can therefore be reconstructed from the data themselves. Owing to this self-representation, the subspace representation matrix is ideally block-diagonal. The block-diagonal structure reveals the true segmentation of the data, which benefits multiclass classification. In this paper, we propose a Latent Subspace Representation for Multiclass Classification (LSRMC). With the help of a learned projection, our method exploits the subspace representation in a low-dimensional latent subspace, which further improves the quality of the representation. We learn the projection, the subspace representation, and the classifier in a unified model, and solve the resulting problem efficiently using the Augmented Lagrange Multiplier method with Alternating Direction Minimization. Experiments on benchmark datasets demonstrate that our approach outperforms state-of-the-art multiclass classification methods.
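The self-representation idea summarized above can be illustrated with a minimal sketch (this is not the paper's LSRMC model, which additionally learns a projection and a classifier). A ridge-regularized self-representation, min_Z ||X - XZ||_F^2 + λ||Z||_F^2, has the closed-form solution Z = (XᵀX + λI)⁻¹XᵀX; for data drawn from independent subspaces, Z tends toward the block-diagonal structure the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_representation(X, lam=0.1):
    """Columns of X are samples; returns the representation matrix Z
    solving min_Z ||X - X Z||_F^2 + lam * ||Z||_F^2 in closed form."""
    n = X.shape[1]
    G = X.T @ X
    return np.linalg.solve(G + lam * np.eye(n), G)

# Toy data: two independent 2-D subspaces inside R^10, 20 points each.
U1 = rng.standard_normal((10, 2))
U2 = rng.standard_normal((10, 2))
X = np.hstack([U1 @ rng.standard_normal((2, 20)),
               U2 @ rng.standard_normal((2, 20))])

Z = self_representation(X)
A = np.abs(Z)
# Average magnitude inside the two diagonal blocks vs. the off-diagonal
# blocks: same-subspace weights should dominate (block-diagonal tendency).
within = (A[:20, :20].mean() + A[20:, 20:].mean()) / 2
cross  = (A[:20, 20:].mean() + A[20:, :20].mean()) / 2
print(within, cross)
```

As λ → 0 this recovers the minimum-Frobenius-norm solution Z = X⁺X, which is exactly block-diagonal for independent subspaces; the hypothetical `within`/`cross` statistics above simply make that tendency measurable.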
Keywords
Subspace representation, Latent space, Multiclass classification