From Affine Rank Minimization Solution to Sparse Modeling

2017 IEEE Winter Conference on Applications of Computer Vision (WACV)

Abstract
Compressed sensing is a simple and efficient technique with a number of applications in signal processing and machine learning. In machine learning it provides answers to questions such as: "under what conditions is the sparse representation of data efficient?", "when is learning a large-margin classifier directly in the compressed domain possible?", and "why does a large-margin classifier learn more effectively if the data is sparse?". This work tackles the problem of feature representation in the context of sparsity and affine rank minimization by leveraging compressed sensing from the learning perspective in order to answer these questions. We show that, for a full-rank signal, the high-dimensional sparse representation of data is efficient because, from the classifier's viewpoint, such a representation is in fact a low-dimensional problem. We provide practical bounds on the linear classifier to investigate the relationship between the SVM classifier in the high-dimensional and compressed domains, and show that for high-dimensional sparse signals, when the bounds are tight, learning directly in the compressed domain is possible.
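
As a rough illustration of the compressed-domain learning setting the abstract describes (not the paper's actual method or experiments), the sketch below generates synthetic k-sparse signals, projects them with a random Gaussian measurement matrix, and compares a linear SVM trained in the original high-dimensional domain with one trained in the compressed domain. All data, dimensions, and parameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic k-sparse signals in an n-dimensional ambient space,
# labeled by a random linear separator (illustrative data only).
n, k, m, n_samples = 1000, 20, 100, 500
X = np.zeros((n_samples, n))
for i in range(n_samples):
    support = rng.choice(n, size=k, replace=False)
    X[i, support] = rng.normal(size=k)
w_true = rng.normal(size=n)
y = np.sign(X @ w_true)

# Random Gaussian measurement matrix A (m << n), the standard
# compressed-sensing projection into an m-dimensional space.
A = rng.normal(size=(m, n)) / np.sqrt(m)
X_compressed = X @ A.T

# Train a linear SVM in the high-dimensional sparse domain and
# another in the compressed domain, then compare test accuracy.
X_tr, X_te, Xc_tr, Xc_te, y_tr, y_te = train_test_split(
    X, X_compressed, y, test_size=0.3, random_state=0)

svm_sparse = LinearSVC(C=1.0).fit(X_tr, y_tr)
svm_compressed = LinearSVC(C=1.0).fit(Xc_tr, y_tr)

print("accuracy (sparse domain):    ", svm_sparse.score(X_te, y_te))
print("accuracy (compressed domain):", svm_compressed.score(Xc_te, y_te))
```

If the margin is preserved under the random projection, the two accuracies should be close, which is the kind of relationship between the high-dimensional and compressed-domain classifiers that the paper's bounds characterize.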
Keywords
affine rank minimization solution, sparse modeling, feature representation, compressed sensing, high dimensional sparse representation, linear classifier, SVM classifier