Sparse-coded net model and applications

2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP)

Abstract
As an unsupervised learning method, sparse coding can discover high-level representations for an input in a large variety of learning problems. Under semi-supervised settings, sparse coding is used to extract features for a supervised task such as classification. While sparse representations learned from unlabeled data independently of the supervised task perform well, we argue that sparse coding should also be built as a holistic learning unit that optimizes the supervised task objectives more explicitly. In this paper, we propose the sparse-coded net, a feedforward model that integrates sparse coding and task-driven output layers, and describe its training methods in detail. After pretraining a sparse-coded net via semi-supervised learning, we optimize its task-specific performance with a novel backpropagation algorithm that can traverse nonlinear feature pooling operators to update the dictionary. Thus, the sparse-coded net can be applied to supervised dictionary learning. We evaluate the sparse-coded net on classification problems in sound, image, and text data. The results confirm a significant improvement over semi-supervised learning as well as superior classification performance against deep stacked-autoencoder neural networks and GMM-SVM pipelines in small- to medium-scale settings.
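The abstract describes a pipeline of a learned dictionary, a differentiable sparse coding step, a nonlinear pooling operator, and a task-driven output layer, all updated jointly by backpropagation. The sketch below is one plausible minimal instantiation of that idea in PyTorch, not the authors' implementation: it unrolls a fixed number of ISTA iterations so the sparse codes remain differentiable with respect to the dictionary D, uses an absolute-value nonlinearity as a stand-in for the paper's pooling operators, and attaches a softmax output layer. All layer sizes, the ISTA step count, and the sparsity weight lam are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class SparseCodedNet(torch.nn.Module):
    """Hypothetical sketch of a sparse-coded net: unrolled-ISTA sparse
    coding followed by a nonlinear pooling stand-in and a linear classifier.
    Because the ISTA iterations are ordinary differentiable ops, a task loss
    can backpropagate through the coding step and update the dictionary D
    (supervised dictionary learning)."""

    def __init__(self, in_dim=64, n_atoms=128, n_classes=10,
                 lam=0.1, n_ista=20):
        super().__init__()
        # Dictionary D (columns are atoms), learned end to end.
        self.D = torch.nn.Parameter(0.1 * torch.randn(in_dim, n_atoms))
        self.out = torch.nn.Linear(n_atoms, n_classes)  # task-driven output layer
        self.lam, self.n_ista = lam, n_ista

    def encode(self, x):
        # Unrolled ISTA for  min_a 0.5 * ||x - D a||^2 + lam * ||a||_1.
        D = self.D / (self.D.norm(dim=0, keepdim=True) + 1e-8)  # unit-norm atoms
        step = 1.0 / (D.T @ D).norm().item()  # crude Lipschitz-based step size
        a = x.new_zeros(x.shape[0], D.shape[1])
        for _ in range(self.n_ista):
            grad = (a @ D.T - x) @ D                       # grad of quadratic term
            a = F.softshrink(a - step * grad, self.lam * step)  # soft-thresholding
        return a

    def forward(self, x):
        # abs() is a simple nonlinear pooling stand-in on the sparse codes.
        return self.out(self.encode(x).abs())

# Usage sketch: pretrain D unsupervised (e.g., on reconstruction loss),
# then fine-tune everything on the supervised objective as below.
net = SparseCodedNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
opt.zero_grad()
loss = F.cross_entropy(net(x), y)  # gradients flow through ISTA into D
loss.backward()
opt.step()
```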
Keywords
sparse-coded net model, unsupervised learning, high-level representations, feature extraction, sparse representations, holistic learning unit, feedforward model, task-driven output layers, semi-supervised learning, backpropagation algorithm, nonlinear feature pooling operators, supervised dictionary learning, deep stacked-autoencoder neural network, GMM-SVM pipelines