Unsupervised Pre-Training Speeds up the Search for Good Features: An Analysis of a Simplified Model of Neural Network Learning

Semantic Scholar (2014)

Abstract
While unsupervised pre-training has been shown empirically to improve supervised learning of neural networks, how it does so is still not well understood. We suggest that unsupervised pre-training helps supervised learning by speeding up the search for good features. We show that, for highly structured input distributions, unsupervised pre-training can allow supervised learning algorithms to achieve higher accuracy by letting them search through a larger number of features in a given amount of time. Searching through more features can improve the accuracy of the learned predictor by lowering its bias and possibly also its variance. In particular, for a fixed k and for the feature set consisting of all k-conjunctions, we show theoretically and empirically that changing to a new representation based only on unsupervised data allows us to search through many more features in a given amount of time, which in turn lowers the bias of the learner. However, we also show theoretically and empirically that this decrease in bias often comes at the cost of a large increase in variance. We are currently working to address this issue of increased variance.
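To make the feature-search setting concrete, the following is a minimal sketch, not the paper's code, of exhaustive search over k-conjunction features. The function name `best_k_conjunction`, the synthetic data, and the noise level are illustrative assumptions; the point is that the search cost scales with the number of candidate conjunctions, so a representation with fewer, more informative features (e.g. one learned from unlabeled data) lets a fixed time budget cover a richer hypothesis class and thereby lowers bias.

```python
# Sketch of brute-force search over all k-conjunctions (negated literals
# allowed) of n binary inputs; hypothetical code, not the paper's method.
from itertools import combinations, product
import numpy as np

def best_k_conjunction(X, y, k):
    """Return the k-conjunction with the lowest empirical 0/1 error.

    The search evaluates C(n, k) * 2^k hypotheses, so shrinking n (or
    making individual features more informative) via unsupervised
    pre-training directly enlarges the k reachable in a fixed time budget.
    """
    n = X.shape[1]
    best_err, best_hyp = 1.0, None
    for idx in combinations(range(n), k):        # choose k variables
        cols = X[:, list(idx)]
        for signs in product((0, 1), repeat=k):  # choose which are negated
            # Conjunction is true iff every chosen column matches its sign.
            pred = np.all(cols == np.array(signs), axis=1)
            err = np.mean(pred != y)
            if err < best_err:
                best_err, best_hyp = err, (idx, signs)
    return best_hyp, best_err

# Toy usage: the true label is the conjunction (x0 AND NOT x2), plus noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 8))
y = (X[:, 0] == 1) & (X[:, 2] == 0)
y = y ^ (rng.random(200) < 0.05)                 # flip 5% of labels
hyp, err = best_k_conjunction(X, y, k=2)
print(hyp, err)
```

The variance trade-off the abstract mentions shows up here too: enlarging the searched hypothesis class makes it more likely that some conjunction fits the training noise by chance, so the lowest empirical error becomes an increasingly optimistic estimate of true error.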