Contrastive Divergence Learning May Diverge When Training Restricted Boltzmann Machines

Frontiers in Computational Neuroscience (2009)

Abstract
Understanding and modeling how brains learn higher-level representations from sensory input is one of the key challenges in computational neuroscience and machine learning. Layered generative models such as deep belief networks (DBNs) are promising for unsupervised learning of such representations, and new algorithms that operate in a layer-wise fashion make learning these models computationally tractable [1-5]. Restricted Boltzmann Machines (RBMs) are the typical building blocks for DBN layers. They are undirected graphical models, and their structure is a bipartite graph connecting input (visible) and hidden neurons. Training large undirected graphical models by likelihood maximization in general involves averages over an exponential number of terms, and obtaining unbiased estimates of these averages by Markov chain Monte Carlo methods typically requires many sampling steps. However …
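The contrastive divergence (CD) learning named in the title sidesteps the long Markov chains the abstract alludes to by running only a few Gibbs sampling steps from the data. As a rough illustration only, the sketch below shows a CD-1 update for a binary RBM in NumPy; the function name cd1_update, the parameter names, and the learning rate are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v0, lr=0.1):
    """One CD-1 update for a binary RBM (illustrative sketch).

    W : (n_visible, n_hidden) weight matrix
    b : (n_visible,) visible biases
    c : (n_hidden,) hidden biases
    v0: (batch, n_visible) batch of binary training vectors
    """
    # Positive phase: hidden probabilities given the data
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)

    # One Gibbs step: reconstruct visibles, then hidden probabilities
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)

    # CD-1 gradient approximation: data statistics minus
    # reconstruction statistics (instead of exact model statistics)
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy usage (hypothetical data): 3 visible units, 2 hidden units
W = 0.01 * rng.standard_normal((3, 2))
b = np.zeros(3)
c = np.zeros(2)
data = np.array([[1, 0, 1], [1, 1, 0]], dtype=float)
for _ in range(100):
    W, b, c = cd1_update(W, b, c, data)
```

Because the few-step Gibbs statistics are a biased substitute for the exact model expectation, the resulting update is not the true likelihood gradient; the paper's point is that this bias can make the procedure diverge rather than merely converge to a slightly wrong solution.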
Keywords
Boltzmann machine