Continuous Neural Networks

AISTATS (2007)

Cited by 64 | 35 views
Abstract
This article extends neural networks to the case of an uncountable number of hidden units, in several ways. In the first approach proposed, a finite parametrization is possible, allowing gradient-based learning. While having the same number of parameters as an ordinary neural network, its internal structure suggests that it can represent some smooth functions much more compactly. Under mild assumptions, we also find better error bounds than with ordinary neural networks. Furthermore, this parametrization may help reduce the problem of saturation of the neurons. In a second approach, the input-to-hidden weights are fully non-parametric, yielding a kernel machine for which we demonstrate a simple kernel formula. Interestingly, the resulting kernel machine can be made hyperparameter-free and still generalizes in spite of an absence of explicit regularization.
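Both ideas admit a simple numerical illustration. The sketch below is an assumption-laden toy, not the paper's parametrization: it approximates a continuous hidden layer f(x) = b + ∫ w(u) g(V(u)·x + β(u)) du by Monte Carlo sampling of the hidden index u, with g = tanh and with V, β, w drawn at random rather than learned as in the paper. The same samples also estimate the kernel k(x, y) = E_u[g(V(u)·x + β(u)) g(V(u)·y + β(u))] induced by integrating out the hidden units in the generic infinite-network sense; this is an illustration of the construction, not the closed-form kernel formula the paper derives.

    import numpy as np

    rng = np.random.default_rng(0)

    def continuous_layer_mc(x, V, beta, w):
        # Monte Carlo estimate of f(x) = int w(u) * tanh(V(u).x + beta(u)) du,
        # with the integral over u replaced by an average over sampled rows.
        return float(np.mean(w * np.tanh(V @ x + beta)))

    def mc_kernel(x, y, V, beta):
        # Kernel obtained by averaging hidden-unit activations over sampled u:
        # k(x, y) ~= E_u[tanh(V(u).x + beta(u)) * tanh(V(u).y + beta(u))].
        return float(np.mean(np.tanh(V @ x + beta) * np.tanh(V @ y + beta)))

    d, n = 5, 100_000
    V = rng.normal(size=(n, d))   # sampled input-to-hidden weights V(u)
    beta = rng.normal(size=n)     # sampled hidden biases beta(u)
    w = rng.normal(size=n)        # sampled output weights w(u)
    x, y = rng.normal(size=d), rng.normal(size=d)
    print(continuous_layer_mc(x, V, beta, w))
    print(mc_kernel(x, y, V, beta))

Increasing n only reduces the Monte Carlo variance of these estimates; the paper's contribution is to avoid sampling altogether, either through a finite parametrization of the functions V, β, w or through an explicit kernel formula.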
Keywords
neural network