A survey of randomized algorithms for training neural networks.

Inf. Sci. (2016)

Abstract
As a powerful tool for data regression and classification, neural networks have received considerable attention from researchers in fields such as machine learning, statistics, and computer vision. There exists a large body of work on network training, most of which tunes the parameters iteratively. Such methods often suffer from local minima and slow convergence. It has been shown that randomization-based training methods can significantly boost the performance or efficiency of neural networks. Among these methods, most approaches use randomization either to change the data distributions and/or to fix a part of the parameters or network configurations. This article presents a comprehensive survey of the earliest work and recent advances, as well as some suggestions for future research.
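One family of methods the abstract alludes to fixes part of the network's parameters at random and learns only the remainder. A minimal sketch of that idea, assuming a single-hidden-layer network whose input weights and biases are drawn at random and kept fixed while the output weights are solved by ridge regression (function names and hyperparameters below are illustrative, not taken from the survey):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_random_hidden_layer(X, y, n_hidden=200, reg=1e-2):
    """Learn only the output weights of a network with random, fixed hidden weights."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))   # random input-to-hidden weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    # Closed-form ridge-regression solution for the hidden-to-output weights.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression example.
X = rng.uniform(-1, 1, size=(500, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
W, b, beta = fit_random_hidden_layer(X, y)
print("train MSE:", np.mean((predict(X, W, b, beta) - y) ** 2))
```

Because only a linear least-squares problem is solved, training avoids iterative gradient descent and its local-minima and slow-convergence issues mentioned above, at the cost of relying on a sufficiently rich random hidden representation.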
Keywords
Randomized neural networks,Recurrent neural networks,Convolutional neural networks,Deep learning