Fast learning algorithm to improve performance of Quickprop

Electronics Letters (2012)

Abstract
Quickprop is one of the most popular fast learning algorithms for training feed-forward neural networks. Its learning rate is fast; however, it is still limited by the gradient of the backpropagation algorithm and is easily trapped in local minima. A new fast learning algorithm is proposed to overcome these two drawbacks. Performance investigations on different learning problems (applications) show that the new algorithm always converges, with a faster learning rate than Quickprop and other fast learning algorithms. The improvement in global convergence capability is especially large: in one learning problem, the global convergence rate increased from 4% to 100%.
Keywords
backpropagation, convergence, feedforward neural nets, gradient methods, minimisation, backpropagation algorithm gradient, fast learning algorithm, feedforward neural network training, global convergence, learning rate, local minimum, Quickprop performance improvement
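For context, the baseline the abstract compares against is Fahlman's standard Quickprop rule, which treats the error surface of each weight as a parabola fitted through the current and previous gradients and jumps toward the parabola's minimum. The sketch below illustrates that standard rule only; the paper's proposed algorithm is not specified in the abstract, so no attempt is made to reproduce it. The function name and the growth-factor value `mu=1.75` are illustrative choices, not taken from this paper.

```python
import numpy as np

def quickprop_step(w, grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One Quickprop update per weight (standard Fahlman-style rule).

    Fits a parabola through the current and previous error slopes and
    jumps toward its minimum:
        dw(t) = grad(t) / (prev_grad(t) - grad(t)) * dw(t-1)
    The jump is limited by the maximum growth factor mu; a plain
    gradient-descent step bootstraps weights with no previous step.
    """
    step = np.zeros_like(w)
    for i in range(w.size):
        if prev_step[i] != 0.0:
            denom = prev_grad[i] - grad[i]
            if denom != 0.0:
                step[i] = grad[i] / denom * prev_step[i]
            # cap the step at mu times the previous step's magnitude
            if abs(step[i]) > mu * abs(prev_step[i]):
                step[i] = mu * abs(prev_step[i]) * np.sign(step[i])
        else:
            step[i] = -lr * grad[i]  # bootstrap: ordinary gradient descent
    return w + step, step
```

On a simple quadratic error E(w) = w²/2 (so the slope is just w), the parabola fit is exact and the rule reaches the minimum in a handful of steps; the clipping by `mu` and the gradient-descent bootstrap are also why, as the abstract notes, Quickprop remains tied to the backpropagation gradient and can stall in local minima.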