A Hybrid Learning Algorithm For The Optimization Of Convolutional Neural Network

INTELLIGENT COMPUTING METHODOLOGIES, ICIC 2017, PT III(2017)

Abstract
Stochastic gradient descent (SGD) is a prevalent algorithm used by many researchers to optimize Convolutional Neural Networks (CNNs). However, it has several disadvantages, such as getting trapped in local optima and the vanishing gradient problem, that need to be overcome or mitigated. In this paper, we propose a hybrid learning algorithm that aims to tackle the aforementioned drawbacks by integrating particle swarm optimization (PSO) with SGD. To take advantage of the excellent global search capability of PSO, we introduce a velocity update formula that is combined with gradient descent to overcome these shortcomings. In addition, owing to the cooperation among particles, the proposed algorithm helps the convolutional neural network dampen overfitting and obtain better results. The German Traffic Sign Recognition Benchmark (GTSRB) is employed as the dataset to evaluate performance, and experimental results demonstrate that the proposed method outperforms the standard SGD and conjugate gradient (CG) based approaches.
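The hybrid scheme described above can be sketched as follows. This is a minimal illustration, not the paper's published formula: the function names, hyperparameter values, and the exact way the gradient term is folded into the PSO velocity update are all assumptions. The idea is that each particle's velocity blends the classic inertia/personal-best/global-best terms with a stochastic gradient step, so the swarm explores globally while still descending locally.

```python
import numpy as np

def hybrid_pso_sgd(loss, loss_grad, dim=2, n_particles=10, iters=200,
                   w=0.7, c1=1.5, c2=1.5, lr=0.05, seed=0):
    """Hypothetical PSO-SGD hybrid: PSO velocity update plus a
    gradient-descent term (a sketch, not the authors' exact method)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # per-particle best positions
    pbest_val = np.array([loss(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()          # global best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        grads = np.array([loss_grad(p) for p in x])
        # Standard PSO velocity update augmented with an SGD term:
        # inertia + cognitive pull + social pull - learning_rate * gradient
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x) - lr * grads
        x = x + v
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Toy usage: minimize a simple quadratic in place of a CNN loss surface.
quad_loss = lambda p: float(np.sum((p - 3.0) ** 2))
quad_grad = lambda p: 2.0 * (p - 3.0)
best, best_val = hybrid_pso_sgd(quad_loss, quad_grad)
```

On a real CNN, the "position" of each particle would be a full weight vector and `loss_grad` a minibatch gradient, which is where the cooperation among particles is claimed to help against overfitting.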
Keywords
Particle swarm optimization, Convolutional neural network, Hybrid learning algorithm, Stochastic gradient descent