Backpropagation Parallel Algorithm


Abstract
The supervised training of feedforward neural networks is often based on the error backpropagation algorithm. Our main purpose is to treat the successive layers of a feedforward neural network as the stages of a pipeline, which is used to improve the efficiency of the parallel algorithm. A simple placement rule is presented that takes advantage of the simultaneous execution of the calculations on each layer of the network. The analytic expressions show that the parallelization is efficient. Moreover, they indicate that the performance of this implementation is almost independent of the neural network architecture. Their simplicity makes it easy to predict the learning performance on a parallel machine for any neural network architecture. The experimental results agree with the analytical estimates.

The applications of artificial neural networks will be fully developed only when the massive parallelism of the architecture is exploited, either in dedicated electronic microcircuits or in simulators based on general-purpose parallel computers. The learning process of the backpropagation algorithm requires much time, and industrial applications need high-performance machines. The neural network architecture on which this algorithm operates can contain numerous neurons and synaptic connections, and it uses a training set containing numerous examples. High-speed computation can be achieved by partitioning the set of data and making simultaneous runs on each subset. Thus, a learning algorithm can be parallelized in two ways: by partitioning the neural network or by partitioning the training set.
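The training-set partitioning mentioned above rests on the fact that the batch loss gradient is a sum over examples, so each subset's gradients can be computed on a separate processor and then summed. A minimal sketch (not the paper's implementation; the two-layer tanh network, layer sizes, and squared-error loss are illustrative assumptions) demonstrating this equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny feedforward network: 4 inputs -> 5 hidden -> 1 output.
W1 = rng.standard_normal((4, 5))
W2 = rng.standard_normal((5, 1))

def grads(X, Y):
    """Backpropagation for the 2-layer net with tanh hidden units and
    squared-error loss summed over the examples in (X, Y)."""
    H = np.tanh(X @ W1)                  # forward: hidden activations
    out = H @ W2                         # forward: linear output
    d_out = 2.0 * (out - Y)              # backward: dLoss/d_out per example
    gW2 = H.T @ d_out                    # gradient w.r.t. W2 (sum over examples)
    d_h = (d_out @ W2.T) * (1.0 - H**2)  # backprop through tanh
    gW1 = X.T @ d_h                      # gradient w.r.t. W1 (sum over examples)
    return gW1, gW2

X = rng.standard_normal((12, 4))
Y = rng.standard_normal((12, 1))

# Full-batch gradients computed on one processor.
g1_full, g2_full = grads(X, Y)

# Partition the training set into 3 subsets (each could run on its own
# processor); summing the partial gradients recovers the full-batch gradient.
g1_sum = np.zeros_like(W1)
g2_sum = np.zeros_like(W2)
for Xs, Ys in zip(np.split(X, 3), np.split(Y, 3)):
    g1, g2 = grads(Xs, Ys)
    g1_sum += g1
    g2_sum += g2

print(np.allclose(g1_full, g1_sum), np.allclose(g2_full, g2_sum))
```

This covers only the data-partitioning route; the pipeline scheme that is the abstract's main focus instead partitions the network itself, assigning each layer's computation to a pipeline stage so that different training examples occupy different layers simultaneously.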