Parallel Training of Recurrent Neural Networks

semanticscholar (2016)

Abstract
We accelerated recurrent neural network (RNN) training on multi-core CPUs using Halide. Our implementation used data parallelism and parallel matrix multiplication (multi-threading plus SIMD vectorization), and achieved an approximately 39x speedup over sequential RNN training in Halide on a 61-core Xeon Phi CPU. We also compared our optimized Halide implementation against an optimized NumPy implementation of the same algorithm with batching on the Xeon Phi, and report our results.