Optimizing Reconfigurable Recurrent Neural Networks
2020 IEEE 28th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)(2020)
Abstract
This paper proposes a novel latency-hiding hardware architecture based on column-wise matrix-vector multiplication to eliminate data dependency, improving the throughput of RNN-based systems. In addition, a flexible checkerboard tiling strategy is introduced to support large weight matrices while exploiting both element-based and vector-based parallelism. These optimizations improve the exploitation of the available parallelism to increase run-time hardware utilization and boost inference throughput. Furthermore, a quantization scheme with fine-tuning is proposed to achieve high accuracy. Evaluation results show that the proposed architecture can enhance performance and energy efficiency with little accuracy loss. It achieves 1.05 to 3.35 times better performance and 1.22 to 3.92 times better hardware utilization than a state-of-the-art FPGA-based LSTM design, which shows that our approach contributes to high-performance FPGA-based LSTM systems.
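To illustrate the idea behind column-wise matrix-vector multiplication, the sketch below (a minimal NumPy model, not the paper's hardware implementation) computes y = W @ x by accumulating scaled columns. Each update needs only a single input element x[j], which is what allows a hardware pipeline to start accumulating before the full input vector is available, hiding the data-dependency latency the abstract refers to:

```python
import numpy as np

def mvm_columnwise(W, x):
    """Compute y = W @ x by accumulating one column per step.

    Each step uses only the scalar x[j] and column W[:, j], so in a
    hardware pipeline the accumulation can begin as soon as the first
    input element arrives, rather than waiting for the whole vector.
    """
    y = np.zeros(W.shape[0], dtype=W.dtype)
    for j in range(W.shape[1]):
        y += x[j] * W[:, j]  # rank-1 update with a single input element
    return y

# Usage: result matches the conventional row-wise product W @ x.
W = np.arange(12, dtype=np.float64).reshape(3, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = mvm_columnwise(W, x)
```

In an RNN, x would be the (partially computed) hidden-state vector, so overlapping its production with the column-wise accumulation removes the serial stall between time steps.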
Keywords
reconfigurable recurrent neural networks, latency-hiding hardware architecture, column-wise matrix-vector multiplication, data dependency, RNN models, weight matrices, element-based parallelism, vector-based parallelism, optimizations, run-time hardware utilization, inference throughput, energy efficiency, state-of-the-art FPGA-based LSTM design, high-performance FPGA-based LSTM systems, flexible checkerboard tiling