Recurrent Neural Networks With Column-Wise Matrix–Vector Multiplication on FPGAs

IEEE Transactions on Very Large Scale Integration (VLSI) Systems (2022)

Abstract
This article presents a reconfigurable accelerator for REcurrent Neural networks with fine-grained cOlumn-Wise matrix–vector multiplicatioN (RENOWN). We propose a novel latency-hiding architecture for recurrent neural network (RNN) acceleration that uses column-wise matrix–vector multiplication (MVM) instead of the state-of-the-art row-wise operation. This hardware (HW) architecture eliminates data dependencies to improve the throughput of RNN inference systems. In addition, we introduce a configurable checkerboard tiling strategy that accommodates large weight matrices while incorporating various configurations of element-based parallelism (EP) and vector-based parallelism (VP). These optimizations improve the exploitation of parallelism to increase HW utilization and enhance system throughput. Evaluation results show that our design achieves over 29.6 tera operations per second (TOPS), which is among the highest reported for field-programmable gate array (FPGA)-based RNN designs. Compared to state-of-the-art accelerators on FPGAs, our design achieves 3.7–14.8 times better performance and has the highest HW utilization.
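
The column-wise reordering is easiest to see in software. The following NumPy sketch is illustrative only (the function names and the streaming interface are ours, not the paper's HW datapath): it contrasts the row-wise traversal, in which each output element consumes the complete input vector, with the column-wise traversal, which consumes input elements one at a time as they become available and thus exposes the latency hiding the architecture exploits.

import numpy as np

def mvm_row_wise(W, x):
    # Row-wise MVM: y[i] = sum_j W[i, j] * x[j]. Each output element
    # needs the whole input vector, so for an RNN the product with
    # h_{t-1} cannot start until every element of h_{t-1} is ready.
    y = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            y[i] += W[i, j] * x[j]
    return y

def mvm_column_wise(W, x_elements):
    # Column-wise MVM: each arriving input element x[j] scales column j
    # of W and is accumulated into all outputs at once, so the product
    # overlaps with whatever produces x (e.g., the element-wise gate
    # computations of the previous RNN time step).
    y = np.zeros(W.shape[0])
    for j, xj in x_elements:       # elements may arrive one at a time
        y += W[:, j] * xj          # one full-column update per element
    return y

# Both traversal orders compute the same product.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
assert np.allclose(mvm_row_wise(W, x), W @ x)
assert np.allclose(mvm_column_wise(W, enumerate(x)), W @ x)

Roughly, the per-column update y += W[:, j] * xj would map to EP (parallel multiply–accumulate lanes within one column) and processing several columns concurrently would map to VP, with the checkerboard tiling partitioning W so both degrees of parallelism remain configurable for large matrices; this mapping is our reading of the abstract, not a detail it states.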
Keywords
Hardware accelerator, long short-term memory (LSTM), recurrent neural network (RNN)