Scalable Systolic Array Multiplier Optimized by Sparse Matrix.

RiMing Jia, Tu Xu, Yuchun Chang

ASICON (2021)

Abstract
Various artificial intelligence (AI) algorithms have been proposed in recent years, and the demand for computing power has increased accordingly. Matrix multiplication is a computing unit commonly used in AI calculations. This paper proposes a novel scalable systolic array multiplier optimized for sparse matrix multiplication, which has three characteristics compared with traditional matrix multipliers. 1. The multiplier is optimized for sparse matrix multiplication, which is achieved through the critical path. 2. Targeting high-dimensional convolutional neural network computation, the proposed multiplier is scalable: it can compute matrices whose dimensions are smaller than the array itself. Compared with a non-scalable multiplier, this improvement reduces power consumption when processing low-dimensional data. 3. A traditional systolic-array matrix multiplier has a pulsation relationship between columns, which introduces delay registers between adjacent columns; we therefore propose a new structure based on the traditional systolic array that removes this delay module. We designed a 4*4 matrix multiplier and deployed it on a PYNQ-Z7020 field-programmable gate array (FPGA). The results show that the proposed structure reduces calculation delay by 9.2% and slice-logic power consumption by 13.3% compared with the traditional multiplier.
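To make the dataflow concrete, the following is a minimal software sketch (not the authors' RTL design) of an output-stationary 4*4 systolic array matrix multiply in which multiply-accumulate operations with a zero operand are skipped, loosely modeling the sparse-matrix optimization described in the abstract. The function name systolic_matmul, the skewed schedule, and the skip counter are illustrative assumptions, not details taken from the paper.

```python
# Illustrative software model of a 4*4 output-stationary systolic array
# with zero-operand skipping; an assumption-based sketch, not the paper's design.
import numpy as np

N = 4  # array dimension (the paper's prototype is 4*4)

def systolic_matmul(A, B):
    """Multiply two N*N matrices by streaming A rows rightward and
    B columns downward through a grid of accumulating PEs."""
    C = np.zeros((N, N), dtype=A.dtype)
    skipped = 0
    # Skewed schedule: PE (i, j) consumes A[i, k] and B[k, j] at cycle i + j + k.
    for cycle in range(3 * N - 2):
        for i in range(N):
            for j in range(N):
                k = cycle - i - j
                if 0 <= k < N:
                    a, b = A[i, k], B[k, j]
                    if a == 0 or b == 0:   # sparse optimization: skip useless MACs
                        skipped += 1
                        continue
                    C[i, j] += a * b
    return C, skipped

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.integers(0, 4, size=(N, N))
    A[A < 2] = 0                       # make the left operand sparse
    B = rng.integers(1, 4, size=(N, N))
    C, skipped = systolic_matmul(A, B)
    assert np.array_equal(C, A @ B)    # matches the reference product
    print(f"skipped {skipped} of {N**3} MAC operations")
```

The assertion confirms the skewed schedule still produces the exact product; the skipped-MAC count only gestures at why zero-aware processing elements can save switching activity, not at the actual delay or power figures reported on the FPGA.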
Keywords
scalable systolic array multiplier,sparse matrix