EACNN: Efficient CNN Accelerator Utilizing Linear Approximation and Computation Reuse

ISCAS (2023)

Abstract
This paper proposes an efficient hardware accelerator, named EACNN, for Convolutional Neural Networks (CNNs). EACNN is an efficient CNN architecture based on the co-optimization of algorithms and hardware. The proposed approach relies on a linear approximation of the weights of pre-trained networks with a low loss of accuracy. Furthermore, a weight substitution and remapping technique uses the linear-approximation coefficients to replace the CNN weights. This causes weight values to repeat across different kernels and enables the reuse of CNN computations across output feature maps: the input activations corresponding to the same linear coefficient can be multiplied and accumulated first, and the result then reused to generate multiple output feature maps. This computational reuse reduces the number of multiplications, additions, and memory accesses, and is efficiently supported by a dedicated processing element in the proposed EACNN. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the proposed method eliminates around 61% of the multiplications in the network without significant loss of accuracy (< 3%). As a demonstration, a hardware accelerator based on EACNN was implemented on a Xilinx Artix-7 FPGA and achieved a 50% reduction in FPGA hardware resources.
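The reuse idea in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the paper's actual algorithm or hardware): it assumes kernel weights have already been substituted by a small shared codebook of linear-approximation coefficients, then accumulates the activations that map to the same coefficient before multiplying, so each kernel needs at most one multiplication per coefficient instead of one per weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 kernels of length 9 whose weights have been
# substituted by a small set of shared approximation coefficients.
coeffs = np.array([-0.5, 0.25, 1.0])             # assumed coefficient codebook
idx = rng.integers(0, len(coeffs), size=(4, 9))  # which coefficient each weight maps to
weights = coeffs[idx]                            # remapped kernel weights
x = rng.standard_normal(9)                       # one input-activation window

# Baseline dot products: 4 * 9 = 36 multiplications.
y_ref = weights @ x

# Computation reuse: accumulate activations sharing a coefficient first,
# then multiply once per coefficient (at most 3 multiplications per kernel).
y = np.zeros(4)
for k in range(4):
    for j, c in enumerate(coeffs):
        y[k] += c * x[idx[k] == j].sum()

assert np.allclose(y, y_ref)
```

When the same coefficient pattern recurs across kernels, the accumulated activation sums can additionally be shared between output feature maps, which is the cross-kernel reuse the abstract describes.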
Keywords
Deep neural network, hardware acceleration, computational reuse, approximate computing