FPGA-Based Accelerator for Rank-Enhanced and Highly-Pruned Block-Circulant Neural Networks

2023 Design, Automation & Test in Europe Conference & Exhibition (DATE 2023)

Abstract
Numerous network compression methods have been proposed to deploy deep neural networks on resource-constrained embedded systems. Among them, block-circulant matrix (BCM) compression is one of the most promising hardware-friendly methods for both acceleration and compression. However, it has several limitations: (i) limited representational capacity due to the structural characteristics of the circulant matrix, (ii) restrictions on the compression parameter, and (iii) the need for a specialized dataflow in BCM-compressed network accelerators. In this paper, a rank-enhanced and highly-pruned block-circulant matrix compression (RP-BCM) framework is proposed to overcome these limitations. RP-BCM comprises two stages: Hadamard-BCM and BCM-wise pruning. Moreover, a dedicated skip scheme is introduced into the processing element design to exploit high parallelism with BCM-wise sparsity. Furthermore, we propose a specialized dataflow for BCM-compressed networks on resource-constrained FPGAs. As a result, the proposed method achieves a 92.4% parameter reduction and a 77.3% FLOPs reduction for ResNet-50 on ImageNet. Moreover, the proposed hardware design achieves a 3.1x improvement in energy efficiency on the Xilinx PYNQ-Z2 FPGA board for ResNet-18 on ImageNet compared to a GPU.
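The BCM compression that the abstract builds on replaces dense weight blocks with circulant blocks: each k x k block is stored as a single length-k vector, and its matrix-vector product reduces to an FFT-based circular convolution. The sketch below illustrates this standard BCM layer computation in NumPy; the function names (circulant_block_matvec, bcm_layer) and the shapes are illustrative assumptions, not the paper's implementation, and it does not include the paper's Hadamard-BCM, BCM-wise pruning, or skip scheme.

```python
# Minimal sketch of a BCM-compressed fully-connected layer (assumed example,
# not the authors' code). Each k x k circulant block is defined by its first
# column, so a p*k x q*k weight matrix needs only p*q*k values instead of
# p*q*k*k dense weights.
import numpy as np

def circulant_block_matvec(w_col, x_block):
    """Multiply a k x k circulant block (defined by its first column w_col)
    with a length-k input block via FFT-based circular convolution."""
    return np.real(np.fft.ifft(np.fft.fft(w_col) * np.fft.fft(x_block)))

def bcm_layer(weights, x, k):
    """BCM-compressed layer: weights[i][j] is the length-k defining vector of
    the circulant block at block-row i, block-column j."""
    p = len(weights)      # number of block rows  (output dim = p * k)
    q = len(weights[0])   # number of block cols  (input dim  = q * k)
    y = np.zeros(p * k)
    for i in range(p):
        acc = np.zeros(k)
        for j in range(q):
            acc += circulant_block_matvec(weights[i][j], x[j * k:(j + 1) * k])
        y[i * k:(i + 1) * k] = acc
    return y

# Example: an 8 x 16 weight matrix stored as 2 x 4 circulant blocks with k = 4,
# i.e., 32 stored values instead of 128 dense weights.
k, p, q = 4, 2, 4
rng = np.random.default_rng(0)
weights = [[rng.standard_normal(k) for _ in range(q)] for _ in range(p)]
x = rng.standard_normal(q * k)
print(bcm_layer(weights, x, k).shape)  # (8,)
```

Note that a pruning scheme operating at the granularity of whole circulant blocks (as the abstract's "BCM-wise pruning" suggests) would simply skip the inner accumulation for pruned (i, j) block positions, which is what a hardware skip scheme can exploit.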
Keywords
Network Compression, Structured Pruning, CNN Accelerator, FPGA, Convolutional Neural Networks