Energy-efficient, high-performance, highly-compressed deep neural network design using block-circulant matrices.

ICCAD (2017)

Cited by 25 | Views 31

Abstract
Deep neural networks (DNNs) have emerged as the most powerful machine learning technique in numerous artificial intelligence applications. However, the large sizes of DNNs make them both computation- and memory-intensive, thereby limiting the performance of dedicated DNN hardware accelerators. In this paper, we propose a holistic framework for energy-efficient, high-performance, highly-compressed DNN hardware design. First, we propose block-circulant matrix-based DNN training and inference schemes, which theoretically guarantee Big-O complexity reductions in both the computational cost (from O(n²) to O(n log n)) and the storage requirement (from O(n²) to O(n)) of DNNs. Second, we carefully optimize the hardware architecture, especially the key fast Fourier transform (FFT) module, to improve the overall design in terms of energy efficiency, computation performance, and resource cost. Third, we propose a design flow for hardware-software co-optimization aimed at achieving a good balance between the test accuracy and the hardware performance of DNNs. Based on the proposed design flow, two block-circulant matrix-based DNNs on two different datasets are implemented and evaluated on FPGA. Fixed-point quantization and the proposed block-circulant matrix-based inference scheme enable the networks to achieve up to 3.5 TOPS computation performance and 3.69 TOPS/W energy efficiency, while reducing memory usage by 108×–116× with negligible accuracy degradation.
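The complexity reduction above comes from a standard property of circulant matrices: a circulant matrix-vector product is a circular convolution, which the FFT evaluates in O(n log n) time while storing only the first column (O(n) space). The sketch below illustrates this idea for a block-circulant weight matrix; it is a minimal NumPy illustration of the mathematical principle, not the paper's FPGA implementation, and the function names and block layout are my own for exposition.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by x.

    Uses the identity C @ x = IFFT(FFT(c) * FFT(x)), i.e. circular
    convolution: O(n log n) time and O(n) storage, versus O(n^2) for
    a dense matrix-vector product.
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def block_circulant_matvec(blocks, x, k):
    """Block-circulant matrix-vector product.

    blocks[i][j] holds the first column (length k) of the (i, j)-th
    circulant block; x is partitioned into segments of length k.
    Each k x k block costs O(k log k) instead of O(k^2).
    """
    p, q = len(blocks), len(blocks[0])   # block-rows, block-columns
    y = np.zeros(p * k)
    for i in range(p):
        for j in range(q):
            y[i*k:(i+1)*k] += circulant_matvec(
                blocks[i][j], x[j*k:(j+1)*k])
    return y
```

The block size k is the compression knob the abstract's design flow tunes: larger blocks give higher compression (k× fewer stored weights per block) at some cost in accuracy, which is why hardware-software co-optimization is needed to pick it.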
Keywords
energy-efficient high-performance DNN hardware design, artificial intelligence applications, block-circulant matrix-based inference scheme, fast Fourier transform, Big-O complexity reduction, fixed-point quantization, hardware-software co-optimization, design flow, computation performance, energy efficiency, hardware architecture, computational cost, inference schemes, block-circulant matrix, dedicated DNN accelerators, hardware performance, powerful machine learning technique, DNNs, deep neural networks, block-circulant matrices, highly-compressed deep neural network design