TinyADC: Peripheral Circuit-aware Weight Pruning Framework for Mixed-signal DNN Accelerators

DATE 2021

Abstract
As the number of weight parameters in deep neural networks (DNNs) continues to grow, the demand for ultra-efficient DNN accelerators has motivated research on non-traditional architectures built on emerging technologies. Resistive random-access memory (ReRAM) crossbars have been used to perform in-situ matrix-vector multiplication for DNNs, and DNN weight pruning techniques have been applied to ReRAM-based mixed-signal DNN accelerators to reduce weight storage and accelerate computation. However, existing works account for very few peripheral-circuit features, such as analog-to-digital converters (ADCs), during neural network design. Unfortunately, ADCs now dominate the power consumption and area cost of mixed-signal accelerators, and the large overhead of these peripheral circuits has not been addressed efficiently. To solve this problem, we propose TinyADC, a novel weight pruning framework for ReRAM-based mixed-signal DNN accelerators that effectively reduces the number of bits required for ADC resolution, and hence the overall area and power consumption of the accelerator, without introducing any computational inaccuracy. Compared to state-of-the-art pruning work on the ImageNet dataset, TinyADC achieves 3.5× power and 2.9× area reduction, respectively. TinyADC also improves the throughput per unit area and per watt (GOPs/(s·mm²) and GOPs/W) of a state-of-the-art architecture design by 29% and 40%, respectively.
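The core idea in the abstract is that the ADC resolution on a crossbar bitline is set by the worst-case analog partial sum, which scales with the number of rows holding nonzero weights in a column; pruning rows per column therefore lets the ADC use fewer bits. The following sketch illustrates that relationship under simple assumptions (1-bit DAC inputs and single-bit ReRAM cells); the function name and formula are illustrative, not the paper's exact model.

```python
import math

def required_adc_bits(active_rows, cell_bits=1, dac_bits=1):
    """Illustrative estimate of ADC resolution for one crossbar column.

    Assumption (not from the paper): the worst-case bitline partial sum is
    the product of the number of active (unpruned) rows, the maximum cell
    conductance level, and the maximum DAC input level; the ADC must
    distinguish every value from 0 up to that sum.
    """
    max_cell = (1 << cell_bits) - 1   # largest weight level per cell
    max_in = (1 << dac_bits) - 1      # largest input level per cycle
    max_sum = active_rows * max_cell * max_in
    # Number of bits needed to represent max_sum + 1 distinct levels.
    return max(1, math.ceil(math.log2(max_sum + 1)))

# Pruning a 128-row column down to 16 active rows lowers the required
# resolution from 8 bits to 5 bits under these assumptions.
print(required_adc_bits(128))  # dense column -> 8
print(required_adc_bits(16))   # pruned column -> 5
```

Since ADC power and area grow rapidly with resolution, even a few saved bits per converter translate into the large accelerator-level power and area reductions the abstract reports.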
Keywords
ReRAM-based mixed-signal DNN accelerators, peripheral circuits, neural network design, power consumption, mixed-signal accelerators, TinyADC, peripheral circuit-aware weight pruning framework, deep neural networks, ultra-efficient DNN accelerators, DNN weight pruning techniques, resistive random-access memory crossbar