Exploring the Granularity of Sparsity in Convolutional Neural Networks

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (2017)

Abstract
Sparsity helps reduce the computational complexity of DNNs by skipping multiplications by zero. The granularity of sparsity affects both the efficiency of the hardware architecture and the prediction accuracy. In this paper we quantitatively measure the accuracy-sparsity relationship at different granularities. Coarse-grained sparsity produces a more regular sparsity pattern, making hardware acceleration easier, and our experimental results show that coarse-grained sparsity has very little impact on the sparsity ratio attainable without loss of accuracy. Moreover, due to the index-saving effect, coarse-grained sparsity can achieve similar or even better compression rates than fine-grained sparsity at the same accuracy threshold. Our analysis, based on the framework of a recent sparse convolutional neural network (SCNN) accelerator, further demonstrates that it saves 30%-35% of memory references compared with fine-grained sparsity.
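The granularity contrast and the index-saving effect described above can be illustrated with a minimal magnitude-pruning sketch. This is not the paper's method, just a toy NumPy example on a random weight matrix: fine-grained pruning zeros individual weights (one stored index per surviving weight), while coarse-grained pruning zeros whole rows standing in for vectors or kernels (one shared index per surviving row).

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy weight matrix standing in for one layer's weights (illustrative only).
W = rng.normal(size=(64, 16))

def fine_grained_prune(w, sparsity):
    """Zero out the smallest-magnitude individual weights."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) >= thresh, w, 0.0)

def coarse_grained_prune(w, sparsity):
    """Zero out entire rows (a coarse grain) by smallest L2 norm."""
    norms = np.linalg.norm(w, axis=1)
    k = int(len(norms) * sparsity)
    keep = norms >= np.sort(norms)[k]
    return w * keep[:, None]

fine = fine_grained_prune(W, 0.5)
coarse = coarse_grained_prune(W, 0.5)

# Index overhead at the same 50% sparsity: fine-grained pruning needs one
# index per surviving weight, coarse-grained only one per surviving row,
# amortized over the 16 weights it contains -- the "index saving effect".
fine_indices = np.count_nonzero(fine)
coarse_indices = np.count_nonzero(np.linalg.norm(coarse, axis=1))
print(fine_indices, coarse_indices)  # fine needs 16x more indices here
```

At equal sparsity the coarse mask stores far fewer indices, which is why coarse-grained sparsity can match or beat fine-grained compression rates despite being a stricter constraint on which weights may be removed.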
Keywords
convolutional neural networks,computation complexity,accuracy-sparsity relationship,sparsity granularity,sparsity pattern,hardware acceleration,index saving effect,coarse-grained sparsity,compression rates,fine-grained sparsity,sparse convolutional neural network accelerator,SCNN accelerator,memory references