Evaluating Data Resilience in CNNs from an Approximate Memory Perspective

ACM Great Lakes Symposium on VLSI (2017)

Cited by 9 | Views 47
Abstract
Due to the large volumes of data that need to be processed, efficient memory access and data transmission are crucial for high-performance implementations of convolutional neural networks (CNNs). Approximate memory is a promising technique for achieving efficient memory access and data transmission in CNN hardware implementations. To assess the feasibility of applying approximate memory techniques, we propose a framework for the data resilience evaluation (DRE) of CNNs and verify its effectiveness on a suite of prevalent CNNs. Simulation results show that a high degree of data resilience exists in these networks. By scaling the bit-width of the first five dominant data subsets, the data volume can be reduced by 80.38% on average with a 2.69% loss in relative prediction accuracy. For approximate memory with random errors, all the synaptic weights can be stored in the approximate part when the error rate is below 10^-4, while 3 MSBs must be protected if the error rate is fixed at 10^-3. These results indicate a great potential for exploiting approximate memory techniques in CNN hardware design.
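The random-error model described in the abstract (unprotected bits stored in approximate memory flip independently at a fixed error rate, while a few MSBs are kept in reliable storage) can be sketched as follows. This is an illustrative model only; the function name, parameters, and defaults are assumptions, not taken from the paper.

```python
import random

def inject_bit_errors(word, width=8, error_rate=1e-3, protected_msbs=3, rng=None):
    """Flip each unprotected bit of a fixed-point word with probability error_rate.

    Illustrative sketch of approximate-memory random errors: the top
    `protected_msbs` bits are assumed to live in reliable memory and are
    never flipped; the remaining LSBs live in the approximate part.
    """
    rng = rng or random.Random()
    for bit in range(width - protected_msbs):  # only the approximate LSBs
        if rng.random() < error_rate:
            word ^= 1 << bit                   # flip this bit
    return word

# With error_rate=1.0 every unprotected bit flips, while the 3 MSBs of an
# 8-bit word are untouched: 0b10110101 -> 0b10101010.
w = inject_bit_errors(0b10110101, width=8, error_rate=1.0, protected_msbs=3)
```

Sweeping `error_rate` and `protected_msbs` over a network's stored weights and re-measuring prediction accuracy is the kind of experiment the DRE framework's resilience evaluation implies.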
Keywords
Data Resilience Evaluation, Convolutional Neural Network, Approximate Memory