Dynamic energy-accuracy trade-off using stochastic computing in deep neural networks

53rd ACM/EDAC/IEEE Design Automation Conference (DAC), 2016

Cited 113 | Views 0
Abstract
This paper presents an efficient DNN design based on stochastic computing. Directly applying stochastic computing to DNNs raises several challenges, including random error fluctuation, limited value range, and accumulation overhead; we address these by removing near-zero weights, applying weight scaling, and integrating the activation function into the accumulator. The approach also enables early decision termination with a fixed hardware design by exploiting the progressive precision characteristic of stochastic computing, which was difficult with existing approaches. Experimental results show that our approach outperforms conventional binary logic in terms of gate area, latency, and power consumption.
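As background for the abstract's mention of progressive precision: in unipolar stochastic computing, a value in [0, 1] is encoded as a random bitstream whose fraction of 1s equals the value, multiplication reduces to a bitwise AND, and the estimate from any prefix of the stream already approximates the result, which is what makes early termination possible. The sketch below is illustrative only and not from the paper; the function names and stream length are hypothetical choices.

```python
import random

def to_bitstream(p, n, rng):
    """Encode a probability p in [0, 1] as a random bitstream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a_bits, b_bits):
    """Unipolar stochastic multiplication: a bitwise AND of the two streams."""
    return [a & b for a, b in zip(a_bits, b_bits)]

def estimate(bits):
    """Decode a bitstream back to a value: the fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(0)          # fixed seed, hypothetical choice
n = 4096                        # stream length, hypothetical choice
a, b = 0.5, 0.8
prod = sc_multiply(to_bitstream(a, n, rng), to_bitstream(b, n, rng))

# Progressive precision: a short prefix of the stream already gives a coarse
# estimate of a * b; reading more bits refines it. An early-termination
# scheme would stop once the estimate is stable enough.
for k in (64, 512, n):
    print(k, estimate(prod[:k]))
```

The trade-off the title refers to follows directly: truncating the stream earlier saves energy (fewer clock cycles) at the cost of a noisier estimate.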
Keywords
Deep Learning, Deep Neural Networks, Stochastic Computing, Energy Efficiency