Approximate arithmetic aware training for stochastic computing neural networks

2023 38th Conference on Design of Circuits and Integrated Systems (DCIS), 2023

Abstract
Deploying modern neural networks on resource-constrained edge devices requires a series of optimizations to prepare them for production. These optimizations typically involve pruning, quantization, and fixed-point conversion to compress the model and improve energy efficiency. While such optimizations are generally sufficient for most edge devices, energy efficiency can be improved further by leveraging special-purpose hardware and unconventional computing paradigms. In this work, we investigate stochastic computing neural networks and how weight distributions affect their quantization and overall performance. When arithmetic operations such as addition and multiplication are performed by stochastic computing hardware, the arithmetic error can increase significantly, reducing overall accuracy. To bridge the accuracy gap between a fixed-point model and its stochastic computing implementation, we propose a new approximate-arithmetic-aware training method. We demonstrate the effectiveness of our approach by implementing the LeNet-5 convolutional neural network on an FPGA. Our experimental results show a negligible accuracy degradation of only 0.01% compared to the floating-point outcome, while achieving a 27x speedup and a 33x improvement in energy efficiency over other FPGA implementations. Moreover, the proposed method increases the likelihood of selecting optimum LFSR seeds for stochastic computing systems.
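The arithmetic error mentioned above stems from how stochastic computing represents values: numbers become pseudo-random bitstreams, multiplication becomes a bitwise AND, and the result is a statistical estimate whose error depends on stream length and on the correlation between the LFSR-generated streams. The following is a minimal illustrative sketch of unipolar SC multiplication, not the paper's implementation; the 16-bit LFSR taps, stream length, seeds, and helper names are all assumptions chosen for illustration.

```python
def lfsr_states(seed: int, n: int, width: int = 16):
    """Yield n states of a 16-bit Fibonacci LFSR (taps 16, 14, 13, 11).
    The seed must be nonzero, or the register locks up at zero."""
    state = seed
    for _ in range(n):
        # Feedback bit is the XOR of the tapped positions.
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << (width - 1))
        yield state

def to_bitstream(p: float, seed: int, n: int, width: int = 16):
    """Encode a probability p in [0, 1] as an n-bit unipolar stream:
    each bit is 1 when the LFSR state falls below p * 2^width."""
    threshold = int(p * (1 << width))
    return [1 if s < threshold else 0 for s in lfsr_states(seed, n, width)]

def sc_multiply(a: float, b: float, seed_a: int, seed_b: int, n: int = 1024) -> float:
    """Unipolar SC multiplication: bitwise AND of two streams.
    The fraction of 1s approximates a * b, but only if the two streams
    are weakly correlated -- which is why the LFSR seed pair matters."""
    sa = to_bitstream(a, seed_a, n)
    sb = to_bitstream(b, seed_b, n)
    return sum(x & y for x, y in zip(sa, sb)) / n

if __name__ == "__main__":
    exact = 0.5 * 0.75
    approx = sc_multiply(0.5, 0.75, seed_a=0xACE1, seed_b=0x1D2B)
    print(f"exact={exact:.4f}  SC approx={approx:.4f}  error={abs(exact - approx):.4f}")
```

Running the sketch with different seed pairs shows the estimate drifting around the exact product; training that is aware of this approximate arithmetic, as the paper proposes, lets the network tolerate such drift and makes good seed choices more likely.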
Keywords
Stochastic computing, Edge computing, Convolutional neural networks, LFSR seed, Quantization (signal)