Neural Network Classifiers Using Stochastic Computing with a Hardware-Oriented Approximate Activation Function

2017 IEEE International Conference on Computer Design (ICCD), 2017

Abstract
Neural networks are becoming prevalent in many areas, such as pattern recognition and medical diagnosis. Stochastic computing is one potential solution for neural networks implemented in low-power back-end devices such as solar-powered devices and Internet-of-Things (IoT) devices. In this paper, we investigate a new architecture for stochastic neural networks with a hardware-oriented approximate activation function. The proposed approximate activation function can be omitted from the stochastic implementation while functionality is preserved, which reduces implementation complexity and hardware cost. Moreover, the new architecture significantly improves recognition error rates compared to previous stochastic neural networks that use the sigmoid function. Three classical types of neural networks are explored: the multilayer perceptron (MLP), the restricted Boltzmann machine (RBM), and the convolutional neural network (CNN). The experimental results indicate that the proposed architecture achieves reductions of more than 25%, 60%, and 3× over previous stochastic neural networks, and more than 30×, 30×, and 52% over conventional binary neural networks, in terms of area, power, and energy, respectively, while maintaining error rates similar to those of conventional neural networks.
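As background on the stochastic computing substrate the paper builds on, here is a minimal sketch (not the authors' implementation) of unipolar stochastic arithmetic: a value in [0, 1] is encoded as the fraction of 1s in a random bitstream, so multiplying two values reduces to a single AND gate per bit pair. The function names are illustrative only.

```python
import random

def to_bitstream(p, length, rng):
    """Encode a probability p in [0, 1] as a unipolar stochastic bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(xs, ys):
    """Multiply two unipolar streams: one AND gate per bit pair in hardware."""
    return [a & b for a, b in zip(xs, ys)]

def decode(bits):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
x, y = 0.8, 0.5
product = decode(sc_multiply(to_bitstream(x, n, rng), to_bitstream(y, n, rng)))
print(product)  # close to x * y = 0.4
```

The accuracy of the result grows with stream length, which is the usual area/latency trade-off in stochastic computing: each arithmetic unit is tiny, but long bitstreams are needed for precise values.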
Keywords
Neural network, Stochastic computing, Approximate activation function, Hardware implementation