Efficient Fast Convolution Architecture Based On Stochastic Computing

2017 9TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP)(2017)

Abstract
Advances in convolutional neural networks (CNNs) have aroused great interest all over the world. Since the number of convolutions in a CNN grows with the number of layers, the pursuit of better performance through deep convolutional neural networks (DCNNs) leads to large area occupation. As the fabrication processes of large-scale integrated circuits continue to shrink, circuit reliability has also become a major concern. In this paper, we propose an efficient convolution architecture for CNNs based on stochastic computing. As a first step, fast convolution algorithms such as the Cook-Toom algorithm, which lower complexity by reducing the number of multiplications, are adopted. Although stochastic computing (SC) offers area-efficient implementations, applying it directly to fast convolution is ill-suited because of severe precision loss. The existing two-line SC representation reduces this precision loss but fails to handle output saturation. We therefore propose a new three-part SC representation that resolves this dilemma. Simulation results validate the advantages of the proposed architecture in both complexity and precision. Although preliminary, this design is expected to be a first step toward combining neural networks and stochastic computing, both of which are analog, belief-based, and fault-tolerant. We believe stochastic computing may be a more natural implementation form of neural networks.
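To illustrate the multiplication-reducing idea behind the Cook-Toom family of fast convolution algorithms, here is a minimal sketch (not the paper's hardware architecture) of the classic length-2 case: the linear convolution of two 2-point sequences, computed with 3 multiplications instead of the 4 needed by direct convolution, using evaluation points 0, 1, and -1 followed by interpolation.

```python
def cook_toom_2x2(h, x):
    """Linear convolution of two length-2 sequences using the Cook-Toom
    algorithm: 3 multiplications instead of the 4 of direct convolution.

    Treats h and x as degree-1 polynomials, evaluates their product at
    the points {0, 1, -1}, then interpolates the degree-2 result.
    """
    h0, h1 = h
    x0, x1 = x

    # Evaluation phase: the only 3 general multiplications.
    m0 = h0 * x0                      # product at p = 0
    m1 = (h0 + h1) * (x0 + x1)        # product at p = 1
    m2 = (h0 - h1) * (x0 - x1)        # product at p = -1

    # Interpolation phase: only additions and cheap divisions by 2.
    s0 = m0
    s1 = (m1 - m2) / 2
    s2 = (m1 + m2) / 2 - m0
    return [s0, s1, s2]
```

In hardware, the pre- and post-addition matrices are fixed, so the savings come from trading expensive multipliers for adders; the divisions by 2 are simple shifts.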
Keywords
Convolutional neural network (CNN), fast convolution, Cook-Toom algorithm, stochastic computing (SC)
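For context on the stochastic computing referenced above, a minimal sketch of basic unipolar SC (a textbook illustration, not the paper's two-line or three-part representation): a value in [0, 1] is encoded as the fraction of 1s in a random bitstream, and multiplication of two independent streams reduces to a single AND gate per bit, at the cost of precision that improves only with stream length.

```python
import random

def to_stream(p, n, rng):
    """Encode a probability p in [0, 1] as an n-bit unipolar stochastic
    bitstream whose expected fraction of 1s equals p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(p_a, p_b, n=100_000, seed=0):
    """Approximate p_a * p_b by AND-ing two independent bitstreams and
    counting 1s. Accuracy is statistical: error shrinks as ~1/sqrt(n)."""
    rng = random.Random(seed)
    a = to_stream(p_a, n, rng)
    b = to_stream(p_b, n, rng)
    product = [bit_a & bit_b for bit_a, bit_b in zip(a, b)]  # one AND gate per bit
    return sum(product) / n
```

The appeal for area-efficient designs is that a multiplier becomes a single gate; the drawback, which motivates extended representations such as the paper's three-part scheme, is the slow convergence of precision and the limited [0, 1] (or [-1, 1] bipolar) range.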