Fully parallel RRAM synaptic array for implementing binary neural network with (+1, -1) weights and (+1, 0) neurons.

ASP-DAC (2018)

Abstract
Binary Neural Networks (BNNs) have recently been proposed to improve the area and energy efficiency of machine/deep learning hardware accelerators, which opens an opportunity to use the technologically more mature binary RRAM devices to implement the binary synaptic weights effectively. In addition, binary neuron activation makes it possible to replace the analog-to-digital converter with a sense amplifier, allowing bitwise communication between layers of the neural network. However, the sense amplifier has an intrinsic offset that shifts the threshold of the binary neuron and may therefore degrade the classification accuracy. In this work, we analyze a fully parallel RRAM synaptic array architecture that implements the fully connected layers of a convolutional neural network with (+1, −1) weights and (+1, 0) neurons. Simulation results with the TSMC 65 nm PDK show that the offset of the current-mode sense amplifier introduces a slight accuracy loss, from ∼98.5% to ∼97.6%, on the MNIST dataset. Nevertheless, the proposed fully parallel BNN architecture (P-BNN) achieves 137.35 TOPS/W energy efficiency for inference, a ∼20× improvement over the sequential BNN architecture (S-BNN) with its row-by-row read-out scheme. Moreover, the proposed P-BNN architecture saves ∼16% of chip area by eliminating the MAC peripheral units required in the S-BNN architecture.
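The mapping the abstract describes is straightforward to sketch in software. Below is a minimal, illustrative Python/NumPy model of one fully parallel read-out step: (+1, −1) weights stored in the array, (+1, 0) activations applied to all rows at once, and a current-mode sense amplifier modeled as a comparator with a random input offset. The layer sizes, threshold, and offset spread are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes, chosen only for illustration.
n_in, n_out = 128, 10

# Binary synaptic weights in {+1, -1}, as stored in the RRAM array
# (how each value maps to conductance states is an assumption here).
W = rng.choice([+1, -1], size=(n_in, n_out))

# Binary neuron activations in {+1, 0}: a 0 input contributes no read
# current, so only rows with activation +1 are asserted.
x = rng.choice([+1, 0], size=n_in)

# Fully parallel read-out: every column accumulates its weighted-sum
# "current" in a single step, rather than row by row.
column_sum = x @ W

# Current-mode sense amplifier as the binary neuron: compare against a
# threshold. The zero-mean Gaussian term models the intrinsic SA input
# offset that the paper identifies as the source of the accuracy loss.
threshold = 0.0
sa_offset = rng.normal(loc=0.0, scale=2.0, size=n_out)  # assumed spread
y = (column_sum > threshold + sa_offset).astype(int)    # outputs in {+1, 0}

print("column sums:   ", column_sum)
print("binary outputs:", y)
```

In this toy model, sweeping the offset standard deviation and counting how often an output bit flips relative to the zero-offset case gives a rough, qualitative analogue of the ∼0.9% accuracy drop the paper attributes to sense-amplifier offset.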
Keywords
machine/deep learning hardware accelerators, binary synaptic weights, binary neuron activation, fully parallel RRAM synaptic array architecture, convolutional neural network, current-mode sense amplifier, fully parallel BNN architecture (P-BNN), binary neural network, energy efficiency, binary RRAM devices, TSMC 65 nm PDK, sequential BNN architecture (S-BNN), row-by-row read-out scheme, MAC peripheral units, MNIST dataset