Random sparse adaptation for accurate inference with inaccurate multi-level RRAM arrays

2017 IEEE International Electron Devices Meeting (IEDM) (2017)

Cited 34 | Views 56
Abstract
An array of multi-level resistive memory devices (RRAMs) can speed up the computation of deep learning algorithms. However, when a pre-trained model is programmed onto a real RRAM array for inference, its accuracy degrades due to many non-idealities, such as device variations, quantization error, and stuck-at faults. The conventional solution applies multiple read-verify-write (R-V-W) cycles to each RRAM cell, which is time-consuming because of the slow Write speed and cell-by-cell compensation. In this work, we propose a fundamentally new approach to overcome this issue: random sparse adaptation (RSA) after the model is transferred to the RRAM array. By randomly selecting a small portion of the model parameters and mapping them to on-chip memory for further training, we demonstrate an efficient and fast method to recover accuracy: in CNNs for MNIST and CIFAR-10, ~5% of the model parameters are sufficient for RSA even under excessive RRAM variations. Because back-propagation during training is applied only to the RSA cells and no Write operation on the RRAM is needed, the proposed RSA achieves 10-100X acceleration compared to R-V-W. This hybrid solution, pairing a large, inaccurate RRAM array with a small, accurate on-chip memory array, therefore promises both area efficiency and inference accuracy.
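The mechanism described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustration under assumptions not stated in the abstract: the Gaussian device-variation model (rram_sigma), the additive correction tensor (delta), and the class name RSALinear are hypothetical choices made here for clarity, not the paper's implementation. The pre-trained weights are frozen in a noisy "RRAM" buffer, a random ~5% mask marks the RSA cells, and only the small on-chip correction tensor is trainable.

    import torch
    import torch.nn as nn

    class RSALinear(nn.Module):
        """Linear layer whose weights sit on a (simulated) noisy RRAM array,
        patched by a sparse set of trainable on-chip corrections (RSA)."""

        def __init__(self, pretrained_weight, sparsity=0.05, rram_sigma=0.1):
            super().__init__()
            # Frozen "RRAM" copy: pre-trained weights corrupted by device
            # variation; no Write operation touches it during adaptation.
            w_rram = pretrained_weight + rram_sigma * torch.randn_like(pretrained_weight)
            self.register_buffer("w_rram", w_rram)
            # Randomly select ~sparsity of all positions as RSA cells.
            self.register_buffer("mask",
                                 (torch.rand_like(pretrained_weight) < sparsity).float())
            # On-chip corrections: the only trainable parameters.
            self.delta = nn.Parameter(torch.zeros_like(pretrained_weight))

        def forward(self, x):
            # Effective weight = frozen RRAM weight + sparse on-chip patch.
            w = self.w_rram + self.mask * self.delta
            return x @ w.t()

    # Usage: hand the optimizer only the correction tensor.
    layer = RSALinear(pretrained_weight=torch.randn(10, 784))
    opt = torch.optim.SGD([layer.delta], lr=1e-2)

Because the forward pass multiplies delta by the fixed mask, back-propagation yields non-zero gradients only at the randomly selected positions, so adaptation never issues a Write to the RRAM buffer, mirroring the 10-100X speedup over R-V-W argued in the abstract.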
Keywords
model parameters,excessive RRAM variations,RSA cells,on-chip memory array,inference accuracy,random sparse adaptation,inaccurate multilevel RRAM arrays,multilevel resistive memory devices,deep learning algorithms,pre-trained model,nonidealities,RRAM cell,cell-by-cell compensation,read-verify-write,CNN,MNIST,CIFAR-10