A Fault-Tolerant Neural Network Architecture

Proceedings of the 56th Annual Design Automation Conference (2019)

Cited by 86 | Viewed 106
Abstract
New DNN accelerators based on emerging technologies, such as resistive random access memory (ReRAM), are gaining increasing research attention given their potential for "in-situ" data processing. Unfortunately, device-level physical limitations unique to these technologies may cause weight disturbance in memory and thus compromise the performance and stability of DNN accelerators. In this work, we propose a novel fault-tolerant neural network architecture that mitigates the weight disturbance problem without expensive retraining. Specifically, we propose a novel collaborative logistic classifier that enhances DNN stability by redesigning the binary classifiers derived from both traditional error correction output code (ECOC) and modern DNN training algorithms. We also develop an optimized variable-length "decode-free" scheme to further boost accuracy with a smaller number of classifiers. Experimental results on cutting-edge DNN models and complex datasets show that the proposed fault-tolerant neural network architecture can effectively rectify the accuracy degradation caused by weight disturbance in DNN accelerators at low cost, allowing its deployment in a variety of mainstream DNNs.
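For context, the sketch below illustrates plain error correction output code (ECOC) classification, the standard building block the paper's collaborative logistic classifier extends: each class is assigned a binary codeword, one logistic (binary) classifier predicts each code bit, and decoding picks the class whose codeword is nearest in Hamming distance, so a few disturbed bits can be absorbed. All function names and the random code matrix are illustrative assumptions; this is not the paper's collaborative classifier or its decode-free scheme.

```python
import numpy as np

def make_code_matrix(num_classes, code_length, seed=0):
    """Assign each class a binary codeword (one row of the code matrix). Hypothetical helper."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(num_classes, code_length))

def ecoc_predict(bit_probabilities, code_matrix):
    """Decode by choosing the class whose codeword is closest (Hamming distance)
    to the thresholded outputs of the per-bit logistic classifiers."""
    bits = (bit_probabilities >= 0.5).astype(int)       # harden logistic outputs to 0/1
    distances = np.abs(code_matrix - bits).sum(axis=1)  # Hamming distance to each codeword
    return int(np.argmin(distances))

# Example: 10 classes encoded with 15-bit codewords; noisy outputs for class 3.
codes = make_code_matrix(num_classes=10, code_length=15)
probs = np.clip(codes[3] + 0.2 * np.random.default_rng(1).normal(size=15), 0.0, 1.0)
print(ecoc_predict(probs, codes))  # code redundancy lets decoding tolerate a few flipped bits
```

The extra code bits are what provide fault tolerance: a disturbed weight that flips one classifier's output still leaves the correct codeword nearest, which is the property the proposed architecture exploits and optimizes.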
Keywords
new DNN accelerators, resistive random access memory, in-situ data processing, device-level physical limitations, novel fault-tolerant neural network architecture, weight disturbance problem, cutting-edge DNN models, ReRAM, collaborative logistic classifier, DNN stability enhancement, error correction output code, ECOC, modern DNN training algorithm, optimized variable-length decode-free scheme, accuracy degradation