TIME: A Training-in-memory Architecture for RRAM-based Deep Neural Networks

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2019)

Abstract
Training neural networks (NNs) is usually time-consuming and resource-intensive. The emerging metal-oxide resistive random-access memory (RRAM) device has shown potential for NN computation: its crossbar structure and multibit characteristics can perform the matrix-vector product, the most common operation in NNs, with high energy efficiency. Two challenges stand in the way of training NNs on RRAM. First, current RRAM-based architectures support only the inference phase of training and cannot perform the backpropagation (BP) or weight-update phases. Second, training requires an enormous number of iterations that repeatedly update the weights until convergence, and these weight updates incur large energy consumption because of the nonideal factors of RRAM devices. In this paper, we propose TIME, a training-in-memory architecture based on RRAM, together with its peripheral circuit design, to enable training NNs on RRAM. TIME supports BP and the weight update while maximizing the reuse of the peripheral circuits already employed for inference on RRAM. Meanwhile, a set of optimization strategies targeting the nonideal factors is designed to reduce the cost of tuning RRAM. We explore the performance of both supervised learning (SL) and deep reinforcement learning (DRL) on TIME, and introduce a dedicated mapping method for DRL to further improve energy efficiency. Simulation results show that in SL, TIME achieves $5.3\times$ higher energy efficiency on average than DaDianNao, an application-specific integrated circuit (ASIC) in CMOS technology. In DRL, TIME delivers on average $126\times$ higher energy efficiency than a GPU. If the cost of tuning RRAM can be further reduced, TIME has the potential to boost energy efficiency by two orders of magnitude compared with the ASIC.
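The abstract's central mechanism, evaluating a matrix-vector product in the analog domain of a crossbar, can be illustrated with a minimal sketch. The numpy model below is an assumption for illustration, not the circuit described in the paper: it maps signed weights onto a differential pair of conductance arrays (a common scheme in the RRAM literature), quantizes them to a finite number of conductance levels to mimic the limited tuning precision behind the weight-update cost the abstract mentions, and sums column currents per Kirchhoff's law. All names and parameter values (G_MIN, G_MAX, LEVELS) are hypothetical.

```python
# Minimal sketch (not the paper's circuit model) of an RRAM crossbar
# matrix-vector product: input voltages drive the rows, each cell's
# conductance scales its current, and Kirchhoff's current law sums the
# column currents, so the column current vector is (v @ G).
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed programmable conductance range (siemens)
LEVELS = 16                  # assumed number of conductance levels (4-bit cell)

def weights_to_conductance(W):
    """Map signed weights onto a differential pair of arrays (G+, G-),
    quantized to the finite number of RRAM conductance states."""
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    g_pos = G_MIN + np.clip(W, 0, None) * scale
    g_neg = G_MIN + np.clip(-W, 0, None) * scale
    # Finite tuning precision: snap to LEVELS discrete conductance states.
    step = (G_MAX - G_MIN) / (LEVELS - 1)
    quantize = lambda g: G_MIN + np.round((g - G_MIN) / step) * step
    return quantize(g_pos), quantize(g_neg), scale

def crossbar_mvp(g_pos, g_neg, v):
    """Analog matrix-vector product: subtracting the column currents of
    the two arrays recovers the signed result."""
    return v @ g_pos - v @ g_neg

# Usage: compare the analog result against the ideal product W.T @ v.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # 4 input rows x 3 output columns
v = rng.standard_normal(4)        # input voltage vector
g_pos, g_neg, scale = weights_to_conductance(W)
print(crossbar_mvp(g_pos, g_neg, v) / scale)  # ~ W.T @ v, up to quantization error
print(W.T @ v)
```

The gap between the two printed vectors comes entirely from the LEVELS-state quantization, which is one way to see why nonideal, low-precision conductance tuning makes the repeated weight updates of training costly.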
Keywords
Tuning, Training, Resistance, Artificial neural networks, Computer architecture, Supervised learning