Enhanced regularization for on-chip training using analog and temporary memory weights

Neural Networks (2023)

Abstract
In-memory computing techniques are used to accelerate artificial neural network (ANN) training and inference tasks. Memory technology and architectural innovations enable efficient matrix-vector multiplications, gradient calculations, and updates to network weights. However, on-chip learning for edge devices remains challenging because of the frequent weight updates it requires. Here, we propose using an analog and temporary on-chip memory (ATOM) cell with controllable retention timescales to implement the weights of an on-chip training task. Measured read-write timescales are presented for an ATOM cell fabricated in GlobalFoundries' 45 nm RFSOI technology. The effect of limited retention and its variability is evaluated by training a fully connected neural network with a variable number of layers on the MNIST handwritten-digit recognition task. Our studies show that the weight decay caused by temporary memory provides benefits equivalent to regularization, yielding a 33% reduction in validation error (from 3.6% to 2.4%). We also show that the controllability of the decay timescale can yield a further 26% reduction in validation error. These results strongly suggest the utility of temporary memory during learning, before on-chip non-volatile memories take over the storage and inference tasks that use the trained network weights. We thus propose an algorithm-circuit co-design, in the form of temporary analog memory, for high-performing on-chip learning of ANNs. (c) 2023 Elsevier Ltd. All rights reserved.
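To illustrate the regularization effect the abstract describes, the following minimal NumPy sketch (not from the paper; the retention time constant tau, the update interval dt, and the learning rate lr are assumed illustrative values) shows how an exponential decay of an analog weight toward zero between updates reproduces the usual weight-decay form of L2 regularization.

import numpy as np

# Assumed, illustrative parameters (not taken from the paper).
tau = 50.0   # retention timescale of the analog cell, in units of update intervals
dt = 1.0     # time elapsed between consecutive weight updates
lr = 0.1     # SGD learning rate

decay = np.exp(-dt / tau)   # fraction of the stored weight retained per update

def sgd_step_with_retention(w, grad):
    # One SGD step where the analog weight leaks toward zero between updates.
    # The multiplicative factor exp(-dt/tau) plays the same role as the
    # (1 - lr * lambda) factor of L2 weight decay.
    return decay * w - lr * grad

def sgd_step_with_l2(w, grad, weight_decay):
    # Conventional SGD with explicit L2 weight decay, for comparison.
    return (1.0 - lr * weight_decay) * w - lr * grad

# The two updates coincide when weight_decay = (1 - decay) / lr.
w = np.array([0.5, -0.3])
g = np.array([0.1, 0.2])
print(sgd_step_with_retention(w, g))
print(sgd_step_with_l2(w, g, (1.0 - decay) / lr))

Under these assumptions the implicit regularization strength is roughly (1 - exp(-dt/tau)) / lr, which is why a controllable decay timescale effectively tunes the regularization applied during on-chip training.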
Keywords
Temporary memory, Regularization, On-chip learning, Artificial neural network, In-memory computing, ML hardware