Exploiting Hybrid Precision for Training and Inference: A 2T-1FeFET Based Analog Synaptic Weight Cell

2018 IEEE International Electron Devices Meeting (IEDM)

Abstract
In-memory computing with analog non-volatile memories (NVMs) can accelerate both the in-situ training and inference of deep neural networks (DNNs) by parallelizing multiply-accumulate (MAC) operations in the analog domain. However, in-situ training accuracy suffers unacceptable degradation due to undesired weight-update asymmetry/nonlinearity and limited bit precision. In this work, we overcome this challenge by introducing a compact Ferroelectric FET (FeFET) based synaptic cell that exploits hybrid precision for in-situ training and inference. We propose a novel hybrid approach in which the modulated "volatile" gate voltage of the FeFET represents the least significant bits (LSBs) for symmetric/linear updates during training only, while the "non-volatile" polarization states of the FeFET hold the most significant bits (MSBs) for inference. This design is demonstrated with an experimentally validated FeFET SPICE model and co-simulation with the TensorFlow framework. The results show that with the proposed 6-bit and 7-bit synapse designs, in-situ training accuracy reaches ~97.3% on the MNIST dataset and ~87% on the CIFAR-10 dataset, respectively, approaching ideal software-based training.
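To make the hybrid-precision idea concrete, the following is a minimal Python sketch of the weight decomposition the abstract describes: training updates land on a volatile LSB field, while the non-volatile MSB field absorbs carries and holds the inference weight. The 3/3 bit split, the carry-on-overflow transfer scheme, and all names here are illustrative assumptions, not the paper's exact 2T-1FeFET circuit behavior.

```python
import numpy as np

# Assumed bit split for a 6-bit synapse (the paper also evaluates 7-bit).
MSB_BITS = 3   # assumption: held by non-volatile FeFET polarization states
LSB_BITS = 3   # assumption: held by the volatile gate-voltage node
LSB_LEVELS = 2 ** LSB_BITS


class HybridSynapse:
    """Toy model: weight = msb * 2^LSB_BITS + lsb (unsigned, for illustration)."""

    def __init__(self):
        self.msb = 0  # non-volatile part: changed rarely, used for inference
        self.lsb = 0  # volatile part: updated symmetrically/linearly in training

    def weight(self) -> int:
        return self.msb * LSB_LEVELS + self.lsb

    def update(self, delta_lsb: int) -> None:
        """Apply a training update to the volatile LSBs only."""
        self.lsb += delta_lsb
        # Transfer LSB overflow/underflow into the non-volatile MSBs,
        # mimicking a periodic volatile-to-non-volatile weight transfer.
        carry, self.lsb = divmod(self.lsb, LSB_LEVELS)
        # Saturate the MSBs at the cell's precision limits.
        self.msb = int(np.clip(self.msb + carry, 0, 2 ** MSB_BITS - 1))


if __name__ == "__main__":
    s = HybridSynapse()
    for d in (+3, +4, +2, -5):  # toy sequence of LSB increments
        s.update(d)
        print(f"msb={s.msb} lsb={s.lsb} weight={s.weight()}")
```

Because every gradient step touches only the volatile node, the update stays symmetric and linear; the asymmetric/nonlinear programming of the FeFET polarization is confined to the infrequent MSB carries, which is the property the cell design exploits.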
Keywords
In-memory computing, analog nonvolatile memories, deep neural networks, multiply-accumulate operations, analog domain, limited bit precision, modulated volatile gate voltage, nonvolatile polarization states, ideal software-based training, in-situ training accuracy, symmetric/linear update, compact ferroelectric FET, undesired weight-update asymmetry/nonlinearity, 2T-1FeFET based analog synaptic weight cell, in-situ inference, FeFET SPICE model, TensorFlow framework, MNIST dataset, CIFAR-10 dataset