Ferroelectric FET analog synapse for acceleration of deep neural network training

2017 IEEE International Electron Devices Meeting (IEDM)(2017)

Cited by 427 | Views 64
Abstract
The memory requirement of at-scale deep neural networks (DNN) dictates that synaptic weight values be stored and updated in off-chip memory such as DRAM, limiting the energy efficiency and training time. Monolithic cross-bar / pseudo cross-bar arrays with analog non-volatile memories capable of storing and updating weights on-chip offer the possibility of accelerating DNN training. Here, we harness the dynamics of voltage-controlled partial polarization switching in ferroelectric FETs (FeFET) to demonstrate such an analog synapse. We develop a transient Preisach model that accurately predicts minor loop trajectories and remnant polarization charge (P_r) for arbitrary pulse width, voltage, and history. We experimentally demonstrate a 5-bit FeFET synapse with symmetric potentiation and depression characteristics, and a 45× tunable range in conductance with a 75 ns update pulse. A circuit macro-model is used to evaluate and benchmark the on-chip learning performance (area, latency, energy, accuracy) of the FeFET synaptic core, revealing a 10³ to 10⁶ acceleration in online learning latency over multi-state RRAM-based analog synapses.
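As a rough illustration of the classical Preisach construction that underlies the transient model described above, the following Python sketch composes polarization from a weighted population of bistable relay hysterons on a discretized Preisach plane. The grid size, Gaussian weight density, and threshold range are all assumptions chosen for illustration; the paper's transient model additionally captures pulse-width-dependent switching dynamics, which this static sketch omits.

```python
import numpy as np

class PreisachModel:
    """Minimal static scalar Preisach sketch (hypothetical parameters)."""

    def __init__(self, n=64, p_sat=1.0, sigma=1.0):
        # Grid of relay hysterons with up/down thresholds alpha >= beta.
        a = np.linspace(-3.0, 3.0, n)
        self.alpha, self.beta = np.meshgrid(a, a, indexing="ij")
        valid = self.alpha >= self.beta
        # Assumed Gaussian weight density over the Preisach plane,
        # normalized so the saturated response equals p_sat.
        w = np.exp(-(self.alpha**2 + self.beta**2) / (2.0 * sigma**2))
        w[~valid] = 0.0
        self.weight = p_sat * w / w.sum()
        self.state = -np.ones_like(w)  # all hysterons start "down"

    def apply_field(self, e):
        # Relays flip up above alpha, down below beta; otherwise they
        # retain their state, which is what produces hysteresis.
        self.state = np.where(e >= self.alpha, 1.0, self.state)
        self.state = np.where(e <= self.beta, -1.0, self.state)

    def remanent_polarization(self):
        # P_r after the field returns to zero: weighted sum of relay states.
        return float(np.sum(self.weight * self.state))

model = PreisachModel()
for v in [0.5, 1.0, 1.5, -0.8, 1.2]:  # arbitrary pulse amplitudes
    model.apply_field(v)
    print(f"pulse {v:+.1f} -> P_r = {model.remanent_polarization():+.4f}")
```

Each pulse drives some relays past their thresholds while the rest retain their prior state, so the weighted sum after the field returns to zero depends on the full pulse history; this is the mechanism by which minor-loop trajectories and history-dependent P_r emerge from the model.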
Keywords
ferroelectric FET analog synapse, memory requirement, synaptic weight values, off-chip memory, DRAM, energy efficiency, training time, monolithic cross-bar, pseudo cross-bar arrays, analog non-volatile memories, DNN training, ferroelectric FETs, transient Preisach model, remnant polarization charge, arbitrary pulse width, symmetric potentiation, depression characteristics, on-chip learning performance, FeFET synaptic core, multi-state RRAM-based analog synapses, voltage-controlled partial polarization switching, deep neural network training acceleration, circuit macro-model, minor loop trajectory prediction, online learning latency, time 75.0 ns, word length 5 bit