Floating Gate Transistor-Based Accurate Digital In-Memory Computing for Deep Neural Networks

ADVANCED INTELLIGENT SYSTEMS (2022)

Cited by 28 | Viewed 21
Abstract
To improve the computing speed and energy efficiency of deep neural network (DNN) applications, in-memory computing with nonvolatile memory (NVM) has been proposed to address the time-consuming and energy-hungry data-shuttling problem. Herein, a digital in-memory computing method for convolution, the core operation of DNNs, is proposed. Based on this method, a floating gate transistor-based in-memory computing chip for accurate, highly parallel convolution is built. Unlike analogue or digital-analogue-mixed in-memory computing techniques, the proposed digital method achieves central processing unit (CPU)-equivalent precision with the same neural network architecture and parameters. A hardware LeNet-5 neural network is built on the fabricated chip and achieves 96.25% accuracy on the full Modified National Institute of Standards and Technology (MNIST) database, identical to the result computed by a CPU with the same architecture and parameters.
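The abstract does not disclose the chip's exact circuit, but a common way digital in-memory computing achieves CPU-equivalent precision is to decompose multi-bit multiply-accumulate into exact 1-bit AND operations (which a memory cell can evaluate in place) followed by weighted popcounts. The sketch below is an illustrative software emulation of that general bit-serial scheme, not the authors' implementation; all function names are hypothetical.

```python
# Illustrative emulation (assumption, not the paper's circuit) of bit-serial
# digital in-memory multiply-accumulate: unsigned operands are split into
# bit-planes, each bit-pair product is a 1-bit AND, and the exact dot
# product is reconstructed by shift-weighted popcounts.

def bitwise_dot(inputs, weights, in_bits=4, w_bits=4):
    """Exact dot product of two unsigned-int vectors via AND + popcount."""
    acc = 0
    for i in range(in_bits):           # input bit-plane index
        for j in range(w_bits):        # weight bit-plane index
            # One "in-memory" cycle: AND each input bit against the stored
            # weight bit across the whole column, then count the ones.
            ones = sum(((x >> i) & 1) & ((w >> j) & 1)
                       for x, w in zip(inputs, weights))
            acc += ones << (i + j)     # weight by combined bit significance
    return acc

x = [3, 5, 7, 2]
w = [1, 4, 2, 6]
print(bitwise_dot(x, w))  # 49, identical to sum(a*b for a, b in zip(x, w))
```

Because every partial product is computed exactly in the digital domain, the result matches integer arithmetic bit for bit, which is consistent with the abstract's claim of CPU-equivalent accuracy.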
Keywords
deep neural networks, flash memory, floating gate transistors, in-memory computing, parallel computing