A Mixed-Signal Quantized Neural Network Accelerator Using Flash Transistors

IEEE Transactions on Circuits and Systems I: Regular Papers (2023)

Abstract
This paper presents a mixed-signal architecture for implementing Quantized Neural Networks (QNNs) using flash transistors, achieving extremely high throughput with extremely low power, energy, and memory requirements. Its low resource utilization makes our design especially well suited for edge devices. The network weights are stored in-memory in flash transistors, and neurons perform their operations in the analog current domain. Our design can be programmed with any QNN whose hyperparameters (e.g., the number of layers, the number of filters, or the filter size) do not exceed the provisioned maximums. Once the flash devices are programmed with a trained model and our IC is given an input, our architecture performs inference with zero accesses to off-chip memory. We demonstrate the robustness of our design under current-mode non-linearities arising from process, voltage, and temperature (PVT) variations. On the ImageNet validation set, our IC suffers only 0.71% and 0.92% reductions in Top-1 and Top-5 classification accuracy, respectively. Our implementation achieves between 2.1x and 125x better energy efficiency than previous NVM-based QNN accelerators. Our approach provides layer-partitioning and neuron-sharing options, allowing latency, power, and area to be traded off against one another.
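As a rough illustration of the compute primitive the abstract describes, the Python sketch below models a quantized weight stored as a discrete cell level and a neuron's multiply-accumulate as a summation of cell currents, which digitally reduces to a dot product. All function names, the bit-width, and the array model here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def quantize(weights, n_bits=4):
    """Uniformly quantize weights to signed n-bit levels,
    loosely mimicking the discrete threshold-voltage levels a
    flash cell can be programmed to (bit-width is assumed)."""
    levels = 2 ** (n_bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = np.max(np.abs(weights)) / levels
    q = np.clip(np.round(weights / scale), -levels, levels)
    return q, scale

def current_mode_mac(inputs, q_weights, scale):
    """Model one neuron's multiply-accumulate: each cell sources
    a current proportional to (input x stored weight), and the
    currents sum on a shared wire (Kirchhoff's current law).
    In this digital model that is simply a scaled dot product."""
    return scale * np.dot(inputs, q_weights)

# Toy usage: one neuron with 8 inputs.
rng = np.random.default_rng(0)
w = rng.standard_normal(8)
x = rng.standard_normal(8)
qw, s = quantize(w, n_bits=4)
print("float MAC:", np.dot(x, w))
print("quantized 'current-mode' MAC:", current_mode_mac(x, qw, s))
```

In the actual IC, the accumulation happens for free in the analog domain; the sketch only conveys why storing quantized weights in the array lets inference proceed without off-chip memory accesses.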
Keywords
Transistors, Nonvolatile memory, Energy efficiency, Neurons, Performance evaluation, Logic gates, Integrated circuits, Machine learning accelerators, Quantized neural networks, Floating-gate transistors, Current-mode circuits, Low-power circuits