The Impact of Analog-to-Digital Converter Architecture and Variability on Analog Neural Network Accuracy

IEEE JOURNAL ON EXPLORATORY SOLID-STATE COMPUTATIONAL DEVICES AND CIRCUITS (2023)

Abstract
The analog-to-digital converter (ADC) is not only a key component in analog in-memory computing (IMC) accelerators but also a bottleneck for the efficiency and accuracy of these systems. While the tradeoffs between power consumption, latency, and area in ADC design are well studied, it is far less clear which ADC implementations are optimal for algorithmic accuracy, particularly for neural network inference. We explore the design space of the ADC with a focus on accuracy, investigating the sensitivity of neural network outputs to component variability inside the ADC and how this sensitivity depends on the ADC architecture. We develop compact models of the pipeline, cyclic, successive-approximation-register (SAR), and ramp ADCs and use them in a system-level accuracy simulation of analog neural network inference. Our results show how the accuracy on a complex image recognition benchmark (ResNet50 on ImageNet) depends on the capacitance mismatch, comparator offset, and effective number of bits (ENOB) for each of the four ADC architectures. We find that robustness to component variations depends strongly on the ADC design and that inference accuracy is particularly sensitive to the value-dependent error characteristics of the ADC, which cannot be captured by the conventional ENOB precision metric.
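As a rough illustration of the kind of behavioral modeling the abstract describes (not the authors' simulator), the sketch below builds a single SAR ADC instance with randomly drawn capacitor mismatch and comparator offset and sweeps a ramp of inputs through it. The resolution (8 bits), mismatch and offset magnitudes, and full-scale range are illustrative assumptions; the point is that the resulting conversion error is code-dependent (value-dependent), which a single averaged figure such as ENOB cannot express.

```python
import numpy as np

def make_sar_adc(n_bits=8, v_ref=1.0, cap_mismatch_sigma=0.01,
                 comp_offset_sigma=0.005, rng=None):
    """Return a conversion function for one SAR ADC instance with fixed,
    randomly drawn capacitor mismatch and comparator offset (assumed values)."""
    rng = np.random.default_rng() if rng is None else rng
    nominal = 2.0 ** np.arange(n_bits - 1, -1, -1)        # binary weights, MSB first
    caps = nominal * (1.0 + rng.normal(0.0, cap_mismatch_sigma, n_bits))
    total = caps.sum() + 1.0                               # plus ideal terminating unit cap
    offset = rng.normal(0.0, comp_offset_sigma)

    def convert(v_in):
        code, dac = 0, 0.0
        for i in range(n_bits):
            trial = dac + caps[i] / total * v_ref          # tentatively set bit i
            if v_in + offset >= trial:                     # comparator decision with offset
                dac = trial
                code |= 1 << (n_bits - 1 - i)
        return code

    return convert

# Sweep a ramp through one ADC instance and compare against the ideal transfer curve.
adc = make_sar_adc(rng=np.random.default_rng(0))
v = np.linspace(0.0, 1.0, 4096, endpoint=False)
codes = np.array([adc(x) for x in v])
ideal = np.floor(v * 256)
err_lsb = codes - ideal
print(f"worst-case error: {np.abs(err_lsb).max():.1f} LSB, "
      f"rms error: {np.sqrt(np.mean(err_lsb**2)):.2f} LSB")
```

In a full system-level study, conversion functions like this would replace ideal quantizers at the outputs of the analog matrix-vector multiplications, so that the code-dependent errors propagate through the network and their effect on end-to-end inference accuracy can be measured.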
Keywords
Analog computing, analog-to-digital conversion, in-memory computing (IMC), machine learning, neural network, process variations