Noise tolerant ternary weight deep neural networks for analog in-memory inference

2021 International Joint Conference on Neural Networks (IJCNN)

Citations: 2 | Views: 19
Abstract
Analog in-memory computing (AiMC) is a promising hardware solution for efficient inference with deep neural networks (DNNs). Like digital DNN accelerators, AiMC systems benefit from aggressively quantized DNNs; in addition, however, AiMC systems suffer from noise on activations and weights. Training strategies that condition DNNs against weight noise can increase the efficiency of AiMC systems by enabling the use of more compact but noisier weight memory devices. In this work, we apply noise-aware training and introduce gradual noise training and network width scaling to increase the tolerance of DNNs to weight noise. Our results show that noise-aware training and gradual noise training drastically lower the impact of weight noise without changing the network size. Network width scaling increases weight noise tolerance even further, at the cost of additional network parameters.
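The core idea behind noise-aware training is to inject device-like noise into the weights during training so the network learns parameters that stay accurate under perturbation. Below is a minimal PyTorch sketch of such a layer with ternary weights and injected Gaussian weight noise, plus a gradual-noise ramp; the class name `TernaryNoisyLinear`, the 0.7·mean|w| threshold heuristic, and the `noise_std` values are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TernaryNoisyLinear(nn.Module):
    """Linear layer with ternary weights {-1, 0, +1} and additive
    Gaussian weight noise injected during training (sketch)."""

    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__()
        self.weight = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.noise_std = noise_std

    def forward(self, x):
        # Ternarize with a threshold proportional to mean |w| (a common heuristic).
        delta = 0.7 * self.weight.abs().mean()
        w_q = torch.sign(self.weight) * (self.weight.abs() > delta).float()
        # Straight-through estimator: gradients update the full-precision weights.
        w_q = self.weight + (w_q - self.weight).detach()
        # Model analog weight noise only while training.
        if self.training:
            w_q = w_q + self.noise_std * torch.randn_like(w_q)
        return F.linear(x, w_q, self.bias)

def noise_std_schedule(epoch, target_std=0.05, ramp_epochs=20):
    """Gradual noise training (sketch): ramp the injected noise from zero
    to the target level so early optimization is not disrupted."""
    return target_std * min(1.0, epoch / ramp_epochs)
```

In a training loop, one would set `layer.noise_std = noise_std_schedule(epoch)` at the start of each epoch; at inference time the analog hardware itself supplies the weight noise, so the injection is disabled via `model.eval()`.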
Keywords
Deep Neural Networks, Quantization, Hardware-aware training