Efficient Hardware Implementation for Online Local Learning in Spiking Neural Networks

2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS)

Abstract
Local learning schemes have shown promising performance in spiking neural networks and are considered a step toward more biologically plausible learning. Despite many efforts to design high-performance neuromorphic systems, a fast and efficient neuromorphic hardware system is still missing. This work proposes a scalable, fast, and efficient spiking neuromorphic hardware system with on-chip local learning capability that achieves competitive classification accuracy. We introduce an effective, hardware-friendly local training algorithm that is compatible with sparse temporal input coding and binary random classification weights, and demonstrate that it delivers competitive accuracy. The proposed digital system exploits spike sparsity in communication, parallelism in vector-matrix operations, and the locality of training errors, leading to low cost and fast training. Taking energy, speed, resource usage, and accuracy into consideration, our design shows 7.7× efficiency over a recent spiking direct feedback alignment method and 2.7× efficiency over the spike-timing-dependent plasticity method.
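
The paper itself is not accompanied by code here, so the following Python sketch is only a rough illustration of the kind of local update the abstract describes: a spiking hidden layer trained with a direct-feedback-alignment-style local error delivered through fixed binary (±1) random classification weights, with inputs presented via sparse time-to-first-spike coding. All names (`latency_encode`, `train_step`), layer sizes, neuron model details, and hyperparameters are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and hyperparameters (assumptions, not taken from the paper).
N_IN, N_HID, N_OUT = 784, 256, 10
T_STEPS = 32      # simulation time steps
V_TH = 1.0        # firing threshold
LR = 1e-3         # learning rate

# Trainable input-to-hidden weights; fixed binary (+1/-1) random classification
# weights on the output, so only local hidden-layer updates are required.
W_in = rng.normal(0.0, 0.1, size=(N_HID, N_IN))
B_out = rng.choice([-1.0, 1.0], size=(N_OUT, N_HID))


def latency_encode(x, t_steps=T_STEPS):
    """Sparse temporal coding: each input emits at most one spike,
    earlier for larger input values (time-to-first-spike)."""
    spike_time = np.round((1.0 - np.clip(x, 0.0, 1.0)) * (t_steps - 1)).astype(int)
    spikes = np.zeros((t_steps, x.size))
    spikes[spike_time, np.arange(x.size)] = 1.0
    return spikes


def train_step(W_in, B_out, x, label):
    """One local update: run integrate-and-fire dynamics, read out through the
    fixed binary weights, and update W_in with a DFA-style local error term."""
    in_spikes = latency_encode(x)      # (T_STEPS, N_IN)
    v = np.zeros(N_HID)                # membrane potentials
    hid_rate = np.zeros(N_HID)         # hidden spike counts (rate proxy)
    trace = np.zeros(N_IN)             # presynaptic eligibility trace

    for t in range(T_STEPS):
        trace = 0.9 * trace + in_spikes[t]   # leaky trace of input spikes
        v += W_in @ in_spikes[t]             # integrate input current
        fired = v >= V_TH
        hid_rate += fired
        v[fired] = 0.0                       # reset membrane after a spike

    logits = B_out @ (hid_rate / T_STEPS)    # readout via binary random weights
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    err = probs - np.eye(N_OUT)[label]       # softmax / cross-entropy error

    # Local learning: project the output error back through the same binary
    # weights, gate it by hidden activity, and pair it with the input trace.
    local_err = (B_out.T @ err) * (hid_rate > 0)
    W_in -= LR * np.outer(local_err, trace)  # in-place update of W_in
    return int(np.argmax(probs))


# Toy usage: a single random "image" with class label 3.
pred = train_step(W_in, B_out, rng.random(N_IN), label=3)
print("predicted class:", pred)
```

One reason such a scheme can map well to hardware, consistent with the abstract's claims, is that ±1 classification weights reduce the error projection (`B_out.T @ err` above) to additions and subtractions, while single-spike temporal coding keeps input communication sparse.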
Keywords
Neuromorphic computing, spiking neural networks, local training, deep learning, backpropagation