Low Latency and Sparse Computing Spiking Neural Networks With Self-Driven Adaptive Threshold Plasticity.

IEEE Transactions on Neural Networks and Learning Systems (2023)

Cited by 1 | Viewed 15

Abstract
Spiking neural networks (SNNs) have attracted worldwide attention owing to their compelling advantages in low power consumption, high biological plausibility, and strong robustness. However, the intrinsic latency of SNNs during inference poses a significant challenge, impeding their further development and application. This latency arises because spiking neurons must accumulate electrical stimuli and generate spikes only when their membrane potential exceeds a firing threshold. Considering that the firing threshold plays a crucial role in SNN performance, this article proposes a self-driven adaptive threshold plasticity (SATP) mechanism, wherein neurons autonomously adjust their firing thresholds based on their individual state information using unsupervised learning rules, with each adjustment triggered by the neuron's own firing events. SATP is based on the principle of maximizing the information contained in the output spike rate distribution of each neuron. This article derives the mathematical expression of SATP and provides extensive experimental results, demonstrating that SATP effectively reduces SNN inference latency and computation density while improving computational accuracy, thereby yielding SNN models with low latency, sparse computing, and high accuracy.
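To make the threshold-adaptation idea concrete, the sketch below simulates a leaky integrate-and-fire (LIF) neuron whose firing threshold is raised each time the neuron itself fires and decays back toward a baseline otherwise. This is a generic, illustrative adaptive-threshold rule; the actual SATP update derived in the paper (which maximizes the information in the output spike rate distribution) is not reproduced here, and all parameter names and values are assumptions for the example.

```python
import numpy as np

def lif_adaptive_threshold(inputs, tau_m=20.0, tau_th=100.0,
                           v_reset=0.0, theta0=1.0, delta_theta=0.2, dt=1.0):
    """Simulate a LIF neuron with a self-driven adaptive firing threshold.

    Illustrative rule only (not the paper's SATP update): the threshold
    jumps by `delta_theta` on each of the neuron's own spikes and decays
    back toward the baseline `theta0` with time constant `tau_th`.
    """
    v, theta = v_reset, theta0
    spikes = []
    for x in inputs:
        # Leaky membrane integration toward the input current.
        v += dt / tau_m * (-v + x)
        if v >= theta:
            spikes.append(1)
            v = v_reset           # Reset membrane potential after firing.
            theta += delta_theta  # Self-driven: firing raises the threshold.
        else:
            spikes.append(0)
            # Between spikes, the threshold relaxes toward its baseline.
            theta += dt / tau_th * (theta0 - theta)
    return np.array(spikes)
```

Because each spike raises the threshold, a constant input drives the neuron to fire more sparsely than a fixed-threshold LIF neuron would, which is the qualitative effect (sparser computing) that the abstract attributes to threshold adaptation.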
Keywords
Low latency inference,neuronal firing threshold,self-driven adaptive threshold plasticity (SATP),sparse computing,spiking neural network (SNN)