Stable Spike-Timing Dependent Plasticity Rule For Multilayer Unsupervised And Supervised Learning

2017 International Joint Conference on Neural Networks (IJCNN)

Cited by 38 | Viewed 36
Abstract
Spike-Timing Dependent Plasticity (STDP), the canonical learning rule for spiking neural networks (SNNs), is gaining tremendous interest because of its simplicity, efficiency, and biological plausibility. To date, however, multilayer feed-forward SNNs have been trained only in limited ways: partially with STDP, by converting pre-trained traditional deep neural networks into deep SNNs, or as two-layer networks in which STDP-learnt features are manually labelled. In this work, we present a low-cost, simplified, yet stable STDP rule for layer-wise unsupervised and supervised training of a multilayer feed-forward SNN. We propose to approximate the Bayesian neuron with a Stochastic Integrate-and-Fire (SIF) neuron model, and we introduce a supervised learning approach that uses teacher neurons to train the classification layer, with one neuron per class. Using the proposed STDP rule, we train an SNN with multiple layers of spiking neurons, covering both the feature-extraction and the classification layers, for handwritten-digit classification. Our method achieves comparable or better accuracy on the MNIST dataset than manually labelled two-layer networks with the same hidden-layer size. We also analyze the parameter space to provide rationales for parameter fine-tuning, and we describe additional methods that improve resilience to noise and to input-intensity variations. We further propose a Quantized 2-Power Shift (Q2PS) STDP rule, which reduces the implementation cost in digital hardware while achieving comparable performance.
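As a concrete illustration, the Python sketch below shows a generic pair-based STDP weight update together with a power-of-two-quantized variant in the spirit of the Q2PS rule. This is an assumed reconstruction, not the authors' exact formulation: the amplitudes (A_PLUS, A_MINUS), time constant (TAU), weight bounds, and bit width are illustrative placeholders.

# Minimal sketch (assumed, not the paper's exact rule): a pair-based STDP
# update and a hypothetical Q2PS-style variant that rounds each update to a
# power of two so fixed-point hardware can apply it with a bit shift.

import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes (assumed)
TAU = 20.0                      # STDP time constant in ms (assumed)
W_MIN, W_MAX = 0.0, 1.0         # weight bounds for stability (assumed)

def stdp_update(w, t_pre, t_post):
    """Return the new weight after one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt >= 0:   # pre fires before post -> potentiate
        dw = A_PLUS * math.exp(-dt / TAU)
    else:         # post fires before pre -> depress
        dw = -A_MINUS * math.exp(dt / TAU)
    return min(W_MAX, max(W_MIN, w + dw))

def q2ps_update(w, t_pre, t_post, n_bits=8):
    """Same pairing, but quantize |dw| to the nearest power of two,
    so the update reduces to a shift of a fixed-point weight."""
    dt = t_post - t_pre
    dw = A_PLUS * math.exp(-dt / TAU) if dt >= 0 else -A_MINUS * math.exp(dt / TAU)
    if dw != 0.0:
        shift = max(0, min(n_bits, round(-math.log2(abs(dw)))))
        dw = math.copysign(2.0 ** (-shift), dw)   # +/- 2^-shift: a shift in hardware
    return min(W_MAX, max(W_MIN, w + dw))

if __name__ == "__main__":
    w = 0.5
    print(stdp_update(w, t_pre=10.0, t_post=15.0))   # causal pairing -> w increases
    print(q2ps_update(w, t_pre=10.0, t_post=15.0))   # quantized version of the same

Quantizing each update to plus or minus 2^-shift lets a fixed-point implementation replace the multiply with a bit shift, which is the kind of digital-hardware cost saving the Q2PS rule targets.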
Keywords
spiking neural network, STDP, digit recognition, unsupervised learning, supervised learning, quantized STDP