A scalable implementation of the recursive least-squares algorithm for training spiking neural networks

bioRxiv (2022)

Abstract
Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a prominent tool to study computations in the brain. With the increasing size and complexity of neural recordings, there is a need for fast algorithms that can scale to large datasets. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation allows training networks to reproduce the neural activity of on the order of a million neurons, an order of magnitude faster than the CPU implementation. We demonstrate this by applying our algorithm to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables efficient training of large-scale spiking models, thus allowing for in-silico study of the dynamics and connectivity underlying multi-area computations.

Competing Interest Statement
The authors have declared no competing interest.
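The recursive least-squares update at the core of this training approach (as used, e.g., in FORCE-style learning) can be sketched as follows. This is a generic illustration, not the paper's optimized CPU/GPU implementation; the dimension `N`, the function names, and the synthetic linear target used for the demonstration are all assumptions for the sketch.

```python
# Minimal sketch of one recursive least-squares (RLS) step: refine a linear
# readout w and a running inverse-correlation estimate P from an activity
# vector r and a desired output. Illustrative only, not the paper's code.
import numpy as np

def rls_step(w, P, r, target):
    """One RLS update for readout weights w given activity r."""
    Pr = P @ r                        # P r
    k = Pr / (1.0 + r @ Pr)           # gain vector
    e = w @ r - target                # current prediction error
    w = w - e * k                     # error-proportional weight correction
    P = P - np.outer(k, Pr)           # rank-1 update of inverse correlation
    return w, P

rng = np.random.default_rng(0)
N = 50                                # hypothetical network size
v = rng.standard_normal(N)            # hidden "true" readout to recover
w = np.zeros(N)
P = np.eye(N)                         # P starts as a regularized identity
for _ in range(500):
    r = rng.standard_normal(N)        # stand-in for network firing rates
    w, P = rls_step(w, P, r, v @ r)   # target: a noiseless linear readout
```

Because each step costs O(N^2) in the rank-1 update of `P`, batching these matrix-vector products on a GPU is what makes the algorithm scale to very large networks.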
Keywords
neural networks, scalable implementation, least-squares