Gaining the Sparse Rewards by Exploring Lottery Tickets in Spiking Neural Network
arXiv (2023)
Abstract
Deploying energy-efficient deep learning algorithms on computationally limited
devices, such as robots, remains a pressing issue for real-world applications.
Spiking Neural Networks (SNNs), a novel brain-inspired algorithm, offer a
promising solution owing to their lower latency and energy consumption
compared with traditional Artificial Neural Networks (ANNs). Despite these
advantages, the dense structure of deep SNNs can still incur extra energy
consumption. The Lottery Ticket Hypothesis (LTH) posits that dense neural
networks contain winning Lottery Tickets (LTs), i.e., sparse sub-networks
that can be trained to match the performance of the original dense network.
Inspired by this, this paper delves into spiking-based LTs (SLTs), examining
their unique properties and their potential for extreme efficiency. Two
significant sparse Rewards are then gained through comprehensive exploration
and meticulous experiments on SLTs across various dense structures. Moreover,
a sparse algorithm tailored to the spiking transformer structure, which
incorporates convolution operations into the Patch Embedding Projection
(ConvPEP) module, is proposed to achieve Multi-level Sparsity (MultiSp).
MultiSp refers to (1) patch-number sparsity; (2) ConvPEP weight sparsity and
binarization; and (3) ConvPEP activation-layer binarization. Extensive
experiments demonstrate that our method achieves extreme sparsity with only a
slight performance decrease, paving the way for deploying energy-efficient
neural networks in robotics and beyond.
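For context on the LTH procedure the abstract builds on, below is a minimal sketch of iterative magnitude pruning (IMP), the standard recipe for finding winning tickets (train, prune the smallest weights, rewind the survivors to their initialization, repeat). This illustrates the generic LTH recipe only, not the paper's SLT algorithm; `imp_find_ticket`, the toy model, `train_fn`, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of iterative magnitude pruning (IMP) for lottery tickets.
# Generic LTH recipe only -- not the paper's SLT method; all names and
# hyperparameters here are placeholders for illustration.
import copy
import torch
import torch.nn as nn

def imp_find_ticket(model, train_fn, prune_fraction=0.2, rounds=3):
    """Return one binary mask per weight tensor identifying a winning ticket.

    model: an untrained nn.Module (its initial weights are the ticket's init)
    train_fn: callable that trains the model while keeping masked weights at 0
    """
    init_state = copy.deepcopy(model.state_dict())  # theta_0, kept for rewinding
    # Prune only weight matrices/kernels (dim > 1), not biases.
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
             if p.dim() > 1}

    for _ in range(rounds):
        train_fn(model, masks)  # train with masks enforced
        with torch.no_grad():
            # Global magnitude pruning: drop the smallest surviving weights.
            surviving = torch.cat([p.abs().flatten()[masks[n].flatten() > 0]
                                   for n, p in model.named_parameters()
                                   if n in masks])
            threshold = torch.quantile(surviving, prune_fraction)
            for n, p in model.named_parameters():
                if n in masks:
                    masks[n] = masks[n] * (p.abs() > threshold).float()
            # Rewind surviving weights to their original initialization.
            model.load_state_dict(init_state)
    return masks  # ticket = (init weights, masks); retrain with masks applied

if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

    def train_fn(model, masks):
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        for _ in range(100):
            x = torch.randn(64, 10)
            y = (x.sum(dim=1) > 0).long()  # toy separable task
            loss = nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():  # keep pruned weights at zero
                for n, p in model.named_parameters():
                    if n in masks:
                        p.mul_(masks[n])

    masks = imp_find_ticket(net, train_fn)
    kept = sum(int(m.sum().item()) for m in masks.values())
    total = sum(m.numel() for m in masks.values())
    print(f"ticket density: {kept / total:.2f}")  # ~0.51 = 0.8**3
```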
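MultiSp level (2), ConvPEP weight binarization, is commonly implemented with a sign function in the forward pass and a straight-through estimator (STE) for the gradient, in the spirit of BinaryConnect/XNOR-Net. The sketch below follows that generic recipe under assumed ViT-style patch-embedding shapes; it is not the paper's exact ConvPEP module, and the per-layer scaling choice is an assumption.

```python
# Sketch of conv-weight binarization with a straight-through estimator (STE).
# Generic BinaryConnect/XNOR-style recipe, assumed here for illustration;
# not necessarily the paper's exact ConvPEP implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return w.sign()  # forward pass uses {-1, +1} weights

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # STE: pass the gradient through, clipped to |w| <= 1
        return grad_out * (w.abs() <= 1).float()

class BinaryConv2d(nn.Conv2d):
    """Conv layer whose weights are binarized on the fly; latent
    full-precision weights are kept for the optimizer to update."""
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        # Per-layer scale so output magnitudes roughly match full precision.
        scale = self.weight.abs().mean()
        return F.conv2d(x, w_bin * scale, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

if __name__ == "__main__":
    # E.g., a ViT-style patch embedding: 16x16 patches from RGB images.
    patch_embed = BinaryConv2d(3, 96, kernel_size=16, stride=16)
    x = torch.randn(2, 3, 224, 224)
    print(patch_embed(x).shape)  # torch.Size([2, 96, 14, 14])
```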