EAS-SNN: End-to-End Adaptive Sampling and Representation for Event-based Detection with Recurrent Spiking Neural Networks
arXiv (2024)
Abstract
Event cameras, with their high dynamic range and temporal resolution, are
ideally suited for object detection, especially under scenarios with motion
blur and challenging lighting conditions. However, while most existing
approaches prioritize optimizing spatiotemporal representations with advanced
detection backbones and early aggregation functions, the crucial issue of
adaptive event sampling remains largely unaddressed. Spiking Neural Networks
(SNNs), which operate on an event-driven paradigm through sparse spike
communication, emerge as a natural fit for addressing this challenge. In this
study, we discover that the neural dynamics of spiking neurons align closely
with the behavior of an ideal temporal event sampler. Motivated by this
insight, we propose a novel adaptive sampling module that leverages recurrent
convolutional SNNs enhanced with temporal memory, facilitating a fully
end-to-end learnable framework for event-based detection. Additionally, we
introduce Residual Potential Dropout (RPD) and Spike-Aware Training (SAT) to
regulate potential distribution and address performance degradation encountered
in spike-based sampling modules. Through rigorous testing on neuromorphic
datasets for event-based detection, our approach demonstrably surpasses
existing state-of-the-art spike-based methods, achieving superior performance
with significantly fewer parameters and time steps. For instance, our method
achieves a 4.4% mAP improvement on the Gen1 dataset, while requiring 38%
fewer parameters and three time steps. Moreover, the applicability and
effectiveness of our adaptive sampling methodology extend beyond SNNs, as
demonstrated through further validation on conventional non-spiking detection
models.
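The abstract's central observation, that leaky spiking dynamics naturally behave like an adaptive temporal event sampler, can be illustrated with a minimal leaky integrate-and-fire (LIF) loop. The sketch below is not the paper's implementation; the function name, decay factor, and threshold are illustrative assumptions, and the real method uses recurrent convolutional SNN layers trained end to end.

```python
# Minimal sketch (not the authors' code): a leaky integrate-and-fire (LIF)
# neuron whose membrane potential accumulates incoming event counts and
# emits a spike -- i.e. triggers a "sample" -- once a threshold is crossed.
import torch

def lif_sampler(event_bins, decay=0.5, v_threshold=1.0):
    """event_bins: tensor of shape (T, C, H, W) with per-bin event counts.
    Returns a binary spike train of the same shape; a spike marks a time
    step at which enough recent activity has accumulated to warrant sampling."""
    v = torch.zeros_like(event_bins[0])   # membrane potential
    spikes = []
    for x in event_bins:                  # iterate over time bins
        v = decay * v + x                 # leaky integration of input events
        s = (v >= v_threshold).float()    # fire when the threshold is reached
        v = v - s * v_threshold           # soft reset after a spike
        spikes.append(s)
    return torch.stack(spikes)
```

Under this view, dense bursts of events push the potential over threshold quickly (frequent sampling), while quiet periods let the potential decay (sparse sampling), which is the adaptive behavior the paper attributes to spiking neurons.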