C-DNN: A 24.5-85.8TOPS/W Complementary-Deep-Neural-Network Processor with Heterogeneous CNN/SNN Core Architecture and Forward-Gradient-Based Sparsity Generation.

ISSCC (2023)

Abstract
Spiking-Neural-Networks (SNNs) have been studied for a long time, and have recently been shown to achieve the same accuracy as Convolutional-Neural-Networks (CNNs). By using CNN-to-SNN conversion, SNNs become a promising candidate for ultra-low-power AI applications [1]. For example, compared to BNNs or XOR-nets, SNNs provide lower power consumption and higher accuracy [2]. This is because SNNs perform spike-based, event-driven operation with high spike sparsity, unlike a CNN's frame-driven operation. Fig. 22.5.1 shows that the energy consumption of an SNN fluctuates from layer to layer depending on spike sparsity, which changes with each layer, whereas a CNN shows comparatively little variation. SNNs also offer low-power training by generating a Forward-Gradient (FG), which is computed from the time difference between a pre-spike and a post-spike, similar to STDP in a biological neuron [3]. However, SNN accuracy is lower than that of a CNN, and SNN supervised training, such as back-propagation through time (BPTT), also shows low accuracy. Conversely, CNNs can achieve high accuracy with back-propagation (BP) training, but this requires heavy computation due to iterative BP and gradient generation (GG). CNNs and SNNs have been distinct research areas; however, they have complementary advantages, and there is a ground-breaking possibility that they can be combined to perform energy-efficient inference and training with high accuracy.
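The abstract describes the Forward-Gradient as an STDP-like quantity derived from the timing difference between a pre-spike and a post-spike. The following is a minimal illustrative sketch of such a timing-based gradient in Python; it is not the paper's hardware implementation, and the time constant `tau` and learning rate `lr` are assumed values chosen only for illustration.

import numpy as np

# Illustrative STDP-like forward gradient (assumed form, not the C-DNN hardware):
# the gradient sign and magnitude depend only on the pre/post spike timing difference.
def forward_gradient(t_pre, t_post, tau=20.0, lr=0.01):
    dt = t_post - t_pre                 # spike timing difference (ms)
    if dt >= 0:                         # pre-spike precedes post-spike -> potentiation
        return lr * np.exp(-dt / tau)
    else:                               # post-spike precedes pre-spike -> depression
        return -lr * np.exp(dt / tau)

# Example: pre-spike at 5 ms, post-spike at 8 ms -> small positive weight update
print(forward_gradient(5.0, 8.0))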
Keywords
24.5-85.8TOPS/W complementary-deep-neural-network processor, back-propagation training, C-DNN, CNN frame-driven operation, CNN-to-SNN conversion, comparatively lower variation, convolutional-neural-networks, energy consumption, event-driven operation, forward-gradient-based sparsity generation, heterogeneous CNN/SNN core architecture, high spike sparsity, low-power training, post-spike, pre-spike, SNN accuracy, spiking-neural-networks, time difference, ultra-low power AI applications