A 28-nm 8-bit Floating-Point Tensor Core-Based Programmable CNN Training Processor With Dynamic Structured Sparsity

IEEE Journal of Solid-State Circuits (2023)

Abstract
Training deep/convolutional neural networks (DNNs/CNNs) requires a large amount of memory and iterative computation, which necessitates speedup and energy reduction, especially for edge devices with resource/energy constraints. In this work, we present an 8-bit floating-point (FP8) training processor that implements: 1) highly parallel tensor cores (fused multiply-add trees) that maintain high utilization throughout the forward propagation (FP), backward propagation (BP), and weight update (WU) phases of the training process; 2) hardware-efficient channel gating for dynamic output activation sparsity; 3) dynamic weight sparsity (WS) based on group Lasso; and 4) gradient skipping based on the FP prediction error. We develop a custom instruction set architecture (ISA) to flexibly support different CNN topologies and training parameters. The 28-nm prototype chip demonstrates large improvements in floating-point operations (FLOPs) reduction (7.3x), energy efficiency (16.4 TFLOPS/W), and overall training latency speedup (4.7x), for both supervised and self-supervised training tasks.
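The abstract does not spell out the exact hardware formulation of the two structured-sparsity mechanisms, so the following NumPy sketch only illustrates the generic software-level ideas they are based on: a group-Lasso proximal step that zeros whole output-channel weight groups (dynamic WS), and a simple channel gate that drops low-saliency output activations (dynamic activation sparsity). Function names, tensor shapes, and thresholds are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def group_lasso_prox(W, lam, lr):
    """Group soft-thresholding (proximal step for the group-Lasso penalty).

    Shrinks each output-channel group of W; groups whose L2 norm falls
    below lr * lam collapse to exactly zero, giving structured weight sparsity.

    W: weight tensor of shape (out_channels, in_channels, kH, kW)  [assumed layout]
    """
    group_norms = np.sqrt((W.reshape(W.shape[0], -1) ** 2).sum(axis=1))
    scale = np.maximum(0.0, 1.0 - lr * lam / (group_norms + 1e-12))
    return W * scale[:, None, None, None]

def channel_gate(activations, threshold):
    """Zero out output channels whose mean activation magnitude is below a
    threshold, yielding input-dependent (dynamic) activation sparsity.

    activations: (batch, channels, H, W)  [assumed layout]
    Returns the gated activations and the per-channel binary mask.
    """
    saliency = np.abs(activations).mean(axis=(0, 2, 3))
    mask = (saliency >= threshold).astype(activations.dtype)
    return activations * mask[None, :, None, None], mask
```

In the chip itself these decisions are made on the fly in hardware during training; the sketch above only shows the mathematical form of the structured-sparsity updates that motivate them.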
Keywords
Activation sparsity, CNNs, convolutional neural network (CNN) training processor, energy-efficient accelerator, weight sparsity (WS)