Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise Sparsity

SC 2020

Citations: 72 | Views: 73
Abstract
Network pruning can reduce the high computation cost of deep neural network (DNN) models. However, to maintain accuracy, sparse models often carry randomly distributed weights, leading to irregular computations. Consequently, sparse models cannot achieve meaningful speedups on commodity hardware (e.g., GPUs) built for dense matrix computations. As such, prior works usually modify or design completely new sparsity-optimized architectures to exploit sparsity. We propose an algorithm-software co-designed pruning method that achieves latency speedups on existing dense architectures. Our work builds on the insight that matrix multiplication is generally tiled: the large matrix is broken into multiple smaller tiles for parallel execution. We propose a tiling-friendly "tile-wise" sparsity pattern, which maintains a regular pattern at the tile level for efficient execution but allows irregular, arbitrary pruning at the global scale to preserve high accuracy. We implement and evaluate the sparsity pattern on the GPU tensor core, achieving a 1.95× speedup over the dense model.
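
To illustrate the tile-wise idea described in the abstract, the sketch below (not taken from the paper; the function name tile_wise_prune, the tile size, and the magnitude-based column scoring are all assumptions) shows one way to prune whole columns inside each tile of a weight matrix, so that every tile keeps a regular, smaller dense structure while different tiles may drop different columns, giving irregular sparsity at the global scale.

import numpy as np

def tile_wise_prune(weight, tile_cols=64, sparsity=0.5):
    """Hypothetical sketch of tile-wise pruning (assumed details, not the paper's exact algorithm).

    The weight matrix is split along its columns into tiles of tile_cols columns.
    Within each tile, the columns with the smallest L2 magnitude are zeroed out,
    so each tile stays a regular (dense, smaller) block, while the set of pruned
    columns can differ arbitrarily from tile to tile.
    """
    pruned = weight.copy()
    n_rows, n_cols = weight.shape
    for start in range(0, n_cols, tile_cols):
        end = min(start + tile_cols, n_cols)
        tile = pruned[:, start:end]
        # Score each column in this tile by its L2 norm (a magnitude criterion, assumed here).
        scores = np.linalg.norm(tile, axis=0)
        n_drop = int(sparsity * tile.shape[1])
        if n_drop == 0:
            continue
        drop_idx = np.argsort(scores)[:n_drop]   # least-important columns within this tile
        tile[:, drop_idx] = 0.0                  # prune whole columns, keeping the tile regular
    return pruned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 256)).astype(np.float32)
    W_sparse = tile_wise_prune(W, tile_cols=64, sparsity=0.5)
    print("global sparsity:", float((W_sparse == 0).mean()))

Because each tile remains a dense sub-matrix after pruning, the per-tile multiplication can, in principle, run on tensor cores without sparsity-specific hardware; the paper's actual pruning granularity and GPU kernel design may differ, and this sketch only illustrates the regular-within-tile, irregular-across-tiles structure.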
Keywords
hardware support, tile-wise sparsity, network pruning, high computation cost, deep neural network models, sparse models, randomly distributed weights, irregular computations, meaningful speedup, commodity hardware, dense matrix computations, sparsity-optimized architectures, exploiting sparsity, pruning method, latency speedups, dense architectures, matrix multiplication, multiple smaller tiles, tiling-friendly tile-wise, tile level, irregular pruning, arbitrary pruning, sparsity pattern, dense model, sparse DNN models