Deeper Weight Pruning without Accuracy Loss in Deep Neural Networks

2020 Design, Automation & Test in Europe Conference & Exhibition (DATE)

Abstract
This work overcomes an inherent limitation of bit-level weight pruning: the maximal computation speedup is bounded by the total number of non-zero bits of the weights, and this bound is invariably treated as "uncontrollable" (i.e., constant) for the neural network to be pruned. Specifically, based on the canonical signed digit (CSD) encoding, this work (1) proposes a transformation technique that converts the two's complement representation of every weight into a set of CSD representations with the minimal or near-minimal number of essential (i.e., non-zero) bits, (2) formulates the selection of CSD representations of weights that maximizes the parallelism of bit-level multiplication on the weights as a multi-objective shortest path problem and solves it efficiently using an approximation algorithm, and (3) proposes a supporting novel acceleration architecture that requires no additional non-trivial hardware. Experiments show that our proposed approach reduces the number of essential bits by 69% on AlexNet and 74% on VGG-16, by which our accelerator reduces the inference computation time by 47% on AlexNet and 50% on VGG-16 over conventional bit-level weight pruning.
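For context, the sketch below illustrates CSD encoding, the representation the abstract builds on. It is an illustration only, not the paper's implementation; the helper names `to_csd` and `nonzero_count` are hypothetical. CSD writes an integer with digits in {-1, 0, +1} such that no two adjacent digits are non-zero, which is known to minimize the number of non-zero digits; the sketch counts the essential bits that the speedup bound depends on.

```python
# Illustrative sketch of canonical signed digit (CSD) encoding -- an
# assumption-based example, not the authors' code. CSD uses digits in
# {-1, 0, +1} with no two adjacent non-zero digits, minimizing the
# number of non-zero ("essential") digits of an integer weight.

def to_csd(n: int) -> list[int]:
    """Return the CSD digits of n, least-significant digit first."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            digits.append(0)
            n //= 2
        else:
            # Pick +1 when n % 4 == 1 and -1 when n % 4 == 3, so the next
            # digit is forced to zero (no adjacent non-zero digits).
            d = 2 - (n % 4)
            digits.append(d)
            n = (n - d) // 2
    return digits

def nonzero_count(digits: list[int]) -> int:
    """Number of essential (non-zero) digits -- the quantity that bounds
    the speedup of bit-level weight pruning."""
    return sum(1 for d in digits if d != 0)

if __name__ == "__main__":
    for w in (7, 15, -3, 110):
        csd = to_csd(w)
        # Sanity check: the signed digits reconstruct the original value.
        assert sum(d << i for i, d in enumerate(csd)) == w
        print(f"{w:4d}: binary non-zeros = {bin(abs(w)).count('1')}, "
              f"CSD non-zeros = {nonzero_count(csd)}")
```

For example, the weight 7 is 111 in binary (three non-zero bits) but 100(-1), i.e., 8 - 1, in CSD (two non-zero bits). Note that the CSD form of a given integer is unique; per the abstract, the paper's transformation produces a set of minimal or near-minimal signed-digit representations per weight, from which the shortest-path step selects the combination that maximizes bit-level parallelism.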
Keywords
bit-level weight pruning, acceleration architecture, deeper weight pruning, maximal computation speedup, deep neural networks, non-trivial hardware, multi-objective shortest path problem, bit-level multiplication, near-minimal number, CSD representations, transformation technique, canonical signed digit encoding