Pruning the deep neural network by similar function

Hanqing Liu, Bo Xin, Senlin Mu, Zhangqing Zhu

Journal of Physics: Conference Series (2019)

Abstract
Recent deep neural networks have become deeper and deeper, while the demand for low-computational-cost models keeps growing. Existing pruning algorithms usually prune the network layer by layer, or use the weight sum as the importance score. However, these methods do not work very well. In this paper, we propose a unified framework to accelerate and compress cumbersome CNN models. We cast pruning as an optimization problem: finding the subset of the model that produces the most comparable outputs. We concentrate on filter-level pruning. Experiments show that our method surpasses existing filter-level pruning algorithms, and that treating the network as a whole is better than pruning it layer by layer. We also evaluate on the large-scale ImageNet dataset; the result shows that we can accelerate VGG-16 by 3.18× without an accuracy drop.
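The abstract does not spell out the optimization, but the core idea it contrasts with weight-sum scoring, namely ranking filters by how little their removal perturbs the layer's outputs, can be illustrated with a minimal sketch. Everything below (the naive conv2d helper, tensor shapes, and the keep-half pruning ratio) is a hypothetical illustration, not the authors' implementation.

```python
import numpy as np

# Hypothetical toy setup: one conv layer's weights, shaped
# (out_channels, in_channels, k, k), and a batch of input feature maps.
rng = np.random.default_rng(0)
weights = rng.standard_normal((16, 3, 3, 3))   # 16 filters
inputs = rng.standard_normal((8, 3, 16, 16))   # sample activations

def conv2d(x, w):
    """Naive 'valid' convolution over a (batch, channels, H, W) input."""
    n, c, h, ww = x.shape
    oc, _, k, _ = w.shape
    oh, ow = h - k + 1, ww - k + 1
    out = np.zeros((n, oc, oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[:, :, i:i + k, j:j + k].reshape(n, -1)
            out[:, :, i, j] = patch @ w.reshape(oc, -1).T
    return out

baseline = conv2d(inputs, weights)

# Score each filter by the output reconstruction error incurred when it
# is zeroed out: filters whose removal changes the output least are
# pruned first. This is an output-based criterion, as opposed to the
# weight-sum (L1-norm) score the paper argues against.
scores = []
for f in range(weights.shape[0]):
    pruned = weights.copy()
    pruned[f] = 0.0
    scores.append(np.linalg.norm(conv2d(inputs, pruned) - baseline))

keep = np.argsort(scores)[len(scores) // 2:]   # keep the top half
print("filters kept:", sorted(keep.tolist()))
```

Note that at a single layer, zeroing filter f only zeroes output channel f, so this score reduces to the activation norm of that channel; the benefit of an output-matching objective over weight magnitude shows up once subsequent layers, or the whole network, are taken into account, which is the paper's stated motivation for pruning globally rather than layer by layer.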