Studying the plasticity in deep convolutional neural networks using random pruning

Machine Vision and Applications (2019)

Abstract
Recently, there has been a great deal of work on pruning filters from deep convolutional neural networks (CNNs) with the aim of reducing computation. The key idea is to rank the filters according to some criterion (say, l_1-norm, average percentage of zeros, etc.) and retain only the top-ranked filters. Once the low-scoring filters are pruned away, the remainder of the network is fine-tuned and is shown to give performance comparable to the original unpruned network. In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen, but due to the inherent plasticity of deep neural networks, which allows them to recover from the loss of pruned filters once the remaining filters are fine-tuned. Specifically, we show counterintuitive results wherein, by randomly pruning 25–50% of the filters, we obtain performance comparable to that of state-of-the-art pruning criteria.
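The two pruning strategies contrasted in the abstract, criterion-based ranking (e.g., by l_1-norm) and random selection, can be illustrated with a minimal sketch. The code below is not the authors' implementation; the function name and the use of PyTorch are assumptions made for illustration. It scores each convolutional filter either by the l_1-norm of its weights or by a random permutation and returns the indices of the filters to retain; in an actual pruning pipeline, the kept filters would then be copied into a smaller layer and the network fine-tuned.

# Minimal sketch (hypothetical, not the authors' code) contrasting
# l_1-norm-based filter ranking with random pruning.
import torch
import torch.nn as nn

def filters_to_keep(conv: nn.Conv2d, keep_ratio: float, criterion: str = "l1"):
    """Return indices of the output filters to retain in a conv layer."""
    n_filters = conv.weight.shape[0]  # weight shape: (out, in, kH, kW)
    n_keep = max(1, int(round(keep_ratio * n_filters)))
    if criterion == "l1":
        # Score each filter by the l_1-norm of its weights; keep the top scorers.
        scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
        keep = torch.topk(scores, n_keep).indices
    elif criterion == "random":
        # Random pruning: the baseline the paper shows performs comparably
        # once the remaining filters are fine-tuned.
        keep = torch.randperm(n_filters)[:n_keep]
    else:
        raise ValueError(f"unknown criterion: {criterion}")
    return torch.sort(keep).values

# Example: prune 50% of the filters from a toy conv layer both ways.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
print(filters_to_keep(conv, keep_ratio=0.5, criterion="l1"))
print(filters_to_keep(conv, keep_ratio=0.5, criterion="random"))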
Keywords
Deep learning, Filter pruning, Model compression, Convolutional neural networks