Identity-linked Group Channel Pruning for Deep Neural Networks

2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN)(2021)

Abstract
Channel pruning is a commonly used model compression technique for convolutional neural networks. Structured pruning with sparse constraints can automatically learn the importance of parameters during training by imposing sparsity on them. However, existing sparsity-based pruning methods cannot handle the final convolutional layer of a residual module with complex connections. Because of the residual connection, if the final convolutional layer of the residual module is pruned, the sparse channels of the feature map from the residual connection no longer correspond to those of the module's output feature map, so the parameters cannot be pruned. This paper studies this problem and proposes an identity-linked group pruning algorithm, which we call IGP. IGP groups the parameters and channels that generate corresponding feature maps, uses Group Lasso to sparsify each group of parameters as a whole, and forces the sparsity of correlated parameters to remain consistent with each other. Experiments show that when IGP prunes 60% of the parameters of ResNet-56, model accuracy drops by only 0.36%, which is better than existing pruning methods based on sparse constraints. At a high compression ratio, IGP can prune 87% of the parameters of ResNet-50 with an accuracy drop of only 0.76%, which is 5.17% better than existing methods.
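The abstract describes grouping identity-linked channels and applying Group Lasso so that channels feeding the same residual addition are pruned together. The following is a minimal, hypothetical sketch of that idea in plain Python (not the authors' implementation): each "group" stands for the weights of identity-linked channels, the penalty is the sum of per-group L2 norms, and a whole group is zeroed when its norm falls below a threshold, keeping sparsity consistent across the residual connection.

```python
import math

def group_lasso_penalty(groups):
    """Group Lasso regularizer: sum of the L2 norms of each parameter group.

    Each group collects the weights of channels that are identity-linked
    (i.e., they must be pruned together to keep the residual addition valid).
    """
    return sum(math.sqrt(sum(w * w for w in g)) for g in groups)

def prune_groups(groups, threshold):
    """Zero out entire groups whose L2 norm falls below the threshold.

    Pruning at group granularity guarantees that all identity-linked
    channels are removed together, so the channel indices of the residual
    branch and the module output stay aligned.
    """
    pruned = []
    for g in groups:
        norm = math.sqrt(sum(w * w for w in g))
        pruned.append([0.0] * len(g) if norm < threshold else list(g))
    return pruned

# Toy example with two identity-linked channel groups:
groups = [[0.01, -0.02, 0.005],   # near-zero group -> pruned as a whole
          [0.9, -1.1, 0.4]]       # strong group -> kept intact
print(prune_groups(groups, threshold=0.1))
# -> [[0.0, 0.0, 0.0], [0.9, -1.1, 0.4]]
```

In training, `group_lasso_penalty` would be added to the loss so that unimportant groups are driven toward zero before `prune_groups` removes them; the threshold here is an illustrative hyperparameter, not one from the paper.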
Keywords
Channel pruning, sparsity consistency, sparse training, residual connection