SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Features Recombining

IEEE Transactions on Image Processing (2021)

Abstract
The channel redundancy of convolutional neural networks (CNNs) leads to heavy consumption of memory and computational resources. In this work, we design a novel Slim Convolution (SlimConv) module that boosts the performance of CNNs by reducing channel redundancy. SlimConv consists of three main steps: Reconstruct, Transform, and Fuse. It reorganizes and fuses the learned features more efficiently, so the module compresses the model effectively. SlimConv is a plug-and-play architectural unit that can directly replace convolutional layers in CNNs. We validate its effectiveness through comprehensive experiments on leading benchmarks, including ImageNet classification, MS COCO2014, Pascal VOC2012 segmentation, and Pascal VOC2007 detection. The experiments show that SlimConv-equipped models consistently achieve better performance with lower memory and computation costs than their non-equipped counterparts. For example, ResNet-101 fitted with SlimConv achieves 77.84% top-1 classification accuracy on ImageNet with 4.87 GFLOPs and 27.96M parameters, almost 0.5% higher accuracy than the baseline while using about 3 GFLOPs less computation and 38% fewer parameters.
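To make the three-step pipeline concrete, below is a minimal PyTorch sketch of a SlimConv-style block. The SE-style channel weighting, the use of flipped weights for the second path, and the exact branch widths (C/2 and C/4, giving a 3C/4-channel output) are assumptions for illustration, not the paper's verified configuration.

```python
import torch
import torch.nn as nn


class SlimConv(nn.Module):
    """Hedged sketch of a SlimConv-style block (Reconstruct, Transform, Fuse).

    The SE-style weighting and the split ratios below are assumptions made
    for illustration; they are not confirmed details from the paper.
    `channels` should be divisible by 4 and by `reduction`.
    """

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Assumed SE-style squeeze-and-excitation to produce channel weights.
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )
        half, quarter = channels // 2, channels // 4
        # Transform: two parallel paths with unequal capacity (assumed sizes).
        self.path1 = nn.Conv2d(half, half, kernel_size=3, padding=1, bias=False)
        self.path2 = nn.Sequential(
            nn.Conv2d(half, quarter, kernel_size=1, bias=False),
            nn.Conv2d(quarter, quarter, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        # Channel attention weights and their channel-flipped counterpart.
        w = self.fc(self.squeeze(x).view(n, c)).view(n, c, 1, 1)
        x_top = x * w
        x_bot = x * torch.flip(w, dims=[1])
        # Reconstruct: fold each weighted tensor in half along channels.
        f1 = x_top[:, : c // 2] + x_top[:, c // 2:]
        f2 = x_bot[:, : c // 2] + x_bot[:, c // 2:]
        # Transform each path, then Fuse by concatenation -> 3c/4 channels.
        return torch.cat([self.path1(f1), self.path2(f2)], dim=1)
```

Under these assumptions, the fused output carries roughly three quarters of the input channels, which is the kind of channel compression that would account for the parameter and FLOP reductions the abstract reports; a network adopting such a block would need its following layer sized to the reduced channel count.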
Keywords
Convolution, Computational modeling, Redundancy, Task analysis, Kernel, Image reconstruction, Transforms, Slim convolution, channel redundancy, image classification, model compression