CEModule: A Computation Efficient Module for Lightweight Convolutional Neural Networks

IEEE Transactions on Neural Networks and Learning Systems (2023)

Abstract
Lightweight convolutional neural networks (CNNs) rely heavily on the design of lightweight convolutional modules (LCMs). For an LCM, lightweight design based on repetitive feature maps (LoR) is currently one of the most effective approaches. An LoR mainly involves the extraction of feature maps from convolutional layers (CE) and feature map regeneration through cheap operations (RO). However, existing LoR approaches make lightweight improvements only on the RO side and ignore the poor generalization, low stability, and high computation workload incurred in the CE part. To alleviate these problems, this article introduces the concept of key features from a CNN model interpretation perspective. It then presents a novel LCM, namely CEModule, focusing on the CE part. CEModule increases the number of key features to maintain a high level of classification accuracy, while employing a group convolution strategy to reduce the floating-point operations (FLOPs) incurred during training. Finally, this article presents a dynamic adaptation algorithm ($\alpha$-DAM) to enhance the generalization of CEModule-enabled lightweight CNN models, including the developed CENet, when dealing with datasets of different scales. Compared with state-of-the-art results, CEModule reduces FLOPs by up to 54% on CIFAR-10 while maintaining a similar level of classification accuracy. On ImageNet, CENet improves accuracy by 1.2% under the same FLOPs and training strategies.
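The abstract attributes CEModule's FLOPs reduction partly to a group convolution strategy. The snippet below is only a minimal sketch of that general mechanism, not the authors' CEModule implementation: the layer shapes, group count, and the hypothetical conv_flops helper are assumptions chosen for illustration, showing how splitting channels into g groups cuts the multiply-accumulate count of a convolution roughly by a factor of g while preserving the output shape.

```python
# Minimal sketch (assumed shapes and group count), not the paper's CEModule.
import torch
import torch.nn as nn

def conv_flops(in_ch, out_ch, k, h, w, groups=1):
    # Hypothetical helper: multiply-accumulate count of a k x k convolution
    # producing an h x w output map; grouping divides the per-filter input channels.
    return (in_ch // groups) * out_ch * k * k * h * w

in_ch, out_ch, k, h, w = 64, 128, 3, 32, 32
standard = nn.Conv2d(in_ch, out_ch, k, padding=1, groups=1)
grouped = nn.Conv2d(in_ch, out_ch, k, padding=1, groups=4)

x = torch.randn(1, in_ch, h, w)
assert standard(x).shape == grouped(x).shape  # same output resolution and channels

print("standard FLOPs:", conv_flops(in_ch, out_ch, k, h, w, groups=1))
print("grouped  FLOPs:", conv_flops(in_ch, out_ch, k, h, w, groups=4))  # ~4x fewer
```

Under these assumed settings the grouped layer needs about a quarter of the multiply-accumulates of the standard one; how CEModule combines this with its key-feature extraction is detailed in the paper itself.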
Keywords
Automated machine learning (AutoML), convolutional neural networks (CNNs), feature map regeneration, hyperparameter optimization (HPO), lightweight, neural network interpretation