Improving generalization of convolutional neural network through contrastive augmentation.

Knowl. Based Syst. (2023)

Abstract
Data augmentation is widely used to improve the generalization ability of convolutional neural networks in the image domain. Conventional augmentation schemes, e.g., single augmentation (including RandAug, Mixup, and CutMix) or batch augmentation, only optimize the network from pairs of augmented images and their corresponding labels, leading to a loss of discriminative representation learning. To this end, we propose contrastive augmentation (CA), which learns discriminative representations by capturing the contrastive semantics among augmented samples. Specifically, the proposed CA method explicitly regularizes the similarity of augmented sample pairs to enable the model to learn contrastive semantics. Equipped with CA at the training stage, the model simultaneously learns classification representations and contrastive semantics, which makes the representations discriminative without incurring additional inference computational cost. On conventional and fine-grained classification tasks, the experimental results show that the proposed method effectively improves the generalization capability of convolutional neural networks, including ResNet, EfficientNet, MobileNet, ShuffleNet, and VAN. The compatibility experiments show that our contrastive augmentation can be combined with other augmentation techniques, e.g., Mixup and CutMix, to further improve the model's performance.
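
The abstract does not give the exact loss, but the idea of regularizing the similarity of augmented sample pairs alongside the classification objective can be illustrated with a minimal PyTorch-style sketch. The names backbone, classifier, augment, and the weight lambda_ca below are illustrative assumptions, not the paper's actual formulation.

# Minimal sketch (not the paper's exact method): jointly optimize a
# classification loss and a similarity regularizer over two augmented
# views of each image, so the backbone also learns contrastive semantics.
import torch
import torch.nn.functional as F

def ca_training_step(backbone, classifier, images, labels, augment, lambda_ca=0.1):
    # backbone, classifier, augment, and lambda_ca are hypothetical
    # components/hyperparameters used only for illustration.
    view1, view2 = augment(images), augment(images)   # two independent augmentations

    feat1 = backbone(view1)      # (B, D) feature embeddings
    feat2 = backbone(view2)

    # Standard supervised classification loss on the first view.
    logits = classifier(feat1)
    cls_loss = F.cross_entropy(logits, labels)

    # Regularize the similarity of augmented sample pairs: pull the two
    # views of the same image together in feature space.
    z1 = F.normalize(feat1, dim=1)
    z2 = F.normalize(feat2, dim=1)
    ca_loss = (1.0 - (z1 * z2).sum(dim=1)).mean()     # 1 - cosine similarity

    return cls_loss + lambda_ca * ca_loss

The extra term only affects training; at inference time the classifier is applied as usual, which is consistent with the abstract's claim of no additional inference cost.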
Keywords
Regularization, Contrastive augmentation, Convolutional neural network, Discriminative representation learning