MetaMixUp: Learning Adaptive Interpolation Policy of MixUp With Metalearning

IEEE Transactions on Neural Networks and Learning Systems (2022)

Abstract
MixUp is an effective data augmentation method that regularizes deep neural networks via random linear interpolations between pairs of samples and their labels. It plays an important role in model regularization, semisupervised learning (SSL), and domain adaptation. However, despite its empirical success, the deficiency of randomly mixing samples has been poorly studied. Since deep networks are capable of memorizing the entire data set, corrupted samples generated by vanilla MixUp under a badly chosen interpolation policy will degrade network performance. To overcome overfitting to such corrupted samples, and inspired by metalearning (learning to learn), we propose a novel technique of learning to mix up, namely, MetaMixUp. Unlike vanilla MixUp, which samples the interpolation policy from a predefined distribution, this article introduces a metalearning-based online optimization approach that dynamically learns the interpolation policy in a data-adaptive way (learning to learn better). The validation-set performance measured via metalearning captures the degree of noise and provides optimal directions for interpolation policy learning. Furthermore, we adapt our method to pseudolabel-based SSL along with a refined pseudolabeling strategy. In our experiments, our method achieves better performance than vanilla MixUp and its variants under the supervised learning (SL) configuration. In particular, extensive experiments show that MetaMixUp adapted to SSL greatly outperforms MixUp and many state-of-the-art methods on the CIFAR-10 and SVHN benchmarks under the SSL configuration.
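As a concrete illustration of the idea described in the abstract, the sketch below contrasts vanilla MixUp, which draws the interpolation coefficient lambda from a fixed Beta distribution, with a simplified meta-gradient update that adapts lambda using a held-out validation loss. This is a minimal toy sketch in PyTorch, not the authors' implementation: the linear classifier, the random toy data, the single shared lambda, and all learning rates are illustrative assumptions.

```python
# Minimal toy sketch of the MetaMixUp idea (not the authors' code).
# Assumptions: a linear classifier, random toy data, one shared scalar
# lambda, and plain SGD for both the inner and the meta update.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical training/validation split (20-dim inputs, 2 classes).
x_tr, y_tr = torch.randn(64, 20), torch.randint(0, 2, (64,))
x_va, y_va = torch.randn(32, 20), torch.randint(0, 2, (32,))

w = (0.1 * torch.randn(20, 2)).requires_grad_()    # linear classifier
lam_logit = torch.tensor(0.0, requires_grad=True)  # sigmoid -> lambda in (0, 1)
lr, meta_lr = 0.1, 0.05

def mixup_loss(weights, lam, perm):
    """MixUp training loss: interpolate inputs and both label terms by lambda."""
    x_mix = lam * x_tr + (1.0 - lam) * x_tr[perm]
    logits = x_mix @ weights
    return lam * F.cross_entropy(logits, y_tr) + \
           (1.0 - lam) * F.cross_entropy(logits, y_tr[perm])

for step in range(100):
    perm = torch.randperm(x_tr.size(0))

    # Vanilla MixUp would instead sample lambda from a fixed distribution:
    # lam = torch.distributions.Beta(1.0, 1.0).sample()
    lam = torch.sigmoid(lam_logit)

    # Virtual SGD step on the training loss, kept differentiable in lambda.
    grad_w = torch.autograd.grad(mixup_loss(w, lam, perm), w,
                                 create_graph=True)[0]
    w_virtual = w - lr * grad_w

    # Meta update: the validation loss of the virtually updated model
    # tells lambda which interpolation generalizes better.
    va_loss = F.cross_entropy(x_va @ w_virtual, y_va)
    grad_lam = torch.autograd.grad(va_loss, lam_logit)[0]
    with torch.no_grad():
        lam_logit -= meta_lr * grad_lam

    # Real update of the model under the freshly adapted lambda.
    lam = torch.sigmoid(lam_logit).detach()
    grad_w = torch.autograd.grad(mixup_loss(w, lam, perm), w)[0]
    with torch.no_grad():
        w -= lr * grad_w

print(f"adapted lambda = {torch.sigmoid(lam_logit).item():.3f}")
```

The single persistent lambda here only demonstrates the meta-gradient mechanics; the paper learns the interpolation policy online per iteration and, in the SSL setting, combines it with a refined pseudolabeling strategy.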
Keywords
Deep learning, metalearning, MixUp, regularization