Towards the Out-of-Distribution Generalization of Contrastive Self-Supervised Learning

Xuyang Zhao, Tianqi Du, Yisen Wang, Jun Yao, Weiran Huang

ICLR 2023

Self-supervised learning (SSL) has attracted much attention recently, since, in contrast to supervised learning, it does not require labeled data for training. Empirical studies also observe that it transfers better than supervised learning. However, the theoretical study of the out-of-distribution (OOD) generalization ability of self-supervised learning is still limited. In this paper, by focusing on the data augmentation used in SSL, we establish a theoretical framework for the OOD performance of contrastive self-supervised learning. Although some recent work claims that contrastive learning learns more robust representations than supervised learning, our results suggest that this superiority mainly comes from the data augmentation used, i.e., more data are fed to the model. In more challenging OOD scenarios, standard contrastive learning still suffers from the same generalization problem as empirical risk minimization (ERM). Based on our theoretical results, we propose an augmentation-robust contrastive learning approach, named ArCL, which significantly improves the OOD performance of contrastive learning on several datasets.
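The abstract does not spell out the contrastive objective it analyzes, but standard contrastive learning is typically trained with an InfoNCE-style loss over pairs of augmented views, where augmentation is exactly the component the paper's analysis focuses on. Below is a minimal NumPy sketch of that standard loss (not the paper's ArCL method); the function name and temperature value are illustrative choices:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE loss for a batch of positive pairs.

    z1, z2: (N, d) embeddings of two augmented views of the same N inputs.
    Row i of z1 and row i of z2 form a positive pair; the other rows in
    the batch act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Softmax cross-entropy with the diagonal (the positives) as targets.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss pulls the two augmented views of each input together while pushing apart views of different inputs, which is why the choice of augmentation distribution directly shapes what the learned representation is invariant to.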
Contrastive learning, out-of-distribution generalization