Self-Supervised Learning by Estimating Twin Class Distribution

IEEE Transactions on Image Processing (2023)

Abstract
We present Twist, a simple and theoretically explainable self-supervised representation learning method that classifies large-scale unlabeled datasets in an end-to-end way. We employ a siamese network terminated by a softmax operation to produce twin class distributions for two augmented views of the same image. Without supervision, we enforce the class distributions of the different augmentations to be consistent. However, simply minimizing the divergence between augmentations yields collapsed solutions, i.e., the network outputs the same class distribution for all images, preserving little information about the inputs. To solve this problem, we propose to maximize the mutual information between the input image and the output class predictions. Specifically, we minimize the entropy of the distribution for each sample to make the class predictions assertive, and maximize the entropy of the mean distribution to make the predictions of different samples diverse. In this way, Twist naturally avoids collapsed solutions without specific designs such as an asymmetric network, a stop-gradient operation, or a momentum encoder. As a result, Twist outperforms previous state-of-the-art methods on a wide range of tasks. On semi-supervised classification, Twist achieves 61.2% top-1 accuracy with 1% of ImageNet labels using a ResNet-50 backbone, surpassing the previous best result by 6.2%. Code and pre-trained models are available at https://github.com/bytedance/TWIST.
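
To make the objective concrete, below is a minimal PyTorch sketch of a TWIST-style loss assembled from the three ingredients the abstract describes: consistency between the twin distributions, per-sample sharpness, and batch-level diversity. The function name twist_loss, the EPS constant, and the loss weights are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

EPS = 1e-8  # numerical floor so log() stays finite (assumed value)

def twist_loss(logits1, logits2, kl_w=1.0, sharp_w=1.0, div_w=1.0):
    """Sketch of a TWIST-style objective for two augmented views.

    logits1, logits2: (batch, num_classes) outputs of the shared
    (siamese) network for two augmentations of the same images.
    The weights kl_w, sharp_w, div_w are hypothetical hyperparameters.
    """
    p1 = F.softmax(logits1, dim=1)
    p2 = F.softmax(logits2, dim=1)

    # 1) Consistency: symmetric KL divergence between twin distributions.
    kl = 0.5 * ((p1 * (torch.log(p1 + EPS) - torch.log(p2 + EPS))).sum(dim=1)
                + (p2 * (torch.log(p2 + EPS) - torch.log(p1 + EPS))).sum(dim=1)).mean()

    # 2) Sharpness: minimize the entropy of each sample's prediction,
    #    pushing every image toward an assertive class assignment.
    sharpness = -0.5 * ((p1 * torch.log(p1 + EPS)).sum(dim=1).mean()
                        + (p2 * torch.log(p2 + EPS)).sum(dim=1).mean())

    # 3) Diversity: maximize the entropy of the mean prediction over the
    #    batch, so different samples spread across different classes.
    mean_p = 0.5 * (p1.mean(dim=0) + p2.mean(dim=0))
    diversity = -(mean_p * torch.log(mean_p + EPS)).sum()

    # Minimizing per-sample entropy while maximizing mean entropy
    # maximizes the mutual information between inputs and predictions.
    return kl_w * kl + sharp_w * sharpness - div_w * diversity

if __name__ == "__main__":
    # Toy check with random logits standing in for network outputs.
    a = torch.randn(8, 16)
    b = torch.randn(8, 16)
    print(twist_loss(a, b).item())
```

Because the diversity term is subtracted, a trivially collapsed solution (all samples mapped to one class) scores poorly: it minimizes the consistency and sharpness terms but also drives the mean-distribution entropy to zero, which this objective penalizes.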
Keywords
learning, class, self-supervised