Learning Generalizable Visual Representations via Self-Supervised Information Bottleneck

ICASSP 2024 - IEEE International Conference on Acoustics, Speech and Signal Processing (2024)

Abstract
Numerous approaches have recently emerged in the realm of self-supervised visual representation learning. While these methods have demonstrated empirical success, a theoretical foundation that explains and unifies these diverse techniques remains to be established. In this work, we draw inspiration from the principles underlying brain-based learning and propose a new method named self-supervised information bottleneck. Our method maximizes the mutual information between representations of views derived from the same image, while keeping the mutual information between each view and its representation minimal. This brain-inspired method provides a unified information-theoretic perspective on various self-supervised approaches. The unified framework also enables the model to learn generalizable visual representations for diverse downstream tasks and data distributions, achieving state-of-the-art performance across a wide variety of image and video tasks.
Keywords
Self-Supervised Learning,Visual Representation Learning,Generalizable Representation Learning,Mutual Information,Information Bottleneck
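The objective described in the abstract, maximizing mutual information between representations of two views of the same image while keeping each view's representation compressed, can be sketched with an InfoNCE-style lower bound on the cross-view mutual information plus a compression penalty. This is a minimal illustration under stated assumptions: the function names, the quadratic penalty as a stand-in for minimizing I(view; representation), and the `beta` weight are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE lower bound on I(z1; z2): matched rows are positive pairs."""
    # Project embeddings onto the unit sphere so similarity is cosine.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature          # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the matching pair (the diagonal) as the target.
    return -np.mean(np.diag(log_prob))

def ssl_ib_loss(z1, z2, beta=0.01):
    """Sketch of a self-supervised information-bottleneck objective.

    Minimizing this loss maximizes the InfoNCE bound on I(z1; z2) while
    penalizing representation magnitude, a crude proxy for minimizing
    the mutual information between each view and its representation.
    """
    compression = 0.5 * (np.mean(z1 ** 2) + np.mean(z2 ** 2))
    return info_nce(z1, z2) + beta * compression
```

With aligned view pairs the InfoNCE term is small (the diagonal dominates the similarity matrix), while unrelated pairs drive it toward log N; the `beta` term trades invariance against compression, mirroring the bottleneck trade-off the abstract describes.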