Self-Supervised Learning for Videos: A Survey

arXiv (2023)

Abstract
The remarkable success of deep learning in various domains relies on the availability of large-scale annotated datasets. However, obtaining annotations is expensive and requires great effort, which is especially challenging for videos. Moreover, the use of human-generated annotations leads to models with biased learning, poor domain generalization, and limited robustness. As an alternative, self-supervised learning provides a way to learn representations without annotations and has shown promise in both the image and video domains. In contrast to the image domain, learning video representations is more challenging due to the temporal dimension, which brings in motion and other environmental dynamics. This also provides opportunities for video-exclusive ideas that advance self-supervised learning in the video and multimodal domains. In this survey, we provide a review of existing approaches to self-supervised learning, focusing on the video domain. We summarize these methods into four categories based on their learning objectives: (1) pretext tasks, (2) generative learning, (3) contrastive learning, and (4) cross-modal agreement. We further introduce the commonly used datasets, downstream evaluation tasks, insights into the limitations of existing works, and potential future directions in this area.
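To make the third category concrete, the sketch below shows a minimal InfoNCE-style contrastive objective over paired video-clip embeddings, one common instantiation of contrastive learning for video representations. It is an illustrative example, not the survey's own method; the function name, temperature value, and encoder setup are assumptions for demonstration only.

```python
# Illustrative sketch of a contrastive (InfoNCE-style) objective for video clips.
# Not taken from the survey; names and hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """z_a, z_b: (batch, dim) embeddings of two augmented clips sampled from the same videos.

    Each row i of z_a is a positive pair with row i of z_b; all other rows act as negatives.
    """
    z_a = F.normalize(z_a, dim=1)            # unit-norm embeddings so dot products are cosine similarities
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature     # pairwise similarity matrix, scaled by temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: embeddings would come from any video encoder (e.g., a 3D CNN or video transformer).
z1 = torch.randn(8, 128)
z2 = torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```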
Keywords
Self-supervised learning, deep learning, video understanding, zero-shot learning, representation learning, multimodal learning, visual-language models