Self-Supervised Pretraining Improves Self-Supervised Pretraining

2022 IEEE Winter Conference on Applications of Computer Vision (WACV 2022)

Abstract
While self-supervised pretraining has proven beneficial for many computer vision tasks, it requires expensive and lengthy computation, large amounts of data, and is sensitive to data augmentation. Prior work demonstrates that models pretrained on datasets dissimilar to their target data, such as chest X-ray models trained on ImageNet, underperform models trained from scratch. Users who lack the resources to pretrain must use existing models with lower performance. This paper explores Hierarchical PreTraining (HPT), which decreases convergence time and improves accuracy by initializing the pretraining process with an existing pretrained model. Through experimentation on 16 diverse vision datasets, we show HPT converges up to 80x faster, improves accuracy across tasks, and improves the robustness of the self-supervised pretraining process to changes in the image augmentation policy or the amount of pretraining data. Taken together, HPT provides a simple framework for obtaining better pretrained representations with fewer computational resources.
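The core idea of HPT, as described in the abstract, is to begin self-supervised pretraining from an existing pretrained model rather than from random initialization, chaining pretraining stages from general to target data. The following is a minimal sketch of that staged pipeline; the `pretrain` update rule, the toy datasets, and all names here are illustrative assumptions, not the paper's actual training procedure.

```python
import random

def pretrain(weights, dataset, steps):
    """Toy stand-in for a self-supervised pretraining stage:
    nudge each weight toward the dataset mean (assumption, not the
    paper's objective)."""
    target = sum(dataset) / len(dataset)
    for _ in range(steps):
        weights = [w + 0.1 * (target - w) for w in weights]
    return weights

def hpt(base_weights, datasets, steps_per_stage):
    """Hierarchical pretraining sketch: run the same self-supervised
    step over a sequence of datasets (e.g. general -> target domain),
    each stage initialized from the previous stage's weights."""
    weights = base_weights
    for data in datasets:
        weights = pretrain(weights, data, steps_per_stage)
    return weights

random.seed(0)
base = [random.gauss(0, 1) for _ in range(4)]        # stands in for an existing pretrained model
general = [random.gauss(0, 1) for _ in range(100)]   # hypothetical general-domain data
target = [random.gauss(5, 1) for _ in range(100)]    # hypothetical target-domain data
final = hpt(base, [general, target], steps_per_stage=50)
```

Because each stage starts from already-useful weights, later stages begin closer to their solution, which is the intuition behind the reported up-to-80x faster convergence.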
Key words
Transfer, Few-shot, Semi- and Un-supervised Learning, Object Detection/Recognition/Categorization, Remote Sensing, Vision for Aerial/Drone/Underwater/Ground Vehicles