Self-Supervised Learning

Self-supervised learning is essentially a form of unsupervised learning: a "pretext" task is set up and pseudo labels are constructed from properties of the data itself to train the network model. The resulting self-supervised model can serve as a pre-trained model for other learning tasks, providing a better initialization. Self-supervised learning can therefore also be regarded as a way of learning general visual representations of images.
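For example, a common pretext task is rotation prediction: each unlabeled image is rotated, and the rotation index serves as the pseudo label. A minimal PyTorch sketch (the encoder, head, and training-step names are illustrative, not from any specific paper below):

```python
import torch
import torch.nn.functional as F

def rotation_pretext_batch(images):
    """Build a pretext batch: rotate each image by 0/90/180/270 degrees.

    The rotation index is the pseudo label, so no human annotation
    is needed. `images` is a (batch, channels, H, W) tensor.
    """
    rotated, pseudo_labels = [], []
    for k in range(4):  # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        pseudo_labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(pseudo_labels)

def pretext_step(encoder, head, images, optimizer):
    """One self-supervised training step on the rotation pretext task."""
    x, y = rotation_pretext_batch(images)
    logits = head(encoder(x))  # the encoder is later reused downstream
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

After pretext training, the head is discarded and the encoder provides the general representation that downstream tasks fine-tune.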
ICLR, (2020)
We have convincing evidence that sentence order prediction is a more consistently useful learning task that leads to better language representations; we hypothesize that there could be more dimensions not yet captured by the current self-supervised training losses that could crea...
Cited by 619 | Views 430
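As a rough illustration of how sentence order prediction derives pseudo labels from raw text (a minimal sketch, not the paper's implementation):

```python
import random

def sop_examples(sentences):
    """Create sentence-order-prediction pairs from consecutive sentences.

    Label 1: the pair is in its original order; label 0: the two
    sentences are swapped. The labels come for free from the text itself.
    """
    examples = []
    for a, b in zip(sentences, sentences[1:]):
        if random.random() < 0.5:
            examples.append(((a, b), 1))  # original order
        else:
            examples.append(((b, a), 0))  # swapped order
    return examples
```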
ICLR, (2020)
We evaluate models on two benchmarks: TIMIT is a 5h dataset with phoneme labels, and Wall Street Journal is an 81h dataset for speech recognition
Cited by 51 | Views 165
Ravanelli Mirco, Zhong Jianyuan, Pascual Santiago, Swietojanski Pawel, Monteiro Joao, Trmal Jan, Bengio Yoshua
ICASSP, pp.6989-6993, (2020)
The proposed problem-agnostic speech encoder (PASE+) architecture is based on an online speech distortion module, a convolutional encoder coupled with a quasi-recurrent neural network layer, and a set of workers solving self-supervised problems
Cited by 26 | Views 167
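A rough sketch of the multi-worker idea above: several small heads solve different self-supervised regression tasks on top of one shared encoder (module names and tasks here are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class MultiWorkerSSL(nn.Module):
    """Shared speech encoder with several self-supervised 'workers'."""

    def __init__(self, encoder, feat_dim, worker_dims):
        super().__init__()
        self.encoder = encoder
        # one linear worker per regression target (e.g. waveform, MFCCs)
        self.workers = nn.ModuleList(
            nn.Linear(feat_dim, d) for d in worker_dims
        )

    def forward(self, distorted_wave, targets):
        h = self.encoder(distorted_wave)  # (batch, time, feat_dim)
        losses = [
            nn.functional.mse_loss(w(h), t)
            for w, t in zip(self.workers, targets)  # each worker fits its target
        ]
        return sum(losses)  # total self-supervised loss
```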
Similar approaches are common in NLP; we demonstrate that this approach can be a surprisingly strong baseline for semi-supervised learning in computer vision, outperforming the state of the art by a large margin
Cited by 25 | Views 364
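The baseline referred to here is pseudo-labeling / self-training; a minimal sketch, assuming a loader that yields unlabeled image batches and a hypothetical confidence threshold:

```python
import torch

@torch.no_grad()
def make_pseudo_labels(model, unlabeled_loader, threshold=0.95):
    """Label unlabeled images with the model's own confident predictions."""
    model.eval()
    images, labels = [], []
    for x in unlabeled_loader:
        probs = torch.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        keep = conf >= threshold  # keep only confident predictions
        images.append(x[keep])
        labels.append(pred[keep])
    return torch.cat(images), torch.cat(labels)

# The pseudo-labeled pairs are then mixed with the small labeled set
# and the model is retrained, usually for several such rounds.
```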
NeurIPS 2020, (2020)
We presented wav2vec 2.0, a framework for self-supervised learning of speech representations which masks latent representations of the raw waveform and solves a contrastive task over quantized speech representations
Cited by 19 | Views 368
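A toy version of the contrastive objective described above: for each masked time step, the model must pick the true quantized latent among sampled distractors. This is a sketch of the general InfoNCE form, not the released implementation:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, quantized, distractors, temperature=0.1):
    """InfoNCE-style loss over masked time steps.

    context:     (M, dim)    transformer outputs at the M masked steps
    quantized:   (M, dim)    true quantized latents (positives)
    distractors: (M, K, dim) sampled negative quantized latents
    """
    pos = F.cosine_similarity(context, quantized, dim=-1)                 # (M,)
    neg = F.cosine_similarity(context.unsqueeze(1), distractors, dim=-1)  # (M, K)
    logits = torch.cat([pos.unsqueeze(1), neg], dim=1) / temperature
    # index 0 of each row is the positive
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)
```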
AAAI, pp.12362-12369, (2020)
We presented a Tracklet Self-Supervised Learning method for unsupervised image and video person re-id
Cited by 9 | Views 152
We presented BADGR, an end-to-end learning-based mobile robot navigation system that can be trained entirely with self-supervised, off-policy data gathered in real-world environments, without any simulation or human supervision, and can improve as it gathers more data
Cited by 8 | Views 215
NeurIPS 2020, (2020)
We demonstrate that these self-supervised representations learn occlusion invariance by employing an aggressive cropping strategy which heavily relies on an object-centric dataset bias
Cited by 6 | Views 185
We first introduce various basic self-supervised learning (SSL) pretext tasks for graphs and present a detailed empirical study to understand when and why SSL works for graph neural networks and which strategies work best with GNNs
Cited by 5 | Views 236
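One of the simplest graph pretext tasks is attribute masking: hide some node features and train the GNN to reconstruct them. A generic sketch, where `gnn` stands for any PyTorch graph encoder that returns per-node feature reconstructions (not a specific model from the paper):

```python
import torch
import torch.nn.functional as F

def attribute_masking_loss(gnn, x, edge_index, mask_rate=0.15):
    """Self-supervised loss: reconstruct masked node features.

    x:          (num_nodes, feat_dim) node features
    edge_index: graph connectivity in the usual 2 x num_edges format
    """
    mask = torch.rand(x.size(0), device=x.device) < mask_rate  # nodes to mask
    x_masked = x.clone()
    x_masked[mask] = 0.0                     # hide their features
    recon = gnn(x_masked, edge_index)        # (num_nodes, feat_dim) output
    return F.mse_loss(recon[mask], x[mask])  # loss only on masked nodes
```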
There exist several comprehensive reviews related to Pre-trained Language Models, Generative Adversarial Networks, Autoencoders, and contrastive learning for visual representation
Cited by 5 | Views 4110
Shaolei Wang, Wanxiang Che, Qi Liu, Pengda Qin, Ting Liu, William Yang Wang
National Conference on Artificial Intelligence (AAAI), (2020)
Experimental results on the commonly used English Switchboard test set show that our approach can achieve competitive performance compared to the previous systems by using less than 1% of the training data
Cited by 5 | Views 198
Zhao Nanxuan, Wu Zhirong, Lau Rynson W. H., Lin Stephen
We identified a strong error pattern among self-supervised models in their failure to localize foreground objects
Cited by 3 | Views 149
ICCV, pp.3827-3837, (2019)
We showed how together they give a simple and efficient model for depth estimation, which can be trained with monocular video data, stereo data, or mixed monocular and stereo data
Cited by 252 | Views 220
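The core self-supervised signal in such depth models is photometric reconstruction: a neighboring frame is warped into the target view using the predicted depth and camera motion, and the loss compares the warp to the real image. A much-simplified sketch, where `warp_fn` is an assumed helper (typically built on grid_sample):

```python
def photometric_loss(depth, pose, target, source, warp_fn):
    """Self-supervised depth loss from monocular video.

    depth:   predicted depth map for the target frame
    pose:    predicted relative camera motion target -> source
    warp_fn: projects `source` into the target view given depth and pose
             (hypothetical helper, usually implemented with grid_sample)
    """
    reconstructed = warp_fn(source, depth, pose)
    # Real systems blend L1 with SSIM, take a per-pixel minimum over
    # several source frames, and add a smoothness term; plain L1
    # conveys the idea.
    return (reconstructed - target).abs().mean()
```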
CVPR, (2019)
As part of our study, we drastically boost the performance of previously proposed techniques and outperform previously published state-of-the-art results by a large margin
Cited by 188 | Views 129
arXiv: Computer Vision and Pattern Recognition, (2019)
This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos
Cited by 148 | Views 144
Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov,Lucas Beyer
International Conference on Computer Vision, (2019): 1476-1485
We further showed that S4L methods are complementary to existing semi-supervision techniques, and that Mix Of All Models, our proposed combination of those, leads to state-of-the-art performance
Cited by 114 | Views 149
Fangchang Ma, Guilherme Venturelli Cavalheiro,Sertac Karaman
International Conference on Robotics and Automation (ICRA), (2019)
This framework requires only sequences of RGB and sparse depth images, and outperforms a number of existing solutions trained with semi-dense annotations
Cited by 105 | Views 161
CVPR, (2019): 6629-6638
In this paper we present two novel approaches, Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning, which combine the strength of reinforcement learning and self-supervised imitation learning for the vision-language navigation task
Cited by 104 | Views 224
Dan Hendrycks, Mantas Mazeika, Saurav Kadavath,Dawn Song
Advances in Neural Information Processing Systems 32 (NeurIPS 2019): 15637-15648
We found large improvements in robustness to adversarial examples, label corruption, and common input corruptions
Cited by 92 | Views 82
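The robustness gains reported above come from adding an auxiliary self-supervised term to the usual supervised loss. Schematically (the weight, `model.features`/`model.classifier` split, and `rot_head` classifier are assumptions; `rotation_pretext_batch` is the helper sketched earlier):

```python
import torch.nn.functional as F

def total_loss(model, rot_head, x, y, ssl_weight=0.5):
    """Supervised loss plus an auxiliary rotation-prediction loss.

    rot_head is a small classifier over the shared features; both it
    and the features/classifier split are illustrative names.
    """
    sup = F.cross_entropy(model.classifier(model.features(x)), y)
    xr, yr = rotation_pretext_batch(x)  # pseudo-labeled rotated copies
    ssl = F.cross_entropy(rot_head(model.features(xr)), yr)
    return sup + ssl_weight * ssl
```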
Keywords
Supervised Learning, Self-supervised Learning, Computer Vision, Semi-supervised Learning, Learning (Artificial Intelligence), Pose Estimation, Deep Learning, Face Recognition, Feature Extraction, Image Reconstruction
Authors
Sergey Levine (7 papers)
Abhinav Gupta (5 papers)
Xiaohua Zhai (3 papers)
William Yang Wang (3 papers)
Andrew Zisserman (3 papers)
Sebastian Thrun (3 papers)
Dacheng Tao (3 papers)
Michael Auli (2 papers)
Neil Houlsby (2 papers)
Austin Reiter (2 papers)