Semi-supervised Learning

Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. It falls between unsupervised learning (no labeled training data) and supervised learning (only labeled training data).
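One simple way to exploit both kinds of data is self-training: fit a classifier on the labeled points, pseudo-label the unlabeled pool, and refit on everything. A minimal sketch, using a toy 1-D nearest-centroid classifier and made-up data (both are illustrative choices, not a method from any of the papers below):

```python
# Minimal self-training sketch: a nearest-centroid classifier is fit on a
# few labeled points, then pseudo-labels the unlabeled pool and is refit.
# Classifier and data are toy choices for illustration.

def centroids(points, labels):
    """Mean of the points belonging to each label."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(cents, x):
    """Label of the nearest centroid."""
    return min(cents, key=lambda y: abs(x - cents[y]))

# Small labeled set, larger unlabeled set (1-D features for brevity).
labeled_x, labeled_y = [0.0, 10.0], [0, 1]
unlabeled_x = [0.5, 1.2, 8.7, 9.3, 9.9, 1.1]

cents = centroids(labeled_x, labeled_y)              # supervised step
pseudo_y = [predict(cents, x) for x in unlabeled_x]  # pseudo-label step
cents = centroids(labeled_x + unlabeled_x,           # refit on both
                  labeled_y + pseudo_y)
```

The refit centroids now reflect the shape of the unlabeled data, which is the basic payoff semi-supervised methods aim for.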
The results of our experiments on three benchmark datasets (MNIST, Street View House Numbers, and CIFAR-10) indicate that virtual adversarial training is an effective method for both supervised and semi-supervised learning.
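Virtual adversarial training finds, for each input, the small perturbation that most changes the model's predictive distribution, then penalizes that change. A rough sketch of computing the perturbation, using a toy logistic model in place of a network and a crude finite-difference gradient in place of the paper's backprop-based power iteration (all names and constants here are illustrative assumptions):

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def model(w, x):
    """Toy logistic model standing in for a trained network."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def kl_bernoulli(p, q, eps=1e-12):
    """KL divergence between two Bernoulli distributions."""
    return (p * math.log((p + eps) / (q + eps))
            + (1 - p) * math.log((1 - p + eps) / (1 - q + eps)))

def vat_perturbation(w, x, radius=0.5, xi=1e-2, h=1e-4):
    """One power-iteration step: estimate the direction r (||r|| = radius)
    that most increases KL(model(x) || model(x + r))."""
    p = model(w, x)
    rng = random.Random(0)                       # fixed seed for the sketch
    d = [rng.gauss(0, 1) for _ in x]             # random start direction
    norm = math.sqrt(sum(di * di for di in d))
    d = [xi * di / norm for di in d]             # small offset (KL grad at 0 is 0)
    g = []
    for i in range(len(x)):                      # finite-difference gradient
        xp = [xj + (h if j == i else 0) + dj
              for j, (xj, dj) in enumerate(zip(x, d))]
        xm = [xj - (h if j == i else 0) + dj
              for j, (xj, dj) in enumerate(zip(x, d))]
        g.append((kl_bernoulli(p, model(w, xp))
                  - kl_bernoulli(p, model(w, xm))) / (2 * h))
    gnorm = math.sqrt(sum(gi * gi for gi in g)) or 1.0
    return [radius * gi / gnorm for gi in g]

w, x = [2.0, -1.0], [0.3, 0.8]
r = vat_perturbation(w, x)
# The VAT regularizer then penalizes KL(model(x) || model(x + r)),
# which needs no label -- hence its use on unlabeled data.
```

Because the regularizer depends only on the model's own outputs, it applies equally to labeled and unlabeled samples, which is what makes the method usable in both settings.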
In addition, we developed an architecture that improves landmark estimation using auxiliary attributes such as class labels, by backpropagating errors through the landmark localization components of the model.
In this paper we have designed a new supervised learning algorithm that improves classifier performance on intrusion detection datasets by investigating a divide-and-conquer strategy in which unlabeled samples, together with their predicted labels, are categorized according to the magni...
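The excerpt is truncated, so the exact categorization criterion is unknown; one common instantiation of such a divide-and-conquer strategy splits pseudo-labeled samples by the magnitude of their prediction confidence, keeping only the confident ones for retraining. A minimal sketch under that assumption (threshold and data are illustrative):

```python
# Sketch: split pseudo-labeled samples by prediction-confidence magnitude
# (an assumed criterion -- the source sentence is truncated). Confident
# samples are kept for retraining; the rest are set aside.

def split_by_confidence(samples, threshold=0.9):
    """samples: list of (features, predicted_label, confidence) tuples."""
    confident, uncertain = [], []
    for s in samples:
        (confident if s[2] >= threshold else uncertain).append(s)
    return confident, uncertain

pseudo = [([0.1], 0, 0.97), ([4.2], 1, 0.55), ([3.9], 1, 0.92)]
keep, hold = split_by_confidence(pseudo)
```

The held-out, low-confidence samples can be revisited in later rounds once the retrained classifier becomes more certain about them.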
Our training still operates on a single network, but because of dropout regularization, the predictions made on different epochs correspond to an ensemble prediction of a large number of individual sub-networks.
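The per-epoch predictions can be aggregated into an explicit ensemble target with an exponential moving average, as in temporal ensembling. A minimal sketch, where the smoothing factor and the fake per-epoch softmax outputs are illustrative:

```python
# Sketch of temporal ensembling: per-sample predictions from successive
# epochs (each effectively a different dropout sub-network) accumulate
# into an exponential moving average that serves as an ensemble target.

def ema_update(Z, z, alpha=0.6):
    """Fold the current epoch's prediction z into the running average Z."""
    return [alpha * Zi + (1 - alpha) * zi for Zi, zi in zip(Z, z)]

def bias_corrected(Z, alpha, t):
    """Correct the startup bias after t updates (Z starts at zero)."""
    return [Zi / (1 - alpha ** t) for Zi in Z]

# Fake softmax outputs for one sample over three epochs.
epoch_preds = [[0.80, 0.20], [0.70, 0.30], [0.90, 0.10]]
Z = [0.0, 0.0]
for t, z in enumerate(epoch_preds, start=1):
    Z = ema_update(Z, z)
    target = bias_corrected(Z, 0.6, t)  # ensemble target for this epoch
```

Because the average spans many dropout configurations, the target is less noisy than any single epoch's prediction, which is what makes it usable as a training signal for unlabeled samples.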
The experiments in this paper were conducted with https://github.com/DoctorTeeth/supergan, which borrows heavily from https://github.com/carpedm20/DCGANtensorflow and contains more details about the experimental setup.
We showed how a simultaneous unsupervised learning task improves convolutional neural networks and multi-layer perceptron networks, reaching the state of the art on various semi-supervised learning tasks.
Various approaches have been tried over the years, but according to the results on the challenging Pascal VOC 2012 segmentation benchmark, the best-performing methods all use some kind of deep convolutional neural network.
We presented an approach that expands a translation model extracted from a sentence-aligned, bilingual corpus using a large amount of unstructured, monolingual data in both the source and target languages, leading to improvements of 1.4 and 1.2 BLEU points over strong baseline...