Deep Co-Training with Task Decomposition for Semi-Supervised Domain Adaptation

ICCV 2021

Cited by 75 | Viewed 124
Abstract
Semi-supervised domain adaptation (SSDA) aims to adapt models from a labeled source domain to a different but related target domain, from which unlabeled data and a small set of labeled data are provided. In this paper, we propose a new approach for SSDA, which explicitly decomposes the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains. We show that these two sub-tasks yield very different classifiers and thus fit naturally into the well-established co-training framework, in which the two classifiers exchange their highly confident predictions to iteratively "teach each other" so that both classifiers can excel in the target domain. We call our approach Deep Co-Training with Task Decomposition (DeCoTa). DeCoTa requires no adversarial training, making it fairly easy to implement. DeCoTa achieves state-of-the-art results on several SSDA datasets, outperforming the prior art by a notable 4% margin on DomainNet.
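The co-training loop described in the abstract, where two classifiers exchange highly confident pseudo-labels on unlabeled data, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the nearest-centroid "classifiers", the 1-D features, the margin-based confidence score, and the threshold are all illustrative assumptions.

```python
# Toy sketch of a co-training pseudo-label exchange (illustrative, not DeCoTa's
# actual models). Each "classifier" is a 1-D nearest-centroid model; the two
# models iteratively label confident unlabeled points for each other.

def fit_centroids(labeled):
    """Compute the per-class mean of 1-D features from (x, y) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_with_confidence(centroids, x):
    """Return (label, confidence); confidence is the margin between the
    two closest class centroids (a toy heuristic)."""
    dists = {label: abs(x - c) for label, c in centroids.items()}
    label = min(dists, key=dists.get)
    sorted_d = sorted(dists.values())
    margin = sorted_d[1] - sorted_d[0] if len(sorted_d) > 1 else 1.0
    return label, margin

def co_train(labeled_a, labeled_b, unlabeled, rounds=3, threshold=0.5):
    """Each model pseudo-labels its confident points for the *other* model,
    so the two models 'teach each other' over several rounds."""
    for _ in range(rounds):
        ca, cb = fit_centroids(labeled_a), fit_centroids(labeled_b)
        remaining = []
        for x in unlabeled:
            ya, conf_a = predict_with_confidence(ca, x)
            yb, conf_b = predict_with_confidence(cb, x)
            if conf_a >= threshold:
                labeled_b = labeled_b + [(x, ya)]  # A teaches B
            elif conf_b >= threshold:
                labeled_a = labeled_a + [(x, yb)]  # B teaches A
            else:
                remaining.append(x)  # neither model is confident yet
        unlabeled = remaining
    return fit_centroids(labeled_a), fit_centroids(labeled_b)
```

In DeCoTa the two classifiers come from the SSL and UDA sub-tasks, so their different views make the exchanged pseudo-labels complementary; this sketch only captures the exchange mechanism itself.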
Keywords
Co-training, Machine learning, Small set, Computer science, Artificial intelligence, Domain adaptation, Labeled data