Generatively Inferential Co-Training for Unsupervised Domain Adaptation

2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)

Cited by 33 | Views 220
Abstract
Deep Neural Networks (DNNs) have greatly boosted performance on a wide range of computer vision and machine learning tasks. Despite such achievements, DNNs are hungry for enormous amounts of high-quality (HQ) training data, which are expensive and time-consuming to collect. To tackle this challenge, domain adaptation (DA) can help learn a model by leveraging the knowledge of low-quality (LQ) data (i.e., the source domain) while generalizing well on label-scarce HQ data (i.e., the target domain). However, existing methods have two problems. First, they mainly focus on high-level feature alignment while neglecting low-level mismatch. Second, a class-conditional distribution shift remains even when features are well aligned. To solve these problems, we propose a novel Generatively Inferential Co-Training (GICT) framework for Unsupervised Domain Adaptation (UDA). GICT is based on cross-domain feature generation and a specifically designed co-training strategy. Feature generation adapts the representation at the low level by translating images across domains. Co-training is employed to bridge the conditional distribution shift by assigning high-confidence pseudo labels on the target domain, inferred from two distinct classifiers. Extensive experiments on multiple tasks, including image classification and semantic segmentation, demonstrate the effectiveness of the GICT approach.
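The abstract describes assigning high-confidence pseudo labels on the target domain from two distinct classifiers. The paper's code is not given here; the following is a minimal sketch of that agreement-and-confidence filtering idea, where the function name `assign_pseudo_labels` and the threshold value are assumptions, not details from the paper.

```python
import numpy as np

def assign_pseudo_labels(probs_a, probs_b, threshold=0.9):
    """Keep a target sample only when both classifiers predict the same class
    and each does so with confidence above `threshold` (value assumed)."""
    preds_a = probs_a.argmax(axis=1)
    preds_b = probs_b.argmax(axis=1)
    conf_a = probs_a.max(axis=1)
    conf_b = probs_b.max(axis=1)
    keep = (preds_a == preds_b) & (conf_a > threshold) & (conf_b > threshold)
    return np.where(keep)[0], preds_a[keep]

# Toy usage: softmax outputs of two distinct classifiers on 4 unlabeled target samples.
probs_a = np.array([[0.95, 0.05], [0.60, 0.40], [0.10, 0.90], [0.97, 0.03]])
probs_b = np.array([[0.92, 0.08], [0.55, 0.45], [0.93, 0.07], [0.96, 0.04]])
idx, labels = assign_pseudo_labels(probs_a, probs_b)
print(idx, labels)  # only samples where both classifiers agree and are confident
```

In this sketch, the pseudo-labeled samples returned would then be added to the training set for the next co-training round; the actual selection criterion used by GICT may differ.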
Keywords
Domain Adaptation, Co-training, Inferential