Learning explicitly transferable representations for domain adaptation.

Neural Networks (2020)

Abstract
Domain adaptation tackles the problem where the training source domain and the test target domain have distinct data distributions, and thereby improves the generalization ability of deep models. A popular mechanism for domain adaptation is to learn a new feature representation that is supposed to be domain-invariant, so that classifiers trained on the source domain can be directly applied to the target domain. However, recent work reveals that learning new feature representations may deteriorate the adaptability of the original features and increase the expected error bound on the target domain. To address this, we propose to adapt classifiers rather than features. Specifically, we fill in the distribution gaps between domains with additional transferable representations that are explicitly learned from the original features, while keeping the original features unchanged. In addition, we argue that transferable representations should be translatable from one domain to the other through appropriate mappings. At the same time, we introduce conditional entropy to mitigate semantic confusion during mapping. Experiments on both standard and large-scale datasets verify that our method achieves new state-of-the-art results on unsupervised domain adaptation.
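The abstract does not include an implementation; the following PyTorch sketch is only an illustrative assumption of the two ideas it names, namely augmenting frozen original features with an additional learned transferable representation (so the classifier adapts rather than the features) and penalizing conditional entropy on target predictions to reduce semantic confusion. All names here (TransferableAugmenter, conditional_entropy, extra_dim) are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransferableAugmenter(nn.Module):
    """Sketch: keep the original (frozen) features unchanged and learn an
    additional transferable representation that is concatenated to them,
    so adaptation happens on the classifier side."""
    def __init__(self, feat_dim: int, extra_dim: int, num_classes: int):
        super().__init__()
        # Learned additional representation, computed from the frozen features.
        self.extra = nn.Sequential(
            nn.Linear(feat_dim, extra_dim),
            nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim + extra_dim, num_classes)

    def forward(self, frozen_feats: torch.Tensor) -> torch.Tensor:
        # frozen_feats come from a pretrained, fixed feature extractor.
        augmented = torch.cat([frozen_feats, self.extra(frozen_feats)], dim=1)
        return self.classifier(augmented)

def conditional_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean prediction entropy on (unlabeled) target samples; minimizing it
    discourages semantically ambiguous target assignments."""
    p = F.softmax(logits, dim=1)
    return -(p * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Usage sketch: combine a supervised source loss with the target entropy term.
# model = TransferableAugmenter(feat_dim=2048, extra_dim=256, num_classes=31)
# loss = F.cross_entropy(model(src_feats), src_labels) \
#        + lambda_ent * conditional_entropy(model(tgt_feats))
```

The design choice illustrated is that the original feature extractor stays frozen, so its adaptability cannot be harmed; only the small additional branch and the classifier are trained.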
Keywords
Domain adaptation, Transfer learning, Transferable representation