Semi-Supervised Generalized Source-Free Domain Adaptation (SSG-SFDA).

IJCNN (2023)

Abstract
Continual learning aims to learn a sequence of new tasks while maintaining performance on previous tasks. Source-free domain adaptation (SFDA), which adapts a pre-trained source model to a target domain, helps protect the privacy of the source domain data. Generalized SFDA (G-SFDA) combines continual learning and SFDA to achieve outstanding performance on both the source and target domains. This paper proposes semi-supervised G-SFDA (SSG-SFDA) for domain incremental learning, where a pre-trained source model (instead of the source data), a few labeled target data, and plenty of unlabeled target data are available. The goal is to achieve good performance on all domains. To cope with the domain ID being unknown at test time, SSG-SFDA trains a conditional variational auto-encoder (CVAE) for each domain to learn its feature distribution, and a domain discriminator on virtual shallow features generated by the CVAEs to estimate the domain ID. To cope with catastrophic forgetting, SSG-SFDA replaces the sparse domain attention of G-SFDA with soft domain attention. To cope with insufficient labeled target data, SSG-SFDA uses MixMatch to augment the unlabeled target data and better exploit the few labeled target data. Experiments on three datasets demonstrated the effectiveness of SSG-SFDA.
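The soft domain attention mentioned in the abstract can be illustrated with a minimal sketch: G-SFDA's sparse domain attention multiplies features by a near-binary per-domain mask, while a soft mask weights every channel continuously in (0, 1). The function name, shapes, and sigmoid parameterization below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_domain_attention(features, attention_logits):
    """Apply a soft per-domain attention mask to feature vectors.

    features:         (batch, d) array of backbone features.
    attention_logits: (d,) learnable vector for one domain; sigmoid maps
                      it to a soft mask in (0, 1), relaxing the
                      near-binary sparse mask used in G-SFDA.
    (Hypothetical simplification of the mechanism named in the abstract.)
    """
    mask = 1.0 / (1.0 + np.exp(-attention_logits))  # sigmoid -> (0, 1)
    return features * mask

rng = np.random.default_rng(0)
feats = rng.standard_normal((2, 8))   # batch of 2 feature vectors
logits = rng.standard_normal(8)       # one attention vector per domain
out = soft_domain_attention(feats, logits)
```

Because every channel keeps a nonzero weight, gradients still reach all features during adaptation, which is the intuition behind using a soft rather than sparse mask to mitigate catastrophic forgetting.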
Keywords
Continual learning, source-free domain adaptation, semi-supervised learning, transfer learning