Discriminative Manifold Distribution Alignment for Domain Adaptation

IEEE Transactions on Systems, Man, and Cybernetics: Systems (2023)

Abstract
Domain adaptation (DA) aims to accomplish tasks on unlabeled target data by learning and transferring knowledge from related source domains. To learn a discriminative and domain-invariant model, a critical step is to align the source and target data well and thus reduce their distribution divergence. However, existing DA methods mainly align the global feature distributions in the distorted original space, neglecting fine-grained local information and intrinsic geometrical structures. Moreover, some methods rely heavily on pseudo-labels to align features, which may undermine adaptation performance and lead to negative transfer. We propose an efficient discriminative manifold distribution alignment (DMDA) approach, which improves feature transferability by aligning both global and local distributions, and refines a discriminative model by learning geometrical structures in manifold space. In addition, when learning geometrical structures, DMDA is exempt from the uncertainty and error introduced by pseudo-labels of the target domain. DMDA is concise and efficient to implement, since it integrates the learning steps and obtains its solution directly. Extensive experiments on 68 DA tasks from seven benchmarks, together with subsequent analyses, show that DMDA outperforms the compared methods in both classification accuracy and time efficiency, representing a significant advance in the DA field.
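As a point of reference for the global distribution-alignment step mentioned in the abstract, the minimal sketch below computes a linear-kernel maximum mean discrepancy (MMD) between source and target features, a standard divergence measure in the DA literature. The function name linear_mmd and the random toy features are illustrative assumptions; the sketch does not reproduce the paper's actual DMDA objective, nor its manifold or local-alignment terms.

```python
import numpy as np

def linear_mmd(Xs, Xt):
    """Squared MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2.

    A common global distribution-alignment criterion in domain adaptation,
    shown here only to illustrate matching source/target feature
    distributions; it is not the exact DMDA formulation.
    """
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

# Toy usage with random features standing in for source/target domains.
rng = np.random.default_rng(0)
Xs = rng.normal(loc=0.0, scale=1.0, size=(100, 16))  # source features
Xt = rng.normal(loc=0.5, scale=1.0, size=(80, 16))   # shifted target features
print(f"linear-kernel MMD^2 before alignment: {linear_mmd(Xs, Xt):.4f}")

# Centering both domains on their own means is the crudest possible
# "alignment"; real DA methods instead learn a projection that minimizes
# a divergence like this one.
Xs_aligned = Xs - Xs.mean(axis=0)
Xt_aligned = Xt - Xt.mean(axis=0)
print(f"linear-kernel MMD^2 after mean alignment: {linear_mmd(Xs_aligned, Xt_aligned):.4f}")
```

Minimizing such a divergence under a learned transformation, rather than simply recentering the data, is the general idea behind distribution-alignment DA methods; the mean-centering step above only demonstrates how alignment reduces the measured divergence.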
Keywords
Manifolds, Task analysis, Data models, Adaptation models, Training, Uncertainty, Generative adversarial networks, Distribution alignment, domain adaptation (DA), image classification, manifold learning, transfer learning (TL)