Preserving domain private information via mutual information maximization

Neural Networks (2024)

Abstract
Recent advances in unsupervised domain adaptation have shown that mitigating domain divergence by extracting domain-invariant features can significantly improve a model's generalization to a new data domain. However, current methodologies often fail to retain domain-private information, i.e., the unique information inherent to the unlabeled new domain, which compromises generalization. This paper presents a novel method that uses mutual information to protect this domain-specific information, ensuring that the latent features of the unlabeled data not only remain domain-invariant but also reflect the unique statistics of the unlabeled domain. We show that simultaneously maximizing mutual information and reducing domain divergence effectively preserves domain-private information. We further show that a neural estimator can accurately estimate the mutual information between the unlabeled input space and its latent feature space. Both theoretical analysis and empirical results validate the importance of preserving this unique information of the unlabeled domain for cross-domain generalization. Comparative evaluations show that our method outperforms existing state-of-the-art techniques across multiple benchmark datasets.
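The neural mutual-information estimation mentioned in the abstract is typically built on a Donsker–Varadhan lower bound (as in MINE-style estimators): I(X; Z) ≥ E_P[T(x,z)] − log E_Q[e^{T(x,z)}], where P is the joint distribution and Q the product of marginals. A minimal sketch of this bound is below; the correlated-Gaussian data, the simple quadratic critic T(x,z) = a·x·z, and the grid search over a are illustrative assumptions, not the paper's actual implementation (which trains a neural critic).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
rho = 0.8

# Correlated Gaussian pair (X, Z); true MI is known analytically:
# I(X; Z) = -0.5 * log(1 - rho^2)
x = rng.standard_normal(n)
z = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# Shuffling z breaks the pairing, giving samples from the product of marginals.
z_shuffled = rng.permutation(z)

def dv_bound(a: float) -> float:
    """Donsker-Varadhan lower bound with the (assumed) critic T(x, z) = a*x*z."""
    joint_term = np.mean(a * x * z)                        # E_P[T]
    marginal_term = np.log(np.mean(np.exp(a * x * z_shuffled)))  # log E_Q[e^T]
    return joint_term - marginal_term

# Stand-in for training a neural critic: pick the best scalar a on a grid.
best = max(dv_bound(a) for a in np.linspace(0.05, 0.95, 19))
true_mi = -0.5 * np.log(1 - rho**2)

# The estimate should be positive and fall below the analytical MI,
# since the quadratic critic is weaker than the optimal one.
print(f"DV lower bound: {best:.3f}, true MI: {true_mi:.3f}")
```

In practice the scalar critic is replaced by a neural network trained by gradient ascent on the same bound, which tightens the estimate toward the true mutual information.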
Keywords
Deep learning, Domain adaptation, Computer vision, Information theory