On the value of label and semantic information in domain generalization

Neural Networks (2023)

Abstract
In this work, we tackle the domain generalization (DG) problem, which aims to learn a universal predictor on several source domains and deploy it on an unseen target domain. Many existing DG approaches, largely motivated by domain adaptation techniques, align the marginal feature distributions but ignore the conditional relations and labeling information in the source domains, which are critical for successful knowledge transfer. Although some recent advances have started to exploit conditional semantic distributions, theoretical justification is still missing. To this end, we investigate the theoretical guarantees for successful generalization by focusing on how to control the target domain error. Our results reveal that, to control the target risk, one should jointly control the source errors weighted according to label information and align the semantic conditional distributions across the source domains. This theoretical analysis leads to an efficient algorithm that controls the label distributions while matching the semantic conditional distributions. To verify the effectiveness of our method, we evaluate it against recent baseline algorithms on several benchmarks. We also conduct experiments under label distribution shift to demonstrate the necessity of leveraging label and semantic information. Empirical results show that the proposed method outperforms most baseline methods and achieves state-of-the-art performance.
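The abstract describes two ingredients: source risks reweighted by label information, and alignment of class-conditional (semantic) feature distributions across source domains. The sketch below (PyTorch) is a hypothetical illustration of how such an objective could be assembled, not the authors' released code; the weighting scheme, the per-class mean-matching distance, and all names (label_weighted_risk, conditional_alignment, align_coef) are illustrative assumptions.

# Hypothetical sketch of a label-weighted, conditionally aligned DG objective.
# Assumptions: per-class weights are supplied externally, and semantic alignment
# is approximated by matching per-class feature means between source domains.
import torch
import torch.nn.functional as F


def label_weighted_risk(logits, labels, class_weights):
    # Cross-entropy with per-class reweighting (assumed weighting scheme,
    # e.g. derived from label-distribution differences across sources).
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (class_weights[labels] * per_sample).mean()


def conditional_alignment(feats_a, labels_a, feats_b, labels_b, num_classes):
    # Align class-conditional feature distributions between two source domains
    # by matching per-class feature means (a simple proxy for semantic alignment).
    loss = feats_a.new_zeros(())
    matched = 0
    for c in range(num_classes):
        mask_a, mask_b = labels_a == c, labels_b == c
        if mask_a.any() and mask_b.any():
            mu_a = feats_a[mask_a].mean(dim=0)
            mu_b = feats_b[mask_b].mean(dim=0)
            loss = loss + (mu_a - mu_b).pow(2).sum()
            matched += 1
    return loss / max(matched, 1)


def dg_objective(encoder, classifier, domain_batches, class_weights,
                 num_classes, align_coef=0.1):
    # Total objective: mean of label-weighted source errors plus a pairwise
    # conditional-alignment penalty over all source-domain pairs.
    feats, risks = [], []
    for x, y in domain_batches:  # one (x, y) minibatch per source domain
        z = encoder(x)
        risks.append(label_weighted_risk(classifier(z), y, class_weights))
        feats.append((z, y))
    risk = torch.stack(risks).mean()
    align = torch.zeros((), device=risk.device)
    pairs = 0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            align = align + conditional_alignment(feats[i][0], feats[i][1],
                                                  feats[j][0], feats[j][1],
                                                  num_classes)
            pairs += 1
    return risk + align_coef * align / max(pairs, 1)

In this reading, align_coef trades off empirical source risk against cross-domain semantic alignment; the paper's actual estimator for the label weights and its choice of alignment distance may differ.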