On f-Divergence Principled Domain Adaptation: An Improved Framework
CoRR (2024)
Abstract
Unsupervised domain adaptation (UDA) plays a crucial role in addressing
distribution shifts in machine learning. In this work, we improve the
theoretical foundations of UDA proposed by Acuna et al. (2021) by refining
their f-divergence-based discrepancy and additionally introducing a new
measure, f-domain discrepancy (f-DD). By removing the absolute value function
and incorporating a scaling parameter, f-DD yields novel target error and
sample complexity bounds, allowing us to recover previous KL-based results and
to bridge the gap between the algorithms and theory presented in Acuna et al.
(2021). Leveraging a localization technique, we also develop a fast-rate
generalization bound. Empirical results demonstrate the superior performance of
f-DD-based domain learning algorithms over previous works on popular UDA
benchmarks.
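For context, f-DD is built on the variational (Fenchel-conjugate) representation of an f-divergence. For distributions P and Q and the convex conjugate f* of the generator f, this standard representation reads

\[
  D_f(P \,\|\, Q) \;=\; \sup_{g}\; \mathbb{E}_{P}\big[g(X)\big] \;-\; \mathbb{E}_{Q}\big[f^{*}\!\big(g(X)\big)\big].
\]

The display below is only an illustrative sketch inferred from this abstract, not the paper's exact definition: the class \(\mathcal{G}\), the placement of the scaling parameter \(t\), and the choice of which distribution appears in which term are assumptions made here for illustration. Restricting the supremum to a function class, dropping the absolute value used in the discrepancy of Acuna et al. (2021), and introducing a scaling parameter \(t > 0\) would yield a quantity of the form

\[
  \sup_{g \in \mathcal{G}}\; t\,\mathbb{E}_{P}\big[g(X)\big] \;-\; \mathbb{E}_{Q}\big[f^{*}\!\big(t\,g(X)\big)\big], \qquad t > 0.
\]

The precise definition of f-DD, and how it enters the target error and sample complexity bounds, is given in the paper itself.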