Proxy Methods for Domain Adaptation
International Conference on Artificial Intelligence and Statistics (2024)
Abstract
We study the problem of domain adaptation under distribution shift, where the
shift is due to a change in the distribution of an unobserved, latent variable
that confounds both the covariates and the labels. In this setting, neither the
covariate shift nor the label shift assumptions apply. Our approach to
adaptation employs proximal causal learning, a technique for estimating causal
effects in settings where proxies of unobserved confounders are available. We
demonstrate that proxy variables allow for adaptation to distribution shift
without explicitly recovering or modeling latent variables. We consider two
settings: (i) Concept Bottleneck: an additional "concept" variable is
observed that mediates the relationship between the covariates and labels; (ii)
Multi-domain: training data from multiple source domains is available, where
each source domain exhibits a different distribution over the latent
confounder. We develop a two-stage kernel estimation approach to adapt to
complex distribution shifts in both settings. In our experiments, we show that
our approach outperforms other methods, notably those which explicitly recover
the latent confounder.