LTF: A Label Transformation Framework for Correcting Target Shift
ICML 2020

Abstract
Distribution shift is a major obstacle to the deployment of current deep learning models on real-world problems. Let Y be the target (label) and X the predictors (features). We focus on one type of distribution shift, target shift, where the marginal distribution of the target variable PY changes but the conditional distribution PX|Y does not. Existing methods estimate the density ratio between the source- and target-domain label distributions by density matching. However, these methods are either computationally infeasible for large-scale data or restricted to shift correction for discrete labels. In this paper, we propose an end-to-end Label Transformation Framework (LTF) for correcting target shift, which implicitly models the shift of PY and the conditional distribution PX|Y using neural networks. Thanks to the flexibility of deep networks, our framework can handle continuous, discrete, and even multidimensional labels in a unified way and is scalable to large data. Moreover, for high-dimensional X, such as images, we find that the redundant information in X severely degrades the estimation accuracy. To remedy this issue, we propose to match the distribution implied by our generative model and the target-domain distribution in a low-dimensional feature space that discards information irrelevant to Y. Both theoretical and empirical studies demonstrate the superiority of our method over previous approaches.

UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, NSW 2008, Australia; School of Mathematics and Statistics, The University of Melbourne; Department of Philosophy, Carnegie Mellon University. Correspondence to: Jiaxian Guo .

Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).
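To make the density-ratio idea concrete: under target shift, PX|Y is shared across domains, so reweighting each source example by w(y) = P_T(y) / P_S(y) matches the target joint distribution. The sketch below illustrates this classical correction for discrete labels (the setting the abstract says prior methods are restricted to, and which LTF generalizes); the function name and the assumption that the target label priors are known are mine, not from the paper.

```python
import numpy as np

def class_weights(y_source, target_priors):
    """Per-class importance weights w(y) = P_T(y) / P_S(y).

    Under target shift PX|Y is unchanged across domains, so weighting
    source examples by w(y) in the training loss corrects for the
    shifted label marginal. Assumes target_priors maps each class to
    its (estimated or known) probability in the target domain.
    """
    classes, counts = np.unique(y_source, return_counts=True)
    source_priors = counts / counts.sum()
    return {c: target_priors[c] / p for c, p in zip(classes, source_priors)}

# Toy example: source labels are balanced, target is skewed toward class 1.
y_src = np.array([0, 0, 1, 1])                 # P_S(0) = P_S(1) = 0.5
w = class_weights(y_src, {0: 0.2, 1: 0.8})     # w[0] = 0.4, w[1] = 1.6
```

In practice the target priors are unknown and must themselves be estimated (e.g., by the density-matching methods the abstract critiques), which is exactly where their scalability and discrete-label limitations arise.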