Out-Of-Domain Unlabeled Data Improves Generalization

Amir Hossein Saberi, Amir Najafi, Alireza Heidari, Mohammad Hosein Movasaghinia, Abolfazl Motahari, Babak H. Khalaj

arXiv (Cornell University), 2023

Abstract
We propose a novel framework for incorporating unlabeled data into semi-supervised classification problems, covering scenarios that minimize either i) adversarially robust or ii) standard (non-robust) loss functions. Notably, we allow the unlabeled samples to deviate slightly (in the total variation sense) from the in-domain distribution. The core idea behind our framework is to combine Distributionally Robust Optimization (DRO) with self-supervised training. As a result, we can also leverage efficient polynomial-time algorithms for the training stage. From a theoretical standpoint, we apply our framework to the problem of classifying a mixture of two Gaussians in ℝ^d, where, in addition to the m independent and labeled samples from the true distribution, a set of n (usually with n ≫ m) out-of-domain and unlabeled samples is given as well. Using only the labeled data, the generalization error is known to be bounded by a term proportional to (d/m)^{1/2}. However, applying our method to both isotropic and non-isotropic Gaussian mixture models, one can derive a new set of analytically explicit and non-asymptotic bounds showing a substantial improvement in generalization error compared to ERM. Our results underscore two significant insights: 1) out-of-domain samples, even when unlabeled, can be harnessed to narrow the generalization gap, provided that the true data distribution adheres to a form of the “cluster assumption”, and 2) the semi-supervised learning paradigm can be regarded as a special case of our framework when there are no distributional shifts. We validate our claims through experiments conducted on a variety of synthetic and real-world datasets.
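To make the theoretical setting concrete, below is a minimal sketch of the isotropic two-Gaussian scenario the abstract describes: m labeled in-domain samples, n ≫ m unlabeled samples from a slightly shifted mixture, and a comparison between a labeled-only ERM-style estimate and a pseudo-label refinement. The self-training step here is a simple stand-in for the paper's DRO-based procedure, and all numerical values (d, m, n, the shift magnitude) are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (not the authors' implementation) of the abstract's setting:
# classify a two-component Gaussian mixture in R^d from m labeled in-domain
# samples plus n >> m unlabeled samples drawn from a slightly shifted
# (out-of-domain) mixture. Pseudo-labeling below is a simple self-training
# heuristic standing in for the paper's DRO-based procedure; d, m, n, and
# the shift size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 50, 40, 4000                      # dimension, labeled, unlabeled
mu = rng.normal(size=d)
mu /= np.linalg.norm(mu)                    # true mean direction, unit norm
shift = 0.05 * rng.normal(size=d)           # small distributional shift

def sample(k, center):
    """Draw k points from the mixture 0.5*N(center, I) + 0.5*N(-center, I)."""
    y = rng.choice([-1, 1], size=k)
    X = y[:, None] * center[None, :] + rng.normal(size=(k, d))
    return X, y

X_lab, y_lab = sample(m, mu)                # labeled, in-domain
X_unl, _ = sample(n, mu + shift)            # unlabeled, out-of-domain

# ERM-style baseline: estimate the mean direction from labeled data only.
w_erm = (y_lab[:, None] * X_lab).mean(axis=0)

# Self-training refinement: pseudo-label the large unlabeled pool with the
# ERM classifier, then re-estimate the direction from the pseudo-labeled set.
pseudo = np.sign(X_unl @ w_erm)
w_st = (pseudo[:, None] * X_unl).mean(axis=0)

# Compare the two linear classifiers sign(<w, x>) on fresh in-domain data.
X_te, y_te = sample(100_000, mu)
for name, w in [("ERM (m labeled)", w_erm), ("ERM + self-training", w_st)]:
    err = np.mean(np.sign(X_te @ w) != y_te)
    print(f"{name}: test error = {err:.3f}")
```

Under the cluster assumption (well-separated mixture components), the re-estimated direction from the large pseudo-labeled pool typically aligns better with the true mean than the labeled-only estimate, mirroring the abstract's claim that out-of-domain unlabeled data can narrow the generalization gap.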
Keywords
generalization, data, out-of-domain