Distributionally Invariant Learning: Rationalization and Practical Algorithms

arXiv (2022)

Abstract
The invariance property across environments lies at the heart of invariant learning methods for the Out-of-Distribution (OOD) generalization problem. Although intuitively reasonable, strong assumptions on the availability and quality of environments must be made for the strict invariance property to be learnable. Recently, to relax these requirements on environments empirically, some works have proposed learning pseudo-environments for invariant learning. However, pursuing strict invariance under latent heterogeneity can be misleading, since the underlying invariance may already be violated during the pseudo-environment learning procedure. To this end, we propose the distributional invariance property as a relaxed alternative to strict invariance: it requires invariance only among sub-populations down to a prescribed scale and allows a certain degree of variation. We reformulate the invariant learning problem under latent heterogeneity into a relaxed form that pursues distributional invariance, based on which we propose our novel Distributionally Invariant Learning (DIL) framework together with two implementations, DIL-MMD and DIL-KL. Theoretically, we provide guarantees for the distributional invariance as well as bounds on the generalization error gap. Extensive experimental results validate the effectiveness of the proposed algorithms.
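The abstract gives no code, but as a rough illustration of what a DIL-MMD-style objective might look like, the sketch below pairs a standard ERM term with an RBF-kernel MMD penalty that aligns feature distributions across inferred sub-populations. This is a minimal sketch under assumptions, not the authors' algorithm: the names `rbf_mmd2` and `dil_mmd_loss`, the sub-population index tensor `groups`, the bandwidth `sigma`, and the trade-off weight `lam` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y
    under an RBF kernel (illustrative choice of kernel)."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def dil_mmd_loss(features, logits, labels, groups, lam=1.0):
    """Hypothetical DIL-MMD-style objective: ERM loss plus a penalty
    on the pairwise feature-distribution discrepancy between
    inferred sub-populations (not the authors' released code)."""
    erm = F.cross_entropy(logits, labels)
    penalty = features.new_zeros(())
    uniq = groups.unique()
    for i in range(len(uniq)):
        for j in range(i + 1, len(uniq)):
            penalty = penalty + rbf_mmd2(features[groups == uniq[i]],
                                         features[groups == uniq[j]])
    return erm + lam * penalty
```

A KL-based variant (in the spirit of DIL-KL) would presumably swap the MMD term for a divergence between estimated sub-population distributions; the trade-off weight `lam` then controls how much distributional variation is tolerated, matching the relaxation the abstract describes.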
Keywords
rationalization, learning