On the Failure of Invariant Risk Minimization and an Effective Fix via Classification Error Control

2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP)

Abstract
Invariant Risk Minimization (IRM) is a well-known domain generalization framework that has received much attention over the past few years. IRM learns domain-invariant features from multiple training domains by finding a representation such that the classifier that is optimal on top of it is the same across all domains. In this paper, we show that although IRM is based on a compelling idea, it fails on a simple toy example in which multiple domain-invariant features exist, each admitting a classifier that is optimal for all domains. Based on this observation, we propose an effective modification of the original IRM algorithm, named Error-Control Invariant Risk Minimization, which learns different domain-invariant features by controlling the training classification error. The resulting algorithm performs well on both our synthetic toy dataset and real-world datasets.
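To make the idea concrete, the sketch below combines the standard IRMv1 gradient penalty (Arjovsky et al., 2019) with an added training-error constraint, as suggested by the abstract. The exact Error-Control IRM formulation is not given here, so the hinge-style budget term and the names `mu` and `error_budget` are illustrative assumptions, not the authors' published objective.

```python
# Minimal sketch: IRMv1 penalty plus an assumed classification-error constraint.
import torch
import torch.nn.functional as F

def irm_penalty(logits, labels):
    # IRMv1 penalty: squared gradient of the risk w.r.t. a fixed
    # "dummy" classifier scale w = 1.0 (Arjovsky et al., 2019).
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

def error_control_irm_loss(model, envs, lam=1.0, mu=10.0, error_budget=0.05):
    """envs: list of (x, y) batches, one per training domain.
    mu and error_budget are hypothetical knobs for the error constraint."""
    total_risk, total_penalty, total_err = 0.0, 0.0, 0.0
    for x, y in envs:
        logits = model(x)
        total_risk += F.cross_entropy(logits, y)
        total_penalty += irm_penalty(logits, y)
        # Differentiable surrogate for the 0-1 training error:
        # one minus the probability assigned to the true class.
        probs = F.softmax(logits, dim=-1)
        total_err += (1.0 - probs.gather(1, y.unsqueeze(1))).mean()
    n = len(envs)
    # Penalize only when the average (soft) training error exceeds the budget,
    # steering the learner toward invariant features that also classify well.
    error_violation = torch.relu(total_err / n - error_budget)
    return total_risk / n + lam * total_penalty / n + mu * error_violation
```

In this sketch, raising `mu` or tightening `error_budget` forces the representation to retain invariant features that keep training error low, which is the abstract's stated mechanism for selecting among multiple domain-invariant features.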
Keywords
Domain generalization,multiple domain-invariant features,classification error constraints