Hacking Task Confounder in Meta-Learning
arXiv (2023)

Abstract
Meta-learning enables rapid generalization to new tasks by learning knowledge
from various tasks. It is intuitively assumed that as the training progresses,
a model will acquire richer knowledge, leading to better generalization
performance. However, our experiments reveal an unexpected result: there is
negative knowledge transfer between tasks, affecting generalization
performance. To explain this phenomenon, we construct Structural Causal Models
(SCMs) for causal analysis. Our investigation uncovers spurious correlations
between task-specific causal factors and labels in meta-learning.
Furthermore, the confounding factors differ across batches. We refer to these
confounding factors as "Task Confounders". Based on these findings,
we propose a plug-and-play Meta-learning Causal Representation Learner
(MetaCRL) to eliminate task confounders. It encodes decoupled generating
factors from multiple tasks and utilizes an invariant-based bi-level
optimization mechanism to ensure their causality for meta-learning. Extensive
experiments on various benchmark datasets demonstrate that our work achieves
state-of-the-art (SOTA) performance.
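The abstract does not spell out the bi-level optimization mechanism, but the general meta-learning pattern it builds on can be illustrated with a minimal sketch: an inner level adapts task-specific parameters from the shared meta-parameters, and an outer level updates the meta-parameters based on post-adaptation performance. The toy regression tasks, learning rates, and the first-order meta-update below are illustrative assumptions, not the paper's actual MetaCRL procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    # Hypothetical toy task family: regress y = a * x, where the slope a
    # is the task-specific factor that varies between tasks.
    a = rng.uniform(0.5, 2.0)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

def loss(w, x, y):
    # Mean squared error of the scalar model w * x.
    return np.mean((w * x - y) ** 2)

def grad(w, x, y):
    # Gradient of the MSE loss with respect to w.
    return np.mean(2.0 * (w * x - y) * x)

def meta_train(steps=200, inner_lr=0.1, outer_lr=0.05):
    w_meta = 0.0
    for _ in range(steps):
        x, y = make_task()
        # Inner level: one gradient step of task-specific adaptation.
        w_task = w_meta - inner_lr * grad(w_meta, x, y)
        # Outer level: first-order meta-update using the adapted
        # parameters (a FOMAML-style approximation, assumed here).
        w_meta -= outer_lr * grad(w_task, x, y)
    return w_meta

w_meta = meta_train()
```

In this sketch, the meta-parameter `w_meta` settles near the center of the task distribution, so a single inner adaptation step on a new task already lowers its loss; MetaCRL's contribution, per the abstract, is to additionally constrain such a bi-level loop so the learned factors remain causal rather than confounded across batches.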