Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate
arXiv (2024)
Abstract
Generalization remains a central challenge in machine learning. In this work,
we propose Learning from Teaching (LoT), a novel regularization technique for
deep neural networks to enhance generalization. Inspired by the human ability
to capture concise and abstract patterns, we hypothesize that generalizable
correlations are expected to be easier to imitate. LoT operationalizes this
concept to improve the generalization of the main model with auxiliary student
learners. The student learners are trained by the main model and, in turn,
provide feedback to help the main model capture more generalizable and imitable
correlations. Our experimental results across several domains, including Computer Vision and Natural Language Processing, as well as Reinforcement Learning, demonstrate that introducing LoT brings
significant benefits compared to training models on the original dataset. The
results suggest that LoT is effective and efficient at identifying generalizable information at the right scales while discarding spurious data correlations, making it a valuable addition to current machine learning methods.
Code is available at https://github.com/jincan333/LoT.
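The abstract describes LoT only at a high level: the main model trains auxiliary student learners, and how easily the students imitate it is fed back as a regularization signal. Below is a minimal PyTorch-style sketch of one way that loop could look. The toy models, the `alpha` weight, and the direction of the KL-divergence feedback are illustrative assumptions, not the paper's exact objective; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy setup: a main model (teacher) and a smaller student,
# both simple MLP classifiers on random data. Names such as `alpha`
# and the loss weighting are illustrative assumptions.
torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
student = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
alpha = 0.5  # weight of the imitability feedback term (assumed)

for step in range(100):
    x = torch.randn(64, 20)
    y = torch.randint(0, 5, (64,))

    # 1) Student update: imitate the teacher's current outputs
    #    (standard distillation with a KL objective).
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    loss_s = F.kl_div(F.log_softmax(s_logits, dim=-1),
                      F.softmax(t_logits, dim=-1),
                      reduction="batchmean")
    opt_s.zero_grad()
    loss_s.backward()
    opt_s.step()

    # 2) Teacher update: task loss plus a feedback term, i.e. the
    #    divergence between the frozen student's predictions and the
    #    teacher's, minimized w.r.t. the teacher. This nudges the
    #    teacher toward correlations the student can already imitate.
    with torch.no_grad():
        s_probs = F.softmax(student(x), dim=-1)
    t_logits = teacher(x)
    task_loss = F.cross_entropy(t_logits, y)
    feedback = F.kl_div(F.log_softmax(t_logits, dim=-1),
                        s_probs,
                        reduction="batchmean")
    loss_t = task_loss + alpha * feedback
    opt_t.zero_grad()
    loss_t.backward()
    opt_t.step()
```

In this sketch, the student chases the teacher's current predictions, while the teacher trades off task accuracy against how closely the student can already match it, so outputs that are hard to imitate are penalized.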