Explore Spurious Correlations at the Concept Level in Language Models for Text Classification
CoRR (2023)
Abstract
Language models (LMs) have achieved notable success in numerous NLP tasks,
employing both fine-tuning and in-context learning (ICL) methods. While
language models demonstrate exceptional performance, they face robustness
challenges due to spurious correlations arising from imbalanced label
distributions in training data or ICL exemplars. Previous research has
primarily concentrated on word, phrase, and syntax features, neglecting the
concept level, often due to the absence of concept labels and difficulty in
identifying conceptual content in input texts. This paper introduces two main
contributions. First, we employ ChatGPT to assign concept labels to texts,
assessing concept bias in models during fine-tuning or ICL on test data. We
find that LMs, when encountering spurious correlations between a concept and a
label in training or prompts, resort to shortcuts for predictions. Second, we
introduce a data rebalancing technique that incorporates ChatGPT-generated
counterfactual data, thereby balancing label distribution and mitigating
spurious correlations. Extensive experiments validate the efficacy of our method, which surpasses traditional token-removal approaches.
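The rebalancing idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each example already carries a concept label (in the paper, assigned by ChatGPT), and it takes a `generate_counterfactual` callable as a stand-in for a ChatGPT-based rewriter, which is a hypothetical placeholder. For each concept, it adds counterfactual examples until every label is equally represented, removing the concept-label correlation.

```python
from collections import Counter

def concept_label_balance(dataset, generate_counterfactual):
    """Balance label counts within each concept group.

    dataset: list of (text, concept, label) triples, where the concept
      label is assumed to have been assigned upstream (e.g. by ChatGPT).
    generate_counterfactual: callable(text, target_label) -> new text;
      a hypothetical stand-in for an LLM-based counterfactual rewriter.
    """
    counts = Counter((c, y) for _, c, y in dataset)
    labels = {y for _, _, y in dataset}
    balanced = list(dataset)
    for concept in {c for _, c, _ in dataset}:
        # Target: every label appears as often as the most frequent one.
        target = max(counts[(concept, y)] for y in labels)
        for y in labels:
            deficit = target - counts[(concept, y)]
            # Seed texts: same concept, different label, to be rewritten.
            seeds = [t for t, c, yy in dataset if c == concept and yy != y]
            if not seeds:
                continue
            for i in range(deficit):
                new_text = generate_counterfactual(seeds[i % len(seeds)], y)
                balanced.append((new_text, concept, y))
    return balanced
```

After rebalancing, the label distribution is uniform within each concept group, so a model can no longer use the concept as a shortcut for the label.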