Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?

Mingyu Jin, Qinkai Yu, Jingyuan Huang, Qingcheng Zeng, Zhenting Wang, Wenyue Hua, Haiyan Zhao, Kai Mei, Yanda Meng, Kaize Ding, Fan Yang, Mengnan Du, Yongfeng Zhang

arXiv (2024)

Abstract
This paper studies the phenomenon that different concepts are learned at different layers of large language models, i.e., more difficult concepts are fully acquired only at deeper layers. We define the difficulty of a concept by its level of abstraction and crudely categorize concepts as factual, emotional, or inferential. Each category contains a spectrum of tasks, arranged from simple to complex. For example, within the factual dimension, tasks range from lie detection to categorizing mathematical problems. We employ a probing technique to extract representations from different layers of the model and apply these to classification tasks. Our findings reveal that models tend to efficiently classify simpler tasks, indicating that these concepts are learned in shallower layers. Conversely, more complex tasks may only be discernible at deeper layers, if at all. This paper explores the implications of these findings for our understanding of model learning processes and internal representations. Our implementation is available at .
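The probing setup the abstract describes, extracting per-layer representations and fitting a classifier on each, can be sketched in a few lines. The example below is a minimal illustration, not the authors' exact pipeline: "gpt2" stands in for the LLMs the paper evaluates, the two-sentence true/false dataset is invented, and mean pooling plus a logistic-regression probe are common but assumed choices.

```python
# A minimal sketch of layer-wise linear probing. Model, dataset, pooling,
# and probe below are illustrative assumptions, not the paper's settings.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def per_layer_features(texts):
    """Mean-pooled sentence vector from every layer, for each input text."""
    layer_feats = None
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Tuple of (embeddings + one tensor per transformer layer).
            hidden_states = model(**inputs).hidden_states
        # Average over tokens to get one vector per layer.
        vecs = [h.mean(dim=1).squeeze(0).numpy() for h in hidden_states]
        if layer_feats is None:
            layer_feats = [[] for _ in vecs]
        for i, v in enumerate(vecs):
            layer_feats[i].append(v)
    return layer_feats

# Hypothetical two-example "factual" task: 1 = true statement, 0 = false.
texts = ["The capital of France is Paris.", "The capital of France is Rome."]
labels = [1, 0]

# Fit one linear probe per layer; the depth at which accuracy rises
# indicates where the concept becomes linearly decodable.
for layer_idx, X in enumerate(per_layer_features(texts)):
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer_idx}: train accuracy = {probe.score(X, labels):.2f}")
```

A real experiment would use many examples and a held-out test split per layer; this sketch only shows the mechanics of reading hidden states at every depth and probing each one.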