The Information of Large Language Model Geometry
CoRR (2024)
Abstract
This paper investigates the information encoded in the embeddings of large
language models (LLMs). We conduct simulations to analyze the representation
entropy and discover a power law relationship with model sizes. Building upon
this observation, we propose a theory based on (conditional) entropy to
elucidate the scaling law phenomenon. Furthermore, we delve into the
auto-regressive structure of LLMs and examine the relationship between the last
token and previous context tokens using information theory and regression
techniques. Specifically, we establish a theoretical connection between the
information gain of new tokens and ridge regression. Additionally, we explore
the effectiveness of Lasso regression in selecting meaningful tokens, which
sometimes outperforms the closely related attention weights. Finally, we
conduct controlled experiments and find that information is distributed across
tokens rather than being concentrated in specific "meaningful" tokens alone.
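The abstract's two main measurements can be illustrated with a minimal sketch. The snippet below assumes representation entropy is computed from the normalized eigenvalue spectrum of the token-embedding covariance, and that the last token is regressed on its context tokens with ridge or Lasso, each context token acting as one feature; the function names, shapes, and the use of scikit-learn are illustrative assumptions, not the paper's actual code or estimator.

```python
# Illustrative sketch (assumed, not the paper's implementation):
# (1) entropy of an embedding matrix via its covariance spectrum,
# (2) ridge/Lasso regression of the last token on its context tokens.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

def representation_entropy(H, eps=1e-12):
    """Shannon entropy of the normalized eigenvalue spectrum of the
    embedding covariance; one common proxy for how 'spread out' the
    representations are (an assumption about the paper's metric)."""
    H = H - H.mean(axis=0, keepdims=True)      # center token embeddings, shape (n_tokens, d)
    cov = H.T @ H / max(H.shape[0] - 1, 1)     # d x d sample covariance
    eig = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    p = eig / (eig.sum() + eps)                # normalized spectrum as a distribution
    p = p[p > eps]
    return float(-(p * np.log(p)).sum())

def regress_last_token(H, alpha=1.0, sparse=False):
    """Predict the last token's embedding from the previous context tokens;
    the fitted coefficients give one weight per context token, which can be
    compared against attention weights."""
    context, last = H[:-1], H[-1]              # (n-1, d) context, (d,) target
    model = (Lasso if sparse else Ridge)(alpha=alpha)
    model.fit(context.T, last)                 # features: one column per context token
    return model.coef_                         # shape (n-1,)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.standard_normal((32, 768))         # stand-in for one layer's hidden states
    print("representation entropy:", representation_entropy(H))
    print("ridge weights per context token:", regress_last_token(H).shape)
    print("lasso weights per context token:", regress_last_token(H, sparse=True).shape)
```

Running the same entropy estimate across models of different sizes would be one way to probe the power-law trend the abstract describes, and setting `sparse=True` mirrors the Lasso-based token selection it compares against attention weights.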