The Garden of Forking Paths: Observing Dynamic Parameters Distribution in Large Language Models
arxiv(2024)
Abstract
A substantial gap persists in understanding the reasons behind the
exceptional performance of the Transformer architecture in NLP. A particularly
unexplored area is the mechanistic description of how the distribution of
parameters evolves over time during training. In this work we suggest that
looking at the time evolution of the statistical distribution of model
parameters, and specifically at bifurcation effects, can help in understanding
model quality, potentially reducing training costs and evaluation efforts while
empirically showing the reasons behind the effectiveness of weight
sparsification.
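As a concrete illustration of the kind of observation the abstract describes, the sketch below logs per-layer histograms of parameter values at intervals during training, so their evolution (and any splitting into multiple modes) can be inspected afterwards. This is a minimal sketch, not the authors' code: the toy model, synthetic data, logging interval, and bin count are all assumptions chosen for illustration.

```python
# Minimal sketch: track how the statistical distribution of model
# parameters evolves over training time. Model/data/interval are
# illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model and synthetic regression data (assumptions for illustration).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
x = torch.randn(256, 16)
y = torch.randn(256, 1)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# For each named parameter, store (step, counts, bin_edges) snapshots;
# plotting these over steps shows how each layer's weight distribution
# drifts, sharpens, or splits into several modes over time.
history = {}

for step in range(1000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

    if step % 100 == 0:
        for name, param in model.named_parameters():
            counts, edges = torch.histogram(param.detach().flatten(), bins=50)
            history.setdefault(name, []).append((step, counts, edges))
```

Stacking the stored histograms for one layer along the step axis gives a heatmap of the distribution's time evolution, which is one simple way to look for the bifurcation-like effects the abstract refers to.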