DenseFormer: Enhancing Information Flow in Transformers via Depth Weighted Averaging
CoRR (2024)
Abstract
The transformer architecture from Vaswani et al. (2017) is now ubiquitous
across application domains, from natural language processing to speech
processing and image understanding. We propose DenseFormer, a simple
modification to the standard architecture that improves the perplexity of the
model without increasing its size – adding only a few thousand parameters even for
large-scale models in the 100B-parameter range. Our approach relies on an
additional averaging step after each transformer block, which computes a
weighted average of current and past representations – we refer to this
operation as Depth-Weighted-Average (DWA). The learned DWA weights exhibit
coherent patterns of information flow, revealing the strong and structured
reuse of activations from distant layers. Experiments demonstrate that
DenseFormer is more data-efficient, reaching the same perplexity as much deeper
transformer models, and that, at the same perplexity, these new models
outperform transformer baselines in memory efficiency and inference time.
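
As a rough illustration of the DWA step described in the abstract, the sketch below adds, after each block, a learnable scalar weight for the current output and for every earlier representation (including the embedding output). It is a minimal PyTorch-style sketch, not the authors' implementation: the module and class names (`DepthWeightedAverage`, `DenseFormerSketch`), the block interface, and the identity initialization of the weights are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class DepthWeightedAverage(nn.Module):
    """Weighted average of the current block output and all earlier
    representations. At block index i this adds only i + 2 scalar weights."""

    def __init__(self, num_past: int):
        super().__init__()
        # One learnable scalar per past representation plus one for the
        # current output. Identity initialization (weight 1 on the current
        # output, 0 elsewhere) is an assumption, not taken from the paper.
        init = torch.zeros(num_past + 1)
        init[-1] = 1.0
        self.alpha = nn.Parameter(init)

    def forward(self, past: list[torch.Tensor], current: torch.Tensor) -> torch.Tensor:
        # Stack (num_past + 1) tensors of shape (B, T, D) and mix them
        # with the learned scalars.
        stacked = torch.stack(past + [current], dim=0)          # (k+1, B, T, D)
        weights = self.alpha.view(-1, 1, 1, 1)
        return (weights * stacked).sum(dim=0)


class DenseFormerSketch(nn.Module):
    """Hypothetical wrapper: standard transformer blocks, each followed
    by a DWA step over all representations computed so far."""

    def __init__(self, blocks: nn.ModuleList):
        super().__init__()
        self.blocks = blocks
        # Block i sees the embedding output plus i earlier DWA outputs.
        self.dwa = nn.ModuleList(
            DepthWeightedAverage(num_past=i + 1) for i in range(len(blocks))
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        history = [x]                      # embedding output is depth 0
        for block, dwa in zip(self.blocks, self.dwa):
            out = block(history[-1])       # standard transformer block
            out = dwa(history, out)        # depth-weighted average
            history.append(out)
        return history[-1]
```

Because each DWA step contributes only a handful of scalars, the total overhead for a model with L blocks is on the order of L(L + 3)/2 parameters, which is consistent with the "few thousand parameters" figure quoted in the abstract for deep, large-scale models.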