Fewer Truncations Improve Language Modeling
arXiv (2024)
Abstract
In large language model training, input documents are typically concatenated
together and then split into sequences of equal length to avoid padding tokens.
Despite its efficiency, the concatenation approach compromises data integrity
– it inevitably breaks many documents into incomplete pieces, leading to
excessive truncations that hinder the model from learning to compose logically
coherent and factually consistent content that is grounded on the complete
context. To address the issue, we propose Best-fit Packing, a scalable and
efficient method that packs documents into training sequences through
length-aware combinatorial optimization. Our method completely eliminates
unnecessary truncations while retaining the same training efficiency as
concatenation. Empirical results from both text and code pre-training show that
our method achieves superior performance (e.g., relatively +4.7% in reading
comprehension; +16.8% in context following), and reduces closed-domain
hallucination effectively by up to 58.3%.
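The "length-aware combinatorial optimization" described above is an instance of the classic bin-packing problem: document chunks of known token lengths must be packed into training sequences of fixed capacity without splitting any chunk. The sketch below illustrates the idea with the standard Best-Fit Decreasing heuristic; it is a simplified, quadratic-time illustration, not the paper's scalable implementation, and the function name and interface are hypothetical.

```python
def best_fit_packing(doc_lengths, max_len):
    """Pack document chunks into sequences of capacity max_len using the
    Best-Fit Decreasing bin-packing heuristic (illustrative sketch).

    Documents longer than max_len are first split into max_len-sized
    chunks, so packing itself never truncates a document.
    Returns a list of bins, each a list of chunk lengths.
    """
    # Split oversized documents into chunks of at most max_len tokens.
    chunks = []
    for n in doc_lengths:
        while n > max_len:
            chunks.append(max_len)
            n -= max_len
        if n > 0:
            chunks.append(n)

    bins = []  # each bin: [remaining_capacity, [chunk_len, ...]]
    for c in sorted(chunks, reverse=True):
        # Best fit: among bins that can still hold this chunk, pick the
        # one with the least remaining space (the tightest fit).
        best = None
        for b in bins:
            if b[0] >= c and (best is None or b[0] < best[0]):
                best = b
        if best is None:
            bins.append([max_len - c, [c]])  # open a new sequence
        else:
            best[0] -= c
            best[1].append(c)
    return [b[1] for b in bins]
```

For example, packing documents of lengths `[5, 3, 7, 2]` into sequences of capacity 8 yields three sequences with every document kept whole, whereas plain concatenation into length-8 blocks would split documents at arbitrary boundaries.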