XL^2Bench: A Benchmark for Extremely Long Context Understanding with Long-range Dependencies
arXiv (2024)
Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across
diverse tasks but are constrained by their small context window sizes. Various
efforts have been made to expand the context window, accommodating up to 200K
input tokens. Meanwhile, building high-quality benchmarks with much longer
texts and more demanding tasks is of immense practical interest for advancing
research on long-context understanding in LLMs. However, prior benchmarks
construct datasets that ostensibly cater to long-text comprehension by merely
expanding the inputs of traditional tasks; this falls short of exhibiting the
unique characteristics of long-text understanding, namely tasks with long-range
dependencies and text lengths compatible with the context window sizes of
modern LLMs. In this paper, we introduce XL^2Bench, a benchmark for extremely
long context understanding with long-range dependencies. It comprises three
scenarios (Fiction Reading, Paper Reading, and Law Reading) and four tasks of
increasing complexity (Memory Retrieval, Detailed Understanding, Overall
Understanding, and Open-ended Generation), covering 27 subtasks in English and
Chinese, with an average length of over 100K words (English) and 200K
characters (Chinese). Evaluating six leading LLMs on XL^2Bench, we find that
their performance significantly lags behind human levels. Moreover, the
observed decline in performance across both the original and enhanced datasets
underscores the efficacy of our approach to mitigating data contamination.