RULER: What's the Real Context Size of Your Long-Context Language Models?
arXiv (2024)
Abstract
The needle-in-a-haystack (NIAH) test, which examines the ability to retrieve
a piece of information (the "needle") from long distractor texts (the
"haystack"), has been widely adopted to evaluate long-context language models
(LMs). However, this simple retrieval-based test is indicative of only a
superficial form of long-context understanding. To provide a more comprehensive
evaluation of long-context LMs, we create a new synthetic benchmark RULER with
flexible configurations for customized sequence length and task complexity.
RULER expands upon the vanilla NIAH test to encompass variations with diverse
types and quantities of needles. Moreover, RULER introduces new task categories,
multi-hop tracing and aggregation, to test behaviors beyond searching from the
context. We evaluate ten long-context LMs with 13 representative tasks in
RULER. Despite achieving nearly perfect accuracy in the vanilla NIAH test, all
models exhibit large performance drops as the context length increases. While
these models all claim context sizes of 32K tokens or greater, only four models
(GPT-4, Command-R, Yi-34B, and Mixtral) can maintain satisfactory performance
at the length of 32K. Our analysis of Yi-34B, which supports a context length
of 200K, reveals substantial room for improvement as input length and task
complexity increase. We open-source RULER to spur comprehensive evaluation of
long-context LMs.
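
To make the vanilla NIAH setup concrete, the sketch below builds a synthetic test prompt: a single key-value "needle" is inserted at a chosen relative depth into repeated distractor sentences, and the model is asked to retrieve the value. This is a minimal illustration under assumed conventions, not RULER's actual implementation; the function name, filler sentence, and needle template are all assumptions.

```python
# Minimal sketch of a vanilla needle-in-a-haystack (NIAH) prompt builder.
# All names here (make_niah_prompt, the filler sentence, the needle
# template) are illustrative assumptions, not RULER's real code.
import random
import string


def make_niah_prompt(num_filler: int = 2000, depth: float = 0.5,
                     seed: int = 0) -> tuple[str, str]:
    """Build a NIAH prompt: distractor text with one key-value 'needle'
    inserted at relative position `depth` (0.0 = start, 1.0 = end).
    Returns (prompt, expected_answer)."""
    rng = random.Random(seed)
    key = "".join(rng.choices(string.ascii_lowercase, k=8))
    value = "".join(rng.choices(string.digits, k=6))
    needle = f"The special magic number for {key} is {value}."

    # The haystack is just a filler sentence repeated many times;
    # longer context lengths come from increasing num_filler.
    filler = "The grass is green. The sky is blue. The sun is yellow."
    haystack = [filler] * num_filler
    haystack.insert(int(depth * num_filler), needle)

    prompt = (
        " ".join(haystack)
        + f"\n\nWhat is the special magic number for {key}? Answer:"
    )
    return prompt, value


if __name__ == "__main__":
    prompt, answer = make_niah_prompt(num_filler=100, depth=0.25)
    print(prompt[-120:])       # tail of the prompt, ending in the question
    print("expected:", answer)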