Batch Universal Prediction
CoRR (2024)
Abstract
Large language models (LLMs) have recently gained much popularity due to
their surprising ability to generate human-like English sentences. LLMs are
essentially predictors, estimating the probability of a sequence of words given
the past. Therefore, it is natural to evaluate their performance from a
universal prediction perspective. In order to do so fairly, we introduce the
notion of batch regret as a modification of the classical average regret, and
we study its asymptotic value for add-constant predictors, in the case of
memoryless sources and first-order Markov sources.
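As a point of reference, an add-constant predictor assigns, after observing a sequence, a next-symbol probability proportional to the symbol's count plus a fixed constant. The sketch below is a minimal illustration for a memoryless source over a finite alphabet; the function name, the parameter `beta`, and the example sequence are assumptions for illustration, not part of the paper.

```python
from collections import Counter

def add_constant_predictor(sequence, alphabet, beta=1.0):
    """Add-constant (add-beta) probability assignment (illustrative sketch).

    After seeing x_1..x_n, each symbol a in the alphabet gets probability
    (count(a) + beta) / (n + beta * |alphabet|).
    """
    counts = Counter(sequence)
    denom = len(sequence) + beta * len(alphabet)
    return {a: (counts[a] + beta) / denom for a in alphabet}

# With beta = 1 (Laplace's rule of succession) over the alphabet {a, b}:
probs = add_constant_predictor("aab", alphabet="ab", beta=1.0)
# counts: a=2, b=1; denominator = 3 + 1*2 = 5, so P(a) = 3/5, P(b) = 2/5
```

Choosing `beta = 1` recovers the classical Laplace estimator; other constants (e.g. 1/2, the Krichevsky–Trofimov choice) trade off how aggressively unseen symbols are discounted.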