Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
arXiv (2023)
Abstract
The dynamic nature of knowledge in an ever-changing world poses challenges for
language models trained on static data; a model deployed in the real world must
not only acquire new knowledge but also overwrite outdated information with
up-to-date facts. To study the ability of language models to handle these
time-dependent dynamics in human language, we introduce a novel task,
EvolvingQA, a temporally evolving question-answering benchmark designed for
training and evaluating LMs on an evolving Wikipedia database. The construction
of EvolvingQA is automated by our pipeline using large language models. We
find that existing continual learning baselines struggle to update and remove
outdated knowledge. Our analysis suggests that models fail to rectify knowledge
because of small weight gradients. In addition, we show that language models
particularly struggle to reflect changes in numerical or temporal information.
Our work aims to model the dynamic nature of real-world information, enabling
faithful evaluation of the evolution-adaptability of language models.