Large Language Models Can Learn Temporal Reasoning
CoRR (2024)
Abstract
Large language models (LLMs) learn temporal concepts from the co-occurrence
of related tokens in a sequence. Compared with conventional text generation,
temporal reasoning, which reaches a conclusion based on mathematical, logical
and commonsense knowledge, is more challenging. In this paper, we propose
TempGraph-LLM, a new paradigm for text-based temporal reasoning. Specifically,
we first teach LLMs to translate the context into a temporal graph. A
synthetic dataset, which is fully controllable and requires minimal
supervision, is constructed for pre-training on this task. We show in
experiments that LLMs benefit from this pre-training on other tasks as well.
On top of that, we guide LLMs to perform symbolic reasoning using the
strategies of Chain-of-Thought (CoT) bootstrapping and special data
augmentation. We observe that CoTs with symbolic reasoning yield more
consistent and reliable results than those using free text.
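
As a concrete illustration of the symbolic reasoning the abstract describes, the sketch below builds a toy temporal graph from event intervals and answers a before/after question by comparing interval endpoints. This is a minimal sketch under assumed conventions: the names (Event, TemporalGraph, relation) and the interval representation are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of symbolic reasoning over a temporal graph.
# The interval representation and all names below are assumptions
# for illustration; they are not the paper's implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    start: int  # e.g., a year
    end: int

class TemporalGraph:
    """A toy temporal graph: nodes are events, relations are derived."""

    def __init__(self, events):
        self.events = {e.name: e for e in events}

    def relation(self, a: str, b: str) -> str:
        """Symbolically compare two events via their interval endpoints."""
        ea, eb = self.events[a], self.events[b]
        if ea.end < eb.start:
            return "before"
        if eb.end < ea.start:
            return "after"
        return "overlaps"

# Example: a context like "Alice worked at X from 1997 to 2003, then at Y
# from 2005 to 2009." translated into a temporal graph.
tg = TemporalGraph([
    Event("work_at_X", 1997, 2003),
    Event("work_at_Y", 2005, 2009),
])

# A symbolic CoT step: the answer follows from endpoint comparison rather
# than free-text generation, so it is consistent and checkable.
print(tg.relation("work_at_X", "work_at_Y"))  # -> "before"
```

The point of the sketch is the design choice the abstract argues for: once the context is translated into a graph, each reasoning step reduces to a deterministic comparison, which is why graph-grounded CoTs are more reliable than free-text ones.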