Deductive Closure Training of Language Models for Coherence, Accuracy, and Updatability
CoRR (2024)
Abstract
While language models (LMs) can sometimes generate factually correct text and
estimate truth values of individual claims, these generally do not reflect a
globally coherent, manipulable model of the world. As a consequence, current
LMs also generate incorrect or nonsensical content, and are difficult to edit
and bring up to date. We present a method called Deductive Closure Training
(DCT) that uses LMs themselves to identify implications of (and contradictions
within) the text that they generate, yielding an efficient self-supervised
procedure for improving LM factuality. Given a collection of seed documents,
DCT prompts LMs to generate additional text implied by these documents, reason
globally about the correctness of this generated text, and finally fine-tune on
text inferred to be correct. Given seed documents from a trusted source, DCT
provides a tool for supervised model updating; if seed documents are sampled
from the LM itself, DCT enables fully unsupervised fine-tuning for improved
coherence and accuracy. Across the CREAK, MQUaKE, and Reversal Curse datasets,
supervised DCT improves LM fact verification and text generation accuracy by
3-26%.
These results show that LMs' reasoning capabilities during inference can be
leveraged during training to improve their reliability.
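The procedure the abstract describes — generate text implied by seed documents, judge the correctness of the generated text, then keep only what is inferred to be true for fine-tuning — can be sketched as a simple loop. This is a hypothetical illustration, not the paper's implementation: `generate_implications` and `score_truth` are assumed stand-ins for prompted LM calls, and the final fine-tuning step is left as a comment.

```python
# Hedged sketch of a Deductive Closure Training (DCT)-style data pipeline.
# `generate_implications` and `score_truth` are stubs for prompted LM calls;
# a real system would query a language model for both.

def generate_implications(doc):
    # Stub: a real LM would be prompted for statements implied by `doc`,
    # some of which may contradict it or each other.
    return [f"restatement: {doc}", f"contradiction: not {doc}"]

def score_truth(statement):
    # Stub: a real LM would estimate the probability that `statement`
    # is true, reasoning jointly over the generated set.
    return 0.1 if statement.startswith("contradiction") else 0.9

def build_dct_training_set(seed_docs, threshold=0.5):
    """Return seed documents plus generated statements whose estimated
    truth score clears `threshold`; contradictions are filtered out."""
    training_set = list(seed_docs)
    for doc in seed_docs:
        for stmt in generate_implications(doc):
            if score_truth(stmt) >= threshold:
                training_set.append(stmt)
    # fine_tune(model, training_set)  # final step: train on the kept text
    return training_set
```

With trusted seed documents this yields a supervised update set; sampling the seeds from the model itself would make the same loop fully self-supervised, as the abstract notes.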