"According to ...": Prompting Language Models Improves Quoting from Pre-Training Data
CoRR (2023)
Abstract
Large Language Models (LLMs) may hallucinate and generate fake information,
despite pre-training on factual data. Inspired by the journalistic device of
"according to sources", we propose according-to prompting: directing LLMs to
ground responses against previously observed text. To quantify this grounding,
we propose a novel evaluation metric (QUIP-Score) that measures the extent to
which model-produced answers are directly found in underlying text corpora. We
illustrate with experiments on three corpora (Wikipedia, PubMed, and the U.S.
legal tax code) that these prompts improve grounding under our metrics, with
the additional benefit of often improving end-task performance. Furthermore,
prompts that ask the model to decrease grounding (or to ground to other
corpora) indeed decrease QUIP-Score, indicating the ability of LLMs to increase
or decrease grounded generations on request.
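The grounding metric described above can be illustrated with a simplified n-gram overlap check. The sketch below is a hypothetical stand-in, not the paper's QUIP-Score implementation (which uses character n-gram precision over a large indexed corpus); the function name and word-level tokenization are assumptions for illustration.

```python
def quip_like_score(answer: str, corpus: str, n: int = 3) -> float:
    """Fraction of word n-grams in `answer` found verbatim in `corpus`.

    A simplified illustration of precision-style grounding metrics
    like QUIP-Score; the real metric operates at the character
    n-gram level against a pre-training corpus index.
    """
    tokens = answer.lower().split()
    if len(tokens) < n:
        return 0.0
    corpus_lower = corpus.lower()
    ngrams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    hits = sum(1 for gram in ngrams if gram in corpus_lower)
    return hits / len(ngrams)
```

A fully quoted answer scores 1.0, while an answer sharing no n-grams with the corpus scores 0.0, so prompts that increase direct quoting raise the score.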
Keywords
prompting language models, quoting, pre-training