Learning to Plan and Generate Text with Citations
arXiv (2024)
Abstract
The increasing demand for the deployment of LLMs in information-seeking
scenarios has spurred efforts in creating verifiable systems, which generate
responses to queries along with supporting evidence. In this paper, we explore
the attribution capabilities of plan-based models which have been recently
shown to improve the faithfulness, grounding, and controllability of generated
text. We conceptualize plans as a sequence of questions which serve as
blueprints of the generated content and its organization. We propose two
attribution models that utilize different variants of blueprints, an
abstractive model where questions are generated from scratch, and an extractive
model where questions are copied from the input. Experiments on long-form
question-answering show that planning consistently improves attribution
quality. Moreover, the citations generated by blueprint models are more
accurate compared to those obtained from LLM-based pipelines lacking a planning
component.
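The extractive variant described above (questions copied from the input, then used as a blueprint for the cited answer) can be illustrated with a toy sketch. This is not the paper's actual model, which is neural; the function names, the sentence-splitting heuristic, and the `[i]` citation format are all illustrative assumptions.

```python
import re

def extractive_blueprint(passages):
    """Extractive planning (toy version): copy candidate questions verbatim
    from the input passages -- here, any sentence ending in '?'. Each question
    remembers the index of the passage it came from."""
    questions = []
    for i, passage in enumerate(passages):
        for sent in re.split(r"(?<=[.?!])\s+", passage):
            if sent.endswith("?"):
                questions.append((sent, i))
    return questions

def answer_with_citations(blueprint, passages):
    """Generate one answer sentence per blueprint question, attaching a
    citation marker [i] that points back at the source passage. A real
    blueprint model would condition generation on both the plan and the
    passages; here we just stub the answer text."""
    return [f"Answer to '{q}' [{i}]" for q, i in blueprint]

passages = [
    "What causes tides? Tides are caused by the Moon's gravity.",
    "How often do tides occur? Most coasts see two high tides a day.",
]
plan = extractive_blueprint(passages)
output = answer_with_citations(plan, passages)
```

The abstractive variant would differ only in the first stage: questions would be generated from scratch rather than copied, while the second stage (answering each question with an attached citation) stays the same.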