Untangle the KNOT: Interweaving Conflicting Knowledge and Reasoning Skills in Large Language Models
arXiv (2024)
Abstract
Providing knowledge documents for large language models (LLMs) has emerged as
a promising solution to update the static knowledge inherent in their
parameters. However, knowledge in the document may conflict with the memory of
LLMs due to outdated or incorrect knowledge in the LLMs' parameters. This leads
to the necessity of examining the capability of LLMs to assimilate supplemental
external knowledge that conflicts with their memory. While previous studies
have examined the extent to which LLMs extract conflicting knowledge from the
provided text, they neglect the need to reason with conflicting knowledge.
Furthermore, detailed analysis is lacking on strategies for enabling LLMs to
resolve conflicting knowledge via prompting, decoding strategies, and supervised
fine-tuning. To address these limitations, we construct a new dataset, dubbed
KNOT, for examining knowledge conflict resolution in the form of question
answering. KNOT facilitates in-depth analysis by dividing reasoning with
conflicting knowledge into three levels: (1) Direct Extraction, where the answer
can be extracted directly from the conflicting knowledge; (2) Explicit Reasoning,
where LLMs must reason with the conflicting knowledge along a reasoning path
explicitly provided in the question; and (3) Implicit Reasoning, where LLMs must
infer the reasoning path on their own in order to reason with the conflicting
knowledge and answer the question. We also conduct extensive experiments on KNOT to establish
empirical guidelines for LLMs to utilize conflicting knowledge in complex
circumstances. The dataset and associated code can be accessed at
https://github.com/THU-KEG/KNOT.
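
For illustration, below is a minimal Python sketch of how the three reasoning levels described in the abstract might be distinguished over document-grounded, knowledge-conflict QA instances. The schema, field names, and example contents are hypothetical and are not taken from the released KNOT dataset; consult the repository above for the actual data format.

```python
from dataclasses import dataclass

@dataclass
class KnotStyleExample:
    """Hypothetical schema for a knowledge-conflict QA instance (fields are illustrative)."""
    question: str  # question whose gold answer depends on the provided document
    document: str  # passage whose knowledge may conflict with the LLM's parametric memory
    level: str     # "direct", "explicit", or "implicit"
    answer: str    # gold answer consistent with the document, not with the model's memory

# Illustrative instances for each reasoning level (contents are made up for clarity).
examples = [
    KnotStyleExample(
        question="Who is the CEO of AcmeCorp?",
        document="As of 2024, AcmeCorp is led by CEO Jane Doe.",
        level="direct",    # answer can be copied straight from the conflicting document
        answer="Jane Doe",
    ),
    KnotStyleExample(
        question=("In which country was the CEO of AcmeCorp born, "
                  "given that the CEO is the person named in the document?"),
        document="As of 2024, AcmeCorp is led by CEO Jane Doe, who was born in Canada.",
        level="explicit",  # the reasoning path (CEO -> birthplace) is spelled out in the question
        answer="Canada",
    ),
    KnotStyleExample(
        question="In which country was the CEO of AcmeCorp born?",
        document="As of 2024, AcmeCorp is led by CEO Jane Doe, who was born in Canada.",
        level="implicit",  # the model must infer the CEO -> birthplace path on its own
        answer="Canada",
    ),
]

def build_prompt(ex: KnotStyleExample) -> str:
    """Build a document-grounded prompt asking the model to prefer the document over its memory."""
    return (
        "Answer the question using only the document below, even if it conflicts "
        "with what you remember.\n\n"
        f"Document: {ex.document}\nQuestion: {ex.question}\nAnswer:"
    )

if __name__ == "__main__":
    for ex in examples:
        print(f"[{ex.level}] gold answer: {ex.answer}")
        print(build_prompt(ex))
        print("-" * 40)
```

The sketch only makes the three-level distinction concrete: the same document-grounded prompt is used throughout, while the question alone determines whether the conflicting fact can be extracted directly, followed along a stated reasoning path, or must be reached via a path the model infers itself.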