ToM-LM: Delegating Theory of Mind Reasoning to External Symbolic Executors in Large Language Models
arXiv (2024)
Abstract
Theory of Mind (ToM) refers to the ability of individuals to attribute mental
states to others. While Large Language Models (LLMs) have shown some promise
with ToM ability, they still struggle with complex ToM reasoning. Our approach
leverages an external symbolic executor, specifically the SMCDEL model checker,
and fine-tuning to improve the ToM reasoning ability of LLMs. In our approach,
an LLM is first fine-tuned on pairs of natural-language ToM problems and their
symbolic formulations, and is then instructed to generate the symbolic
formulation with a one-shot in-context example. The generated
symbolic formulation is then executed by the SMCDEL model checker, which
performs transparent and verifiable ToM reasoning and produces the final result. We
demonstrate that our approach, ToM-LM, shows a significant improvement over all
the constructed baselines. Our study proposes a novel view of externalizing
a particular component of ToM reasoning, namely reasoning about beliefs, and
suggests generalizing it to other aspects of ToM reasoning.
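The two-stage pipeline described above (LLM translates, symbolic executor reasons) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `generate_formulation` callable stands in for the fine-tuned LLM with its one-shot prompt, and the `smcdel` command-line invocation is an assumption about how the SMCDEL checker is installed and called on a given system.

```python
import subprocess

def run_smcdel(formulation: str) -> str:
    """Execute the SMCDEL model checker on a symbolic formulation.
    Assumes an `smcdel` binary on PATH that reads the formulation
    from stdin; the actual invocation depends on your installation."""
    proc = subprocess.run(
        ["smcdel", "-"],
        input=formulation,
        capture_output=True,
        text=True,
        check=True,
    )
    return proc.stdout.strip()

def solve_tom_problem(nl_problem, generate_formulation,
                      run_checker=run_smcdel) -> str:
    """ToM-LM pipeline sketch: the fine-tuned LLM maps the
    natural-language ToM problem to a symbolic formulation, and the
    external model checker performs the actual belief reasoning,
    yielding a transparent, verifiable verdict."""
    # Step 1: LLM translation (fine-tuned, one-shot in-context example).
    formulation = generate_formulation(nl_problem)
    # Step 2: delegate reasoning to the symbolic executor.
    return run_checker(formulation)
```

Decoupling translation from reasoning in this way means the final answer is determined by the model checker's semantics rather than the LLM's free-form generation, which is what makes the reasoning step verifiable.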