Relevant or Random: Can LLMs Truly Perform Analogical Reasoning?
arXiv (2024)
Abstract
Analogical reasoning is a unique ability of humans to address unfamiliar
challenges by transferring strategies from relevant past experiences. One key
finding in psychology is that compared with irrelevant past experiences,
recalling relevant ones can help humans better handle new tasks.
Coincidentally, the NLP community has also recently found that self-generating
relevant examples in the context can help large language models (LLMs) better
solve a given problem than hand-crafted prompts. However, it is not yet clear
whether relevance is the key factor eliciting such capability, i.e., can LLMs
benefit more from self-generated relevant examples than irrelevant ones? In
this work, we systematically explore whether LLMs can truly perform analogical
reasoning on a diverse set of reasoning tasks. With extensive experiments and
analysis, we show that self-generated random examples can surprisingly achieve
comparable or even better performance, e.g., a 4% performance boost on GSM8K with
random biological examples. We find that the accuracy of self-generated
examples is the key factor and subsequently design two improved methods with
significantly reduced inference costs. Overall, we aim to advance a deeper
understanding of LLM analogical reasoning and hope this work stimulates further
research in the design of self-generated contexts.
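As a minimal illustration of the kind of prompting studied here (a sketch, not the authors' exact template), self-generated exemplars can be elicited by asking the model itself to recall problems before solving, either relevant ones or deliberately random ones; the function name and wording below are hypothetical:

```python
def build_self_generation_prompt(problem: str, n_examples: int = 3,
                                 relevant: bool = True) -> str:
    """Build a prompt that asks an LLM to self-generate in-context
    examples before solving, instead of using hand-crafted few-shot
    prompts. Illustrative sketch only; the paper's actual templates
    may differ in wording.
    """
    if relevant:
        # Analogical-prompting style: recall problems relevant to the query.
        recall = (f"Recall {n_examples} problems that are relevant to the "
                  "problem below, and solve each of them step by step.")
    else:
        # The paper's surprising comparison condition: random, unrelated
        # (e.g., biological) examples can perform comparably or better.
        recall = (f"Recall {n_examples} random biological examples, "
                  "unrelated to the problem below, and describe them.")
    return (f"{recall}\n\n"
            f"Now solve this problem step by step:\n{problem}")

prompt = build_self_generation_prompt(
    "If 3 pens cost $6, how much do 7 pens cost?", relevant=False)
print(prompt)
```

Swapping `relevant=True` for `relevant=False` is the core manipulation the paper investigates: holding the self-generation format fixed while varying only the relevance of the recalled examples.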