Meaningful Learning: Advancing Abstract Reasoning in Large Language Models via Generic Fact Guidance
arXiv (2024)
Abstract
Large language models (LLMs) have achieved impressive performance and strong
explainability across various reasoning scenarios, marking a significant stride
towards mimicking human-like intelligence. Despite this, when tasked with
simple questions supported by a generic fact, LLMs often fail to provide
consistent and precise answers, indicating a deficiency in abstract reasoning
abilities. This has sparked a vigorous debate about whether LLMs are genuinely
reasoning or merely memorizing. In light of this, we design a preliminary study
to quantify and delve into the abstract reasoning abilities of existing LLMs.
Our findings reveal a substantial discrepancy between their general reasoning
and abstract reasoning performance. To mitigate this problem, we tailor an
abstract reasoning dataset (AbsR) together with a meaningful learning paradigm
to teach LLMs how to leverage generic facts for reasoning purposes. The results
show that our approach not only boosts the general reasoning performance of
LLMs but also makes considerable strides towards their capacity for abstract
reasoning, moving beyond simple memorization or imitation to a more nuanced
understanding and application of generic facts.