SoLA: Solver-Layer Adaption of LLM for Better Logic Reasoning
CoRR (2024)
Abstract
Considering the challenges faced by large language models (LLMs) on logical
reasoning, prior efforts have sought to transform problem-solving through tool
learning. While progress has been made on small-scale problems, solving
industrial cases remains difficult due to their large scale and intricate
expressions. In this paper, we propose a novel solver-layer adaptation (SoLA)
method, where we introduce a solver as a new layer of the LLM to differentially
guide solutions towards satisfiability. In SoLA, the LLM aims to comprehend the
search space described in natural language and to identify local solutions of
the highest quality, while the solver layer focuses solely on the constraints
not satisfied by the initial solution. Leveraging MaxSAT as a bridge, we define
forward and backward transfer gradients, enabling the final model to converge
to a satisfying solution or to prove unsatisfiability. Backdoor theory ensures
that SoLA can obtain accurate solutions within polynomially many loops. We
evaluate SoLA on various datasets and empirically demonstrate that it
consistently outperforms existing symbolic solvers (including Z3 and Kissat)
and tool-learning methods in efficiency on large-scale problems.
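To make the division of labor concrete, here is a minimal, illustrative sketch of the idea that a solver layer only needs to repair the constraints an initial candidate (standing in for the LLM's guess) leaves unsatisfied. This is not SoLA's actual gradient-based mechanism; it is a simple WalkSAT-style local-repair loop over DIMACS-style clauses, with the function names (`unsatisfied`, `repair`) and the example formula chosen purely for illustration.

```python
import random

def unsatisfied(clauses, assignment):
    """Return the clauses not satisfied by the current assignment.

    Clauses are lists of nonzero ints (DIMACS style): literal v means
    variable v is True, -v means variable v is False.
    """
    return [c for c in clauses
            if not any((lit > 0) == assignment[abs(lit)] for lit in c)]

def repair(clauses, assignment, max_loops=1000, seed=0):
    """Toy solver layer: starting from an initial assignment (the
    LLM's candidate solution), repeatedly pick an unsatisfied clause
    and flip one of its variables until all clauses are satisfied or
    the loop budget runs out."""
    rng = random.Random(seed)
    assignment = dict(assignment)
    for _ in range(max_loops):
        unsat = unsatisfied(clauses, assignment)
        if not unsat:
            return assignment          # a satisfying solution
        lit = rng.choice(rng.choice(unsat))
        assignment[abs(lit)] = not assignment[abs(lit)]
    return None                        # budget exhausted

# Example formula: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
init = {1: False, 2: False, 3: False}  # hypothetical initial LLM guess
model = repair(clauses, init)
assert model is not None and not unsatisfied(clauses, model)
```

The point of the sketch is the interface, not the algorithm: the repair step never touches clauses the initial guess already satisfies, which is the property the abstract attributes to SoLA's solver layer.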