K-Level Reasoning with Large Language Models
CoRR (2024)
Abstract
While Large Language Models (LLMs) have demonstrated their proficiency in
complex reasoning tasks, their performance in dynamic, interactive, and
competitive scenarios - such as business strategy and stock market analysis -
remains underexplored. To bridge this gap, we formally explore the dynamic
reasoning capabilities of LLMs for decision-making in rapidly evolving
environments. We introduce two game theory-based pilot challenges that mirror
the complexities of real-world dynamic decision-making. These challenges are
well-defined, enabling clear, controllable, and precise evaluation of LLMs'
dynamic reasoning abilities. Through extensive experiments, we find that
existing reasoning methods tend to falter in dynamic settings that require
k-level thinking - a key concept not tackled by previous works. To address
this, we propose a novel reasoning approach for LLMs, named "K-Level
Reasoning". This approach adopts the perspective of rivals to recursively
employ k-level thinking based on available historical information, which
significantly improves the prediction accuracy of rivals' subsequent moves and
informs more strategic decision-making. This research not only sets a robust
quantitative benchmark for the assessment of dynamic reasoning but also
markedly enhances the proficiency of LLMs in dynamic contexts.
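The recursive rival-modeling idea behind k-level thinking can be illustrated with a small sketch, here applied to the classic "guess a fraction of the average" game (a common game-theory testbed; the function name and parameters are illustrative, not the paper's implementation):

```python
def k_level_guess(k, naive_guess=50.0, factor=0.8):
    """Compute a k-level player's guess in the
    'guess factor-times-the-average' game.

    A level-0 player guesses naively; a level-k player predicts the
    rival's move by reasoning one level shallower (k-1), then
    best-responds to that prediction.
    """
    if k == 0:
        return naive_guess
    # Adopt the rival's perspective: assume they reason at level k-1.
    rival_guess = k_level_guess(k - 1, naive_guess, factor)
    # Best response: play the factor times the predicted average.
    return factor * rival_guess

# Deeper levels of recursion step toward the Nash equilibrium (0 here):
print([round(k_level_guess(k), 2) for k in range(4)])
# → [50.0, 40.0, 32.0, 25.6]
```

In the paper's setting, the level-(k-1) rival model would itself be an LLM prompted with the game's historical information rather than a fixed heuristic, but the recursive structure is the same.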