
Abstraction-of-Thought Makes Language Models Better Reasoners

EMNLP 2024

Abstract
Abstract reasoning, the ability to reason from the abstract essence of a problem, serves as a key to generalization in human reasoning. However, eliciting language models to perform reasoning with abstraction remains unexplored. This paper seeks to bridge this gap by introducing a novel structured reasoning format called Abstraction-of-Thought (AoT). The uniqueness of AoT lies in its explicit requirement for varying levels of abstraction within the reasoning process. This approach elicits language models to first contemplate at the abstract level before incorporating concrete details, a step overlooked by the prevailing step-by-step Chain-of-Thought (CoT) method. To align models with the AoT format, we present AoT Collection, a generic finetuning dataset consisting of 348k high-quality samples with AoT reasoning processes, collected via an automated and scalable pipeline. We finetune a wide range of language models with AoT Collection and conduct extensive evaluations on 23 unseen tasks from the challenging benchmark Big-Bench Hard. Experimental results indicate that models aligned to the AoT reasoning format substantially outperform those aligned to CoT on many reasoning tasks.
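The abstract contrasts AoT's abstraction-first format with step-by-step CoT prompting but does not reproduce the concrete template here. The sketch below is a hypothetical illustration of that contrast only: the `AOT_TEMPLATE` wording and the `build_prompt` helper are assumptions for illustration, not the paper's actual format.

```python
# Hypothetical sketch contrasting CoT and AoT-style prompts.
# The AoT wording below is an assumption; the paper defines its own template.

COT_TEMPLATE = (
    "Question: {question}\n"
    "Let's think step by step."
)

AOT_TEMPLATE = (
    "Question: {question}\n"
    "First, describe the abstract essence of the problem and outline a high-level plan.\n"
    "Then, fill in the concrete details and compute the final answer."
)

def build_prompt(question: str, style: str = "aot") -> str:
    """Render a reasoning prompt in CoT style or the assumed AoT style."""
    template = AOT_TEMPLATE if style == "aot" else COT_TEMPLATE
    return template.format(question=question)

if __name__ == "__main__":
    q = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
    print(build_prompt(q, style="cot"))
    print()
    print(build_prompt(q, style="aot"))
```

The point of the contrast is structural: CoT asks the model to proceed directly through concrete steps, whereas AoT first requests an abstract characterization of the problem before the concrete solution.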