Mitigating Reversal Curse in Large Language Models via Semantic-aware Permutation Training
arXiv, 2024
Abstract
While large language models (LLMs) have achieved impressive performance
across diverse tasks, recent studies show that causal LLMs suffer from the
"reversal curse". A typical example is that a model which knows "A's father is
B" is unable to infer "B's child is A". This limitation poses a challenge
to the advancement of artificial general intelligence (AGI), as it suggests a
gap in the models' ability to comprehend and apply bidirectional reasoning. In
this paper, we first conduct substantial evaluation and identify that the root
cause of the reversal curse lies in the different word order between the
training and inference stage, namely, the poor ability of causal language
models to predict antecedent words within the training data. Accordingly,
permutation on the training data is considered as a potential solution, since
this can make the model predict antecedent words or tokens. However, previous
permutation methods may disrupt complete phrases or entities, thereby posing
challenges for the model to comprehend and learn from the training data. To
address this issue, we propose Semantic-aware Permutation Training (SPT), which
segments the training sentences into semantic units (i.e., entities or phrases)
with an assistant language model and permutes these units before feeding them
into the model. Extensive experiments demonstrate
that SPT effectively mitigates the reversal curse, since the performance on
reversed questions approximates that on forward ones, and that it significantly
outperforms existing methods.
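The abstract describes SPT only at a high level. Below is a minimal, hypothetical Python sketch of the core idea, permuting whole semantic units rather than individual tokens so that entities and phrases stay intact; the `segment_into_units` helper is a toy stand-in for the assistant language model mentioned above and is not part of the paper.

```python
import random

def semantic_permute(sentence: str, segment_fn) -> str:
    """Permute a sentence at the level of semantic units (entities/phrases),
    so complete phrases stay intact instead of being split token by token."""
    units = segment_fn(sentence)   # e.g. ["Mary Lee", "is the mother of", "Tom Cruise"]
    random.shuffle(units)          # random permutation of whole units
    return " ".join(units)

# Toy stand-in for the assistant language model's segmentation:
# here we simply treat comma-separated chunks as semantic units.
def segment_into_units(sentence: str):
    return [u.strip() for u in sentence.split(",")]

print(semantic_permute("Mary Lee, is the mother of, Tom Cruise", segment_into_units))
# One possible output: "is the mother of Tom Cruise Mary Lee"
```

The permuted sentences would then be mixed into the training data so the causal LM also learns to predict antecedent words; the actual segmentation and permutation strategy used in the paper may differ from this sketch.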