Learning to Edit: Aligning LLMs with Knowledge Editing

Annual Meeting of the Association for Computational Linguistics (ACL), 2024 (CCF A)

The Hong Kong University of Science and Technology (Guangzhou)

Cited 18 | Views 111
Abstract
Knowledge editing techniques, aiming to efficiently modify a minor proportion of knowledge in large language models (LLMs) without negatively impacting performance across other inputs, have garnered widespread attention. However, existing methods predominantly rely on memorizing the updated knowledge, impeding LLMs from effectively combining the new knowledge with their inherent knowledge when answering questions. To this end, we propose a Learning to Edit (LTE) framework, focusing on teaching LLMs to apply updated knowledge into input questions, inspired by the philosophy of "Teach a man to fish." LTE features a two-phase process: (i) the Alignment Phase, which fine-tunes LLMs on a meticulously curated parallel dataset to make reliable, in-scope edits while preserving out-of-scope information and linguistic proficiency; and (ii) the Inference Phase, which employs a retrieval-based mechanism for real-time and mass knowledge editing. By comparing our approach with seven advanced baselines across four popular knowledge editing benchmarks and two LLM architectures, we demonstrate LTE's superiority in knowledge editing performance, robustness in both batch and sequential editing, minimal interference on general tasks, and rapid editing speeds. The data and code are available at https://github.com/YJiangcm/LTE.
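The Inference Phase described above can be illustrated with a minimal sketch: stored edits are ranked by similarity to an incoming query and the best matches are prepended to the prompt, so the alignment-tuned LLM can apply them. The function names (`retrieve_edits`, `build_prompt`) and the bag-of-words similarity are illustrative assumptions, not the authors' API; LTE itself would use a learned embedding retriever.

```python
# Sketch of retrieval-based knowledge editing at inference time.
from collections import Counter
import math

def _bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts (a stand-in for a
    learned embedding model)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_edits(query: str, edit_memory: list[str], k: int = 1) -> list[str]:
    """Return the k stored edits most similar to the query."""
    ranked = sorted(edit_memory, key=lambda e: _bow_cosine(query, e), reverse=True)
    return ranked[:k]

def build_prompt(query: str, edit_memory: list[str], k: int = 1) -> str:
    """Prepend the retrieved edits so the aligned LLM can combine the
    updated knowledge with its inherent knowledge when answering."""
    context = "\n".join(f"[Updated knowledge] {e}"
                        for e in retrieve_edits(query, edit_memory, k))
    return f"{context}\n[Query] {query}"

edit_memory = [
    "The CEO of Twitter is Linda Yaccarino.",
    "The capital of Australia is Canberra.",
]
print(build_prompt("Who is the CEO of Twitter?", edit_memory))
```

Because the edit memory is just a list, this design supports the mass editing setting: adding or retracting an edit is a constant-time list operation rather than a weight update.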
Keywords: Library Linked Data
Chat Paper Summary

Highlights: The Learning to Edit (LTE) framework teaches large language models (LLMs) to apply updated knowledge to input questions through a two-phase process, enabling efficient knowledge modification without degrading performance, and it outperforms seven advanced baselines in knowledge editing performance, robustness, and speed.

Method: The LTE framework comprises an Alignment Phase, which fine-tunes LLMs to make reliable in-scope edits while preserving out-of-scope information and linguistic proficiency, and an Inference Phase, which employs a retrieval-based mechanism for real-time and mass knowledge editing.

Experiments: LTE is compared with seven advanced baselines on four knowledge editing benchmarks and two LLM architectures, demonstrating superior knowledge editing performance, minimal interference with general tasks, and rapid editing speeds; the data and code are available at https://github.com/YJiangcm/LTE.