Larimar: Large Language Models with Episodic Memory Control

Payel Das, Subhajit Chaudhury, Elliot Nelson, Igor Melnyk, Sarath Swaminathan, Sihui Dai, Aurélie Lozano, Georgios Kollias, Vijil Chenthamarakshan, Jiří Navrátil, Soham Dan, Pin-Yu Chen

arXiv (2024)

Abstract
Efficient and accurate updating of knowledge stored in Large Language Models (LLMs) is one of the most pressing research challenges today. This paper presents Larimar, a novel, brain-inspired architecture for enhancing LLMs with a distributed episodic memory. Larimar's memory allows for dynamic, one-shot updates of knowledge without the need for computationally expensive re-training or fine-tuning. Experimental results on multiple fact editing benchmarks demonstrate that Larimar not only attains accuracy comparable to the most competitive baselines, even in the challenging sequential editing setup, but also excels in speed, yielding speed-ups of 4-10x depending on the base LLM, as well as flexibility, since the proposed architecture is simple, LLM-agnostic, and hence general. We further provide mechanisms for selective fact forgetting and input context length generalization with Larimar and show their effectiveness.
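
The abstract describes an external episodic memory supporting one-shot knowledge writes, reads, and selective forgetting. As a rough illustration of how such a memory can operate, the sketch below implements a toy pseudo-inverse-based associative memory in NumPy. It is not the paper's actual method: the class name, the least-squares write/forget rules, and the blank-encoding forgetting trick are all assumptions made here for illustration.

    import numpy as np

    class EpisodicMemory:
        """Toy linear episodic memory: facts are dense encodings written,
        read, and erased via pseudo-inverse addressing (illustrative only)."""

        def __init__(self, num_slots: int, dim: int, seed: int = 0):
            rng = np.random.default_rng(seed)
            # Random initial memory matrix; a real system would learn it
            # jointly with an encoder/decoder rather than fix it.
            self.M = rng.standard_normal((num_slots, dim)) / np.sqrt(dim)

        def write(self, Z: np.ndarray) -> None:
            # One-shot write: compute addressing weights W for the new
            # encodings Z, then nudge the memory so that W @ M ~ Z.
            W = Z @ np.linalg.pinv(self.M)
            self.M = self.M + np.linalg.pinv(W) @ (Z - W @ self.M)

        def read(self, z_query: np.ndarray) -> np.ndarray:
            # Read: address with the query encoding and return what the
            # memory reconstructs at that address.
            w = z_query @ np.linalg.pinv(self.M)
            return w @ self.M

        def forget(self, Z_old: np.ndarray, z_blank: np.ndarray) -> None:
            # Selective forgetting (assumed mechanism): re-address the old
            # encodings and push the memory's response there toward a
            # neutral "blank" encoding, leaving other slots intact.
            W = Z_old @ np.linalg.pinv(self.M)
            Z_blank = np.tile(z_blank, (Z_old.shape[0], 1))
            self.M = self.M + np.linalg.pinv(W) @ (Z_blank - W @ self.M)

    # Toy usage: write two "fact" encodings, read one back, then forget it.
    mem = EpisodicMemory(num_slots=16, dim=32)
    Z = np.random.default_rng(1).standard_normal((2, 32))
    mem.write(Z)
    print(np.allclose(mem.read(Z[0]), Z[0], atol=1e-6))  # expect True: fact reads back
    mem.forget(Z[:1], z_blank=np.zeros(32))

In the actual architecture, a memory module of this kind would sit between a trained encoder and the LLM decoder; the sketch above only shows the memory arithmetic, not how the LLM is conditioned on the readouts.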