Compress to Impress: Unleashing the Potential of Compressive Memory in Real-World Long-Term Conversations
CoRR (2024)
Abstract
Existing retrieval-based methods have made significant strides in maintaining
long-term conversations. However, these approaches face challenges in memory
database management and accurate memory retrieval, hindering their efficacy in
dynamic, real-world interactions. This study introduces a novel framework,
COmpressive Memory-Enhanced Dialogue sYstems (COMEDY), which eschews
traditional retrieval modules and memory databases. Instead, COMEDY adopts a
"One-for-All" approach, utilizing a single language model to manage memory
generation, compression, and response generation. Central to this framework is
the concept of compressive memory, which integrates session-specific
summaries, user-bot dynamics, and past events into a concise memory format. To
support COMEDY, we curated a large-scale Chinese instruction-tuning dataset,
Dolphin, derived from real user-chatbot interactions. Comparative evaluations
demonstrate COMEDY's superiority over traditional retrieval-based methods in
producing more nuanced and human-like conversational experiences. Our codes are
available at https://github.com/nuochenpku/COMEDY.
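The "One-for-All" design described above can be illustrated with a minimal sketch: one model, prompted for three different tasks (memory generation, memory compression, and response generation), with no retrieval module or memory database. Everything here is hypothetical scaffolding, not the paper's actual prompts or code; `llm` is a stand-in for the single instruction-tuned model, and the function names are invented for illustration.

```python
# Hypothetical sketch of COMEDY's single-model pipeline (not the
# authors' implementation). `llm` is a placeholder for one
# instruction-tuned language model handling all three tasks.

def llm(prompt: str) -> str:
    # Placeholder: a real system would call the fine-tuned model here.
    return f"<model output for: {prompt[:40]}>"

def generate_memory(session: str) -> str:
    # Task 1 (memory generation): summarize one finished session.
    return llm(f"Summarize this session:\n{session}")

def compress_memory(summaries: list[str]) -> str:
    # Task 2 (memory compression): merge session summaries, user-bot
    # dynamics, and past events into one concise compressive memory,
    # replacing a growing memory database with a fixed-size record.
    joined = "\n".join(summaries)
    return llm(f"Compress these memories into one record:\n{joined}")

def respond(memory: str, context: str, user_msg: str) -> str:
    # Task 3 (response generation): answer grounded in the
    # compressive memory, with no retrieval step.
    return llm(f"Memory:\n{memory}\nContext:\n{context}\nUser: {user_msg}")

# Usage: after each session, refresh the compressive memory, then
# use it directly when generating the next reply.
summaries = [generate_memory(s) for s in ["session 1 ...", "session 2 ..."]]
memory = compress_memory(summaries)
reply = respond(memory, "", "How have I been doing lately?")
```

The key contrast with retrieval-based systems is that nothing is stored or looked up per turn: the compressed memory is regenerated from summaries and passed to the model whole.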