Learning to Decode Collaboratively with Multiple Language Models
arXiv (2024)
Abstract
We propose a method to teach multiple large language models (LLMs) to
collaborate by interleaving their generations at the token level. We model the
decision of which LLM generates the next token as a latent variable. By
optimizing the marginal likelihood of a training set under our latent variable
model, the base LLM automatically learns when to generate a token itself and
when to call on one of the “assistant” language models to generate it, all
without direct supervision. Token-level collaboration during decoding allows
for a
direct supervision. Token-level collaboration during decoding allows for a
fusion of each model's expertise in a manner tailored to the specific task at
hand. Our collaborative decoding is especially useful in cross-domain settings
where a generalist base LLM learns to invoke domain expert models. On
instruction-following, domain-specific QA, and reasoning tasks, we show that
the performance of the joint system exceeds that of the individual models.
Through qualitative analysis of the learned latent decisions, we show models
trained with our method exhibit several interesting collaboration patterns,
e.g., template-filling. Our code is available at
https://github.com/clinicalml/co-llm.
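
To make the mechanism concrete, below is a minimal sketch of the latent-variable formulation described in the abstract, with toy stand-in models. All function names (`base_next_token_probs`, `assistant_next_token_probs`, `defer_prob`) are hypothetical illustrations, not the authors' released implementation (see the repository above); the deferral schedule and the thresholded greedy decoding rule are simplifying assumptions.

```python
# Sketch of latent-variable collaborative decoding: a per-token latent
# variable Z_t selects which model emits the next token. Toy stand-ins
# replace the actual LLMs; interfaces are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 16  # toy vocabulary size


def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()


def base_next_token_probs(prefix):
    # Stand-in for the base LLM's next-token distribution P_base(x_t | x_<t).
    return softmax(rng.normal(size=VOCAB))


def assistant_next_token_probs(prefix):
    # Stand-in for a domain-expert assistant's distribution P_asst(x_t | x_<t).
    return softmax(rng.normal(size=VOCAB))


def defer_prob(prefix):
    # Stand-in for the learned head on the base model giving
    # P(Z_t = assistant | x_<t). In the paper this is learned without direct
    # supervision by maximizing the marginal likelihood below; here it is a
    # fixed toy schedule so the demo visibly interleaves both models.
    return 0.8 if len(prefix) % 3 == 0 else 0.2


def marginal_log_likelihood(tokens):
    # log P(x) = sum_t log[(1 - p_t) P_base(x_t | x_<t) + p_t P_asst(x_t | x_<t)],
    # i.e., the per-token latent variable Z_t is marginalized out, which is
    # what lets the deferral head be trained without direct supervision.
    ll = 0.0
    for t, tok in enumerate(tokens):
        prefix = tokens[:t]
        p = defer_prob(prefix)
        mix = ((1 - p) * base_next_token_probs(prefix)[tok]
               + p * assistant_next_token_probs(prefix)[tok])
        ll += np.log(mix)
    return ll


def collaborative_decode(max_len=8, threshold=0.5):
    # Greedy decoding: at each step the latent decision picks which model
    # emits the next token, interleaving generations at the token level.
    tokens = []
    for _ in range(max_len):
        use_assistant = defer_prob(tokens) > threshold
        probs = (assistant_next_token_probs if use_assistant
                 else base_next_token_probs)(tokens)
        tokens.append(int(np.argmax(probs)))
    return tokens


print("decoded tokens:", collaborative_decode())
print("marginal log-likelihood:", marginal_log_likelihood([1, 2, 3]))
```

Thresholding the deferral probability is one simple decoding rule consistent with the description above; the exact decision rule used in the paper may differ.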