Tower: An Open Multilingual Large Language Model for Translation-Related Tasks
CoRR (2024)
Abstract
While general-purpose large language models (LLMs) demonstrate proficiency on
multiple tasks within the domain of translation, approaches based on open LLMs
are competitive only when specializing in a single task. In this paper, we
propose a recipe for tailoring LLMs to multiple tasks present in translation
workflows. We perform continued pretraining on a multilingual mixture of
monolingual and parallel data, creating TowerBase, followed by finetuning on
instructions relevant to translation processes, creating TowerInstruct. Our
final model surpasses open alternatives on several tasks relevant to
translation workflows and is competitive with general-purpose closed LLMs. To
facilitate future research, we release the Tower models, our specialization
dataset, an evaluation framework for LLMs focusing on the translation
ecosystem, and a collection of model generations, including ours, on our
benchmark.