mOthello: When Do Cross-Lingual Representation Alignment and Cross-Lingual Transfer Emerge in Multilingual Models?
arXiv (2024)
Abstract
Many pretrained multilingual models exhibit cross-lingual transfer ability,
which is often attributed to a language-neutral representation learned during
pretraining. However, it remains unclear what factors contribute to the
learning of a language-neutral representation, and whether the learned
language-neutral representation suffices to facilitate cross-lingual transfer.
We propose a synthetic task, Multilingual Othello (mOthello), as a testbed to
delve into these two questions. We find that: (1) models trained with naive
multilingual pretraining fail to learn a language-neutral representation across
all input languages; (2) the introduction of "anchor tokens" (i.e., lexical
items that are identical across languages) helps cross-lingual representation
alignment; and (3) the learning of a language-neutral representation alone is
not sufficient to facilitate cross-lingual transfer. Based on our findings, we
propose a novel approach - multilingual pretraining with unified output space -
that both induces the learning of a language-neutral representation and
facilitates cross-lingual transfer.
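
To make the setup concrete, below is a minimal, hypothetical sketch of how a multilingual Othello vocabulary with "anchor tokens" and a unified output space might be constructed. All names (`make_language_vocab`, `unified_target`, the `anchor_` / `l1_` prefixes) are illustrative assumptions, not the authors' released code; the sketch only shows the idea that anchor squares share one surface form across languages while all languages are trained against the same language-neutral output labels.

```python
# Hypothetical sketch of a multilingual Othello (mOthello) vocabulary.
# Assumption: each "language" renders board squares with its own tokens,
# except for a small set of anchor squares shared across languages.

BOARD_SQUARES = [f"{col}{row}" for col in "ABCDEFGH" for row in range(1, 9)]

def make_language_vocab(lang_id: str, anchor_squares: set[str]) -> dict[str, str]:
    """Map each board square to a language-specific token.

    Squares in `anchor_squares` keep a shared surface form, playing the
    role of anchor tokens that are identical across languages.
    """
    vocab = {}
    for sq in BOARD_SQUARES:
        if sq in anchor_squares:
            vocab[sq] = f"anchor_{sq}"      # same token in every language
        else:
            vocab[sq] = f"{lang_id}_{sq}"   # language-specific token
    return vocab

def encode_game(moves: list[str], vocab: dict[str, str]) -> list[str]:
    """Render one Othello move sequence in a given language's tokens."""
    return [vocab[m] for m in moves]

def unified_target(moves: list[str]) -> list[str]:
    """Unified output space: targets are the language-neutral square ids,
    so every input language shares the same output labels."""
    return list(moves)

# Example: two synthetic languages sharing eight anchor squares.
anchors = set(BOARD_SQUARES[:8])
vocab_l1 = make_language_vocab("l1", anchors)
vocab_l2 = make_language_vocab("l2", anchors)

game = ["D3", "C5", "E6"]
print(encode_game(game, vocab_l1))  # ['l1_D3', 'l1_C5', 'l1_E6']
print(encode_game(game, vocab_l2))  # ['l2_D3', 'l2_C5', 'l2_E6']
print(unified_target(game))         # ['D3', 'C5', 'E6'] shared across languages
```

In this reading, naive multilingual pretraining would instead predict language-specific tokens (the `l1_` / `l2_` forms) for each input language, which is the setting the abstract reports as insufficient for cross-lingual transfer even when representations align.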