Examining Modularity in Multilingual LMs via Language-Specialized Subnetworks
CoRR (2023)
Abstract
Recent work has proposed explicitly inducing language-wise modularity in
multilingual LMs via sparse fine-tuning (SFT) on per-language subnetworks as a
means of better guiding cross-lingual sharing. In this work, we investigate (1)
the degree to which language-wise modularity naturally arises within models
with no special modularity interventions, and (2) how cross-lingual sharing and
interference differ between such models and those with explicit SFT-guided
subnetwork modularity. To quantify language specialization and cross-lingual
interaction, we use a Training Data Attribution method that estimates the
degree to which a model's predictions are influenced by in-language or
cross-language training examples. Our results show that language-specialized
subnetworks do naturally arise, and that SFT, rather than always increasing
modularity, can decrease language specialization of subnetworks in favor of
more cross-lingual sharing.
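As a rough illustration of how such influence scores can be computed, below is a minimal TracIn-style sketch of training data attribution: the influence of a training example on a test prediction is approximated by gradient dot-products summed over saved checkpoints. This is an assumption for illustration only; the paper's exact TDA method may differ, and the names `loss_fn`, `checkpoints`, and `lrs` are hypothetical.

```python
import torch

def tracin_influence(model, loss_fn, train_example, test_example, checkpoints, lrs):
    """Approximate the influence of `train_example` on the model's loss at
    `test_example` via a TracIn-style first-order estimate: the sum over
    checkpoints of lr * <grad(train loss), grad(test loss)>.
    (Hypothetical sketch; not the paper's exact TDA method.)"""
    total = 0.0
    for state_dict, lr in zip(checkpoints, lrs):
        model.load_state_dict(state_dict)
        # Gradient of the training-example loss w.r.t. model parameters.
        g_train = torch.autograd.grad(loss_fn(model, train_example), model.parameters())
        # Gradient of the test-example loss w.r.t. model parameters.
        g_test = torch.autograd.grad(loss_fn(model, test_example), model.parameters())
        # Accumulate the learning-rate-weighted gradient dot-product.
        total += lr * sum((gt * gq).sum() for gt, gq in zip(g_train, g_test)).item()
    return total
```

Aggregating such scores separately over in-language and cross-language training examples would yield the kind of language-specialization and cross-lingual-sharing signal the abstract describes.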