Constrained Decoding for Cross-lingual Label Projection
CoRR (2024)
Abstract
Zero-shot cross-lingual transfer utilizing multilingual LLMs has become a
popular learning paradigm for low-resource languages with no labeled training
data. However, for NLP tasks that involve fine-grained predictions on words and
phrases, the performance of zero-shot cross-lingual transfer learning lags far
behind supervised fine-tuning methods. Therefore, it is common to exploit
translation and label projection to further improve the performance by (1)
translating training data that is available in a high-resource language (e.g.,
English) together with the gold labels into low-resource languages, and/or (2)
translating test data in low-resource languages to a high-resource language to
run inference on, then projecting the predicted span-level labels back onto the
original test data. However, state-of-the-art marker-based label projection
methods suffer from translation quality degradation due to the extra label
markers injected in the input to the translation model. In this work, we
explore a new direction that leverages constrained decoding for label
projection to overcome the aforementioned issues. Our new method not only can
preserve the quality of translated texts but also has the versatility of being
applicable to both translating training and translating test data strategies.
This versatility is crucial as our experiments reveal that translating test
data can lead to a considerable boost in performance compared to translating
only training data. We evaluate on two cross-lingual transfer tasks, namely
Named Entity Recognition and Event Argument Extraction, spanning 20 languages.
The results demonstrate that our approach outperforms the state-of-the-art
marker-based method by a large margin and also shows better performance than
other label projection methods that rely on external word alignment.
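To make the marker-based baseline concrete, the sketch below illustrates the injection/recovery steps the abstract describes: gold spans are wrapped in marker symbols before translation, and spans are recovered from the marker positions in the translated output. This is a minimal illustrative sketch of the baseline, not the paper's constrained-decoding method; the bracket markers and the example sentences are assumptions (in practice, per-label markers and a real translation model would be used, and the injected markers are exactly what can degrade translation quality).

```python
import re

def inject_markers(text, spans):
    """Wrap each gold (start, end) span in bracket markers before translation.

    A single unlabeled marker pair is used here for simplicity; a real
    marker-based system would use distinct markers per entity label.
    """
    out, prev = [], 0
    for start, end in sorted(spans):
        out.append(text[prev:start])
        out.append(f"[{text[start:end]}]")
        prev = end
    out.append(text[prev:])
    return "".join(out)

def extract_spans(translated):
    """Recover marked spans from a (hypothetical) marker-preserving translation.

    Returns the cleaned sentence with markers removed, plus the character
    offsets of each projected span in that cleaned sentence.
    """
    spans, clean, pos = [], [], 0
    # re.split with a capturing group keeps the bracketed pieces in the result.
    for piece in re.split(r"(\[[^\]]*\])", translated):
        if piece.startswith("[") and piece.endswith("]"):
            inner = piece[1:-1]
            spans.append((pos, pos + len(inner)))
            clean.append(inner)
            pos += len(inner)
        else:
            clean.append(piece)
            pos += len(piece)
    return "".join(clean), spans

marked = inject_markers("Obama visited Paris", [(0, 5), (14, 19)])
# marked == "[Obama] visited [Paris]" would be fed to the translation model;
# the French output below is a hand-written stand-in for model output.
clean, spans = extract_spans("[Obama] a visité [Paris]")
```

The constrained-decoding approach proposed in the paper avoids this marker injection entirely: the source text is translated unmodified, so translation quality is preserved, and the label spans are located in the output by a separate constrained search instead.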
Keywords
constrained decoding, label projection