Zero-shot Sentiment Analysis in Low-Resource Languages Using a Multilingual Sentiment Lexicon
CoRR (2024)
Abstract
Improving multilingual language models' capabilities in low-resource languages
is generally difficult due to the scarcity of large-scale data in those
languages. In this paper, we relax the reliance on texts in low-resource
languages by using multilingual lexicons in pretraining to enhance multilingual
capabilities. Specifically, we focus on zero-shot sentiment analysis tasks
across 34 languages, including 6 high/medium-resource languages, 25
low-resource languages, and 3 code-switching datasets. We demonstrate that
pretraining using multilingual lexicons, without using any sentence-level
sentiment data, achieves superior zero-shot performance compared to models
fine-tuned on English sentiment datasets, and large language models like
GPT-3.5, BLOOMZ, and XGLM. These findings hold across settings ranging from
unseen low-resource languages to code-mixed scenarios involving high-resource
languages.
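The abstract does not spell out the pretraining recipe, but the core idea is that word-level (word, polarity) lexicon entries can stand in for sentence-level sentiment data. Below is a minimal sketch of that idea, assuming a toy lexicon format and an off-the-shelf multilingual encoder (`bert-base-multilingual-cased`); it is an illustration under those assumptions, not the authors' implementation.

```python
# Sketch: fine-tune a multilingual encoder on lexicon entries alone,
# then apply it zero-shot to sentences. Lexicon format is an assumption.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "bert-base-multilingual-cased"  # any multilingual encoder works
LABELS = {"negative": 0, "positive": 1}

class LexiconDataset(Dataset):
    """Wraps (word, polarity) pairs from a multilingual sentiment lexicon."""
    def __init__(self, entries, tokenizer):
        self.entries, self.tokenizer = entries, tokenizer
    def __len__(self):
        return len(self.entries)
    def __getitem__(self, idx):
        word, polarity = self.entries[idx]
        enc = self.tokenizer(word, truncation=True, padding="max_length",
                             max_length=16, return_tensors="pt")
        return ({k: v.squeeze(0) for k, v in enc.items()},
                torch.tensor(LABELS[polarity]))

# Toy multilingual entries; a real lexicon would have many thousands.
entries = [("good", "positive"), ("mauvais", "negative"),
           ("bueno", "positive"), ("schlecht", "negative")]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
loader = DataLoader(LexiconDataset(entries, tokenizer), batch_size=8,
                    shuffle=True)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for inputs, labels in loader:
    loss = model(**inputs, labels=labels).loss  # word-level supervision only
    loss.backward()
    optim.step()
    optim.zero_grad()

# Zero-shot inference on a full sentence (no sentence-level training data).
model.eval()
enc = tokenizer("Das Essen war schlecht.", return_tensors="pt")
pred = model(**enc).logits.argmax(-1).item()
print("positive" if pred == 1 else "negative")
```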