An Ensemble BERT Model for English-Hindi-Marathi Multilingual Question Answering

Lecture Notes in Networks and Systems (2023)

Abstract
Multilingual Question Answering (MQA) is an NLP application that retrieves an accurate answer to a user's question from a given context, regardless of the language of the question or the context. Multilingual cased BERT models currently perform well across all fine-tuning settings for multilingual NLP tasks, such as zero-shot, monolingual, cross-lingual, and multilingual settings. Two recent such models are mBERT and IndicBERT, pre-trained on 104 languages of the world and 17 Indian languages, respectively. In this work, mBERT and IndicBERT are ensembled and then fine-tuned on the in-house ILMQuAD dataset for all three languages. The results show that the fine-tuned ensemble BERT model improves performance and sets a new state of the art for all three languages. The proposed ensemble BERT model is also evaluated on the MMQA, Translated SQuAD, MLQA, and XQuAD evaluation datasets for the English-Hindi MQA task.
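The abstract does not specify how the two models are combined. A common ensembling scheme for extractive QA is to average the per-token start and end logits of the two models and then select the highest-scoring valid answer span. The sketch below illustrates this idea in pure Python with hypothetical logit values standing in for mBERT and IndicBERT outputs; it is an assumption about the mechanism, not the paper's confirmed method.

```python
def ensemble_answer_span(starts_a, ends_a, starts_b, ends_b, max_len=15):
    """Average per-token start/end logits from two QA models and return
    the (start, end) token indices of the best valid answer span.

    starts_a / ends_a: logits from model A (e.g. mBERT)   -- hypothetical
    starts_b / ends_b: logits from model B (e.g. IndicBERT) -- hypothetical
    max_len: maximum allowed span length in tokens.
    """
    n = len(starts_a)
    # Simple ensemble: element-wise mean of the two models' logits.
    starts = [(a + b) / 2 for a, b in zip(starts_a, starts_b)]
    ends = [(a + b) / 2 for a, b in zip(ends_a, ends_b)]

    best, best_score = (0, 0), float("-inf")
    for s in range(n):
        # Only consider spans where end >= start and length <= max_len.
        for e in range(s, min(s + max_len, n)):
            score = starts[s] + ends[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best


# Toy example: both models agree the answer starts at token 2;
# their averaged end logits peak at token 3.
span = ensemble_answer_span(
    starts_a=[0, 1, 5, 0], ends_a=[0, 0, 1, 4],
    starts_b=[0, 2, 4, 0], ends_b=[0, 0, 2, 5],
)
print(span)  # (2, 3)
```

In practice the logits would come from two fine-tuned `*ForQuestionAnswering` heads run over the same tokenized input, which requires aligning the two tokenizers' outputs before averaging.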
Keywords
ensemble BERT model, English-Hindi-Marathi