Roman Urdu Sentiment Analysis Using Pre-trained DistilBERT and XLNet

Nikhar Azhar, Seemab Latif

2022 Fifth International Conference of Women in Data Science at Prince Sultan University (WiDS PSU) (2022)

Cited by 2
Abstract
Roman Urdu is a resource-poor language, so training a deep learning model from scratch is impractical for lack of a large dataset; transfer learning offers a way around this limitation. Using Hugging Face's pre-trained transformer models DistilBERT and XLNet, we see a large improvement in results compared with the popular machine learning models Logistic Regression and Naïve Bayes.
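For context, the classical baseline the abstract compares against (Logistic Regression over sparse text features) can be sketched in a few lines of scikit-learn. The toy Roman Urdu sentences, labels, and the TF-IDF feature choice below are illustrative assumptions, not the paper's actual corpus or configuration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative toy data (NOT the paper's dataset): 1 = positive, 0 = negative.
train_texts = [
    "ye film bohat achi hai",        # "this film is very good"
    "kya zabardast kahani hai",      # "what a great story"
    "acting kamaal ki thi",          # "the acting was amazing"
    "mujhe ye bilkul pasand nahi",   # "I did not like this at all"
    "time ka zaya hai ye movie",     # "this movie is a waste of time"
    "bohat bori film thi",           # "it was a very bad film"
]
train_labels = [1, 1, 1, 0, 0, 0]

# Word-level TF-IDF unigrams/bigrams feeding a Logistic Regression classifier.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
baseline.fit(train_texts, train_labels)

preds = baseline.predict(["film bohat achi thi", "bilkul pasand nahi aya"])
print(preds)
```

The paper's contribution is to replace such sparse lexical features with representations from fine-tuned DistilBERT and XLNet, which transfer knowledge from large-scale pre-training to the low-resource Roman Urdu setting.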
Keywords
Natural Language Processing,Roman Urdu,Sentiment Analysis,BERT,XLNet