Enolp musk@SMM4H'22: Leveraging Pre-trained Language Models for Stance And Premise Classification

Millon Das, Archit Mangrulkar, Ishan Manchanda, Manav Kapadnis, Sohan Patnaik

International Conference on Computational Linguistics (2022)

Abstract
This paper covers our approaches for the Social Media Mining for Health (SMM4H) 2022 Shared Tasks 2a and 2b. Apart from the baseline architectures, we experiment with part-of-speech (PoS), dependency parsing, and TF-IDF features. Additionally, we perform contrastive pretraining on our best models using a supervised contrastive loss function. In both tasks, we outperformed the mean and median scores and ranked first on the validation set. For stance classification, we achieved an F1-score of 0.636 using the CovidTwitterBERT model, while for premise classification, we achieved an F1-score of 0.664 using the BART-base model on the test dataset.
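The abstract mentions a supervised contrastive loss for pretraining but does not spell it out. Below is a minimal sketch, assuming the standard supervised contrastive (SupCon) formulation of Khosla et al. (2020) over a batch of sentence embeddings; the function name, temperature default, and batch handling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Supervised contrastive loss in the style of Khosla et al. (2020).

    embeddings: (n, d) sentence representations for one batch.
    labels:     (n,) integer class labels (e.g., stance or premise labels).
    """
    z = F.normalize(embeddings, dim=1)            # work in cosine-similarity space
    sim = z @ z.T / temperature                   # (n, n) pairwise similarity logits
    n = sim.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(eye, float("-inf"))     # exclude self-comparisons

    # Positives: other examples in the batch sharing the anchor's label.
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye

    # Log-softmax over each anchor's row, then average over its positives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1).clamp(min=1)      # guard against division by zero
    mean_log_prob = log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos_counts

    # Anchors with no in-batch positive carry no signal; average over the rest.
    has_pos = pos.any(dim=1)
    return -mean_log_prob[has_pos].mean()
```

In a setup like the one the abstract describes, the encoder (e.g., CovidTwitterBERT or BART-base) would first be pretrained with this loss so that same-class tweets cluster in embedding space, then fine-tuned with a standard cross-entropy classification head.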