IA-BERT: context-aware sarcasm detection by incorporating incongruity attention layer for feature extraction.

Ida Ayu Putu Ari Crisdayanti, JinYeong Bak, YunSeok Choi, Jee-Hyong Lee

ACM Symposium on Applied Computing (SAC), 2022

Abstract
Sarcasm, as a form of figurative language, has been widely used to implicitly convey an offensive opinion. Previous studies have mostly tried to identify sarcasm directly from the tokens within a text, but this is insufficient because sarcasm, unlike polarized sentences, is not tied to a specific vocabulary. In threads or discussions especially, sarcasm can often only be recognized with context from earlier replies. To this end, we propose IA-BERT, a model architecture that uses contextual information to identify the incongruity features underlying sarcastic texts. IA-BERT is equipped with a feature attention layer that combines features extracted from the response alone with interactive features obtained from the context and the response. The model leverages pretrained BERT embeddings and improves over a standard fine-tuned BERT classifier. IA-BERT also outperforms the more sophisticated LCF-BERT architecture in accuracy and F1-score.
Keywords
Sarcasm Detection, Context-Aware, Incongruity Attention
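
To make the feature fusion described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a context-aware sarcasm classifier: a shared BERT encoder produces one feature vector from the response alone and one from the context-response pair, and a small attention layer weights the two views before classification. The class name, the exact fusion mechanism, and the example texts are illustrative assumptions, not the paper's actual incongruity attention layer.

```python
# Minimal sketch (assumed, not the paper's exact architecture): a shared BERT
# encoder yields a response-only view and a context-response view, and a
# learned attention weighting fuses the two before classification.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class IncongruityAttentionClassifier(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased", num_labels: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Scores how much each feature view (response-only vs. interactive)
        # contributes to the fused representation.
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, response_inputs, pair_inputs):
        # View 1: features from the response alone.
        resp_feat = self.bert(**response_inputs).pooler_output   # (B, H)
        # View 2: interactive features from the context-response pair.
        pair_feat = self.bert(**pair_inputs).pooler_output       # (B, H)
        views = torch.stack([resp_feat, pair_feat], dim=1)       # (B, 2, H)
        weights = torch.softmax(self.attn(views), dim=1)         # (B, 2, 1)
        fused = (weights * views).sum(dim=1)                     # (B, H)
        return self.classifier(fused)                            # (B, num_labels)


if __name__ == "__main__":
    tok = BertTokenizer.from_pretrained("bert-base-uncased")
    model = IncongruityAttentionClassifier()
    context = "What a great idea to schedule the meeting at 6 am."
    response = "Yeah, I just love waking up before sunrise."
    response_inputs = tok(response, return_tensors="pt")
    pair_inputs = tok(context, response, return_tensors="pt")    # context + response pair
    logits = model(response_inputs, pair_inputs)
    print(logits.softmax(dim=-1))                                # class probabilities
```

Sharing a single BERT encoder across both views keeps the parameter count close to a standard fine-tuned BERT classifier; the published model may instead use separate encoders or a richer attention design.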