Annotating affective dimensions in user-generated content

Language Resources and Evaluation (2021)

Abstract
In an era where user-generated content is becoming ever more prevalent, reliable methods are needed to judge the emotional properties of these complex texts, for example when developing corpora for machine learning. In this study, we focus on Dutch Twitter messages, a genre that is high in emotional content and frequently investigated in computational linguistics. We compare three methods for annotating the emotional dimensions valence, arousal and dominance in 300 tweets: rating scales, pairwise comparison and best–worst scaling. We evaluate the annotation methods on the criterion of inter-annotator agreement, based on judgments from 18 annotators in total. On this dataset, best–worst scaling yields the highest inter-annotator agreement. The difference in agreement is largest for dominance and smallest for valence, suggesting that the benefit of best–worst scaling becomes more pronounced as the annotation task gets more difficult. However, we also find that best–worst scaling is considerably more time-consuming than rating scale and pairwise comparison annotations. This leads us to conclude that, particularly when building computational models, a trade-off between annotation quality and annotation cost needs to be assessed.
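In best–worst scaling, annotators see small sets of items (typically 4-tuples) and mark only the best and the worst item on the target dimension; real-valued scores are then commonly derived by simple counting (score = proportion of times an item was chosen best minus the proportion it was chosen worst). The sketch below illustrates that standard counting procedure; the tuple size, variable names and sample data are illustrative, not taken from the paper:

```python
from collections import defaultdict

def bws_scores(annotations):
    """Derive best-worst scaling scores by counting:
    score(item) = (#times best - #times worst) / #times item appeared.
    `annotations` is a list of (items, best, worst) judgments,
    where `items` is the tuple shown to the annotator."""
    best = defaultdict(int)
    worst = defaultdict(int)
    appeared = defaultdict(int)
    for items, b, w in annotations:
        for item in items:
            appeared[item] += 1
        best[b] += 1
        worst[w] += 1
    # Scores fall in [-1, 1]: always-worst items get -1, always-best items get 1.
    return {item: (best[item] - worst[item]) / appeared[item]
            for item in appeared}

# Illustrative example: two annotators judge the same 4-tuple of tweets
# on, e.g., valence (tweet IDs are hypothetical).
annotations = [
    (("t1", "t2", "t3", "t4"), "t1", "t4"),
    (("t1", "t2", "t3", "t4"), "t2", "t4"),
]
print(bws_scores(annotations))
```

With overlapping tuples across annotators, these per-item scores form a real-valued scale comparable to rating-scale averages, which is what makes agreement between the methods measurable.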
Keywords
User-generated content, Emotion annotation, Best–worst scaling