Consistent But Modest: A Meta-Analysis On Unimodal And Multimodal Affect Detection Accuracies From 30 Studies

ICMI-MLMI (2012)

Cited by 67 | Viewed 20
Abstract
The recent influx of multimodal affect classifiers raises the important question of whether these classifiers yield accuracy rates that exceed their unimodal counterparts. This question was addressed by performing a meta-analysis on 30 published studies that reported both multimodal and unimodal affect detection accuracies. The results indicated that multimodal accuracies were consistently better than unimodal accuracies and yielded an average 8.12% improvement over the best unimodal classifiers. However, performance improvements were three times lower when classifiers were trained on natural or seminatural data (4.39% improvement) compared to acted data (12.1% improvement). Importantly, performance of the best unimodal classifier explained an impressive 80.6% (cross-validated) of the variance in multimodal accuracy. The results also indicated that multimodal accuracies were substantially higher than accuracies of the second-best unimodal classifiers (an average improvement of 29.4%) irrespective of the naturalness of the training data. Theoretical and applied implications of the findings are discussed.
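The abstract reports two kinds of quantities: the percentage improvement of multimodal over best-unimodal accuracy, and the cross-validated variance in multimodal accuracy explained by the best unimodal classifier. The sketch below (not the authors' code) illustrates, on hypothetical per-study accuracies, how such quantities might be computed; the exact improvement formula and regression procedure used in the paper are not specified in the abstract, so proportional improvement and a leave-one-out cross-validated linear regression are assumed here.

```python
# Minimal sketch on hypothetical data, assuming proportional improvement and a
# leave-one-out cross-validated simple linear regression (not the authors' code).
import numpy as np

# Hypothetical data: one row per study, accuracies in percent.
best_unimodal = np.array([62.0, 71.5, 80.2, 55.3, 68.9])
multimodal    = np.array([68.0, 75.0, 84.1, 60.2, 71.3])

# Proportional improvement of the multimodal classifier over the best unimodal one.
improvement = 100.0 * (multimodal - best_unimodal) / best_unimodal
print(f"mean improvement over best unimodal: {improvement.mean():.2f}%")

def loo_cv_r2(x, y):
    """Leave-one-out cross-validated R^2 for a simple linear regression y ~ x."""
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # hold out study i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        preds[i] = slope * x[i] + intercept    # predict the held-out study
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(f"cross-validated R^2: {loo_cv_r2(best_unimodal, multimodal):.3f}")
```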
Keywords
Affect detection, emotion detection, affective computing, multimodal affect detection, meta-analysis, review