Learning Visual Engagement for Trauma Recovery

2018 IEEE Winter Applications of Computer Vision Workshops (WACVW), 2018

Abstract
Applications ranging from human emotion understanding to e-health are exploring methods to effectively understand user behavior from self-reported questionnaires. However, little is known about non-invasive techniques that use face-based deep-learning models to predict engagement. Current research in visual engagement poses two key questions: 1) how much time do we need to analyze facial behavior for accurate engagement prediction? and 2) which deep-learning approach provides the most accurate predictions? In this paper, we compare simple RNN, GRU, and LSTM models using segments of facial Action Units (AUs) of different lengths. Our experiments show no significant difference in prediction accuracy when using anywhere between 15 and 90 seconds of data. Moreover, the results reveal that simpler recurrent network models are statistically significantly better suited for capturing engagement from AUs.
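
The comparison described in the abstract can be illustrated with a minimal PyTorch sketch (not the authors' implementation): a single recurrent layer over a fixed-length sequence of AU intensities, with a simple RNN, GRU, or LSTM cell swapped in. The number of AUs (17), the hidden size, the 30 fps sampling rate, and the binary engagement label are illustrative assumptions rather than details from the paper.

import torch
import torch.nn as nn

class AUEngagementModel(nn.Module):
    """Single recurrent layer over an AU sequence, followed by a linear classifier."""
    def __init__(self, rnn_type="gru", n_aus=17, hidden=64, n_classes=2):
        super().__init__()
        rnn_cls = {"rnn": nn.RNN, "gru": nn.GRU, "lstm": nn.LSTM}[rnn_type]
        self.rnn = rnn_cls(input_size=n_aus, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time_steps, n_aus)
        out, _ = self.rnn(x)           # out: (batch, time_steps, hidden)
        return self.head(out[:, -1])   # predict engagement from the last time step

# Hypothetical input: a 30-second AU segment sampled at 30 fps -> 900 time steps.
segment = torch.randn(8, 900, 17)       # batch of 8 AU-intensity sequences
for variant in ("rnn", "gru", "lstm"):
    logits = AUEngagementModel(variant)(segment)
    print(variant, logits.shape)        # torch.Size([8, 2]) for each variant

The same sequence length and input dimensionality are used for all three variants, so any accuracy difference in such an experiment would stem from the recurrent cell itself rather than the input representation.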
Keywords
deep-learning models, visual engagement, facial behavior, deep learning approach, trauma recovery, human emotion understanding, user behavior, non-invasive techniques