Distinction Of Stress And Non-Stress Tasks Using Facial Action Units

ICMI '18: Proceedings of the 20th International Conference on Multimodal Interaction: Adjunct (2018)

Abstract
Long-term exposure to stress is known to lead to physical and mental health problems. But how can we, as individuals, track and monitor our stress? Wearables that measure heart rate variability have been studied as a means of detecting stress. Such devices, however, need to be worn all day and can be expensive. As an alternative, we propose the use of frontal face videos to distinguish between stressful and non-stressful activities. Affordable personal tracking of stress levels could be obtained by analyzing the video stream of built-in laptop cameras. In this work, we present a preliminary analysis of 114 one-hour-long videos. During each video, the subject performs a typing exercise before and after being exposed to a stressor. We performed binary classification using a Random Forest (RF) to distinguish between stressful and non-stressful activities, using facial action units (AUs) extracted from each video frame as features. We obtained average accuracies of over 97% and 50% for subject-dependent and subject-independent classification, respectively.
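The setup described above (per-frame AU features fed to a Random Forest for binary stress classification) can be sketched as follows. This is a minimal illustration, not the authors' code: the AU extraction step (e.g. with a tool such as OpenFace) is assumed to have already produced a frames-by-AUs matrix, and synthetic data stands in for real AU intensities.

```python
# Hypothetical sketch of the paper's setup: per-frame facial Action Unit (AU)
# intensities as features, Random Forest as the binary classifier.
# Synthetic data stands in for AUs extracted from video frames.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for AU intensities: 17 AUs per frame, two conditions.
n_frames, n_aus = 1000, 17
X_calm = rng.normal(0.0, 1.0, size=(n_frames, n_aus))
X_stress = rng.normal(0.5, 1.0, size=(n_frames, n_aus))  # shifted AU means
X = np.vstack([X_calm, X_stress])
y = np.array([0] * n_frames + [1] * n_frames)  # 0 = non-stress, 1 = stress

# Subject-dependent setting: frames from the same subject can land in both
# train and test splits, which inflates accuracy relative to the
# subject-independent (leave-subject-out) setting reported in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"frame-level accuracy: {acc:.2f}")
```

The gap between the reported 97% (subject-dependent) and 50% (subject-independent) accuracies mirrors the split choice in the sketch: a random frame-level split leaks subject-specific facial patterns into the test set, whereas holding out whole subjects does not.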
Keywords
stress detection, affective computing, facial action units