Automated characterization of mouth activity for stress and anxiety assessment

2016 IEEE International Conference on Imaging Systems and Techniques (IST)

Abstract
Non-verbal information portrayed by human facial expressions encompasses, apart from emotional cues, information relevant to psychophysical status. Mouth activity in particular has been found to correlate with signs of several conditions; depressed people smile less, while fatigued people yawn more. In this paper, we present a semi-automated, robust and efficient algorithm for extracting mouth activity from video recordings based on Eigen-features and template matching. The algorithm was evaluated for mouth openings and mouth deformations on a minimum-specification dataset of 640×480 resolution at 15 fps. The extracted features were the signals of mouth expansion (openness estimation) and correlation (deformation estimation). The achieved classification accuracy reached 89.17%. A second series of experiments, for the preliminary evaluation of the proposed algorithm in assessing stress/anxiety, was conducted on an additional dataset. The proposed algorithm showed consistent performance across both datasets, indicating high robustness. Furthermore, normalized openings per minute and average openness intensity were extracted as video-based features, revealing a significant difference between video recordings of stressed/anxious versus relaxed subjects.
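The abstract's deformation signal is a correlation between the current mouth region and a reference template. The paper's exact pipeline (Eigen-features plus template matching) is not reproduced here; the following is only a minimal pure-Python sketch of the underlying idea, using normalized cross-correlation between two equal-size grayscale patches, with the function name `ncc` and the list-of-lists patch format being illustrative assumptions:

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation between two equal-size grayscale
    patches given as lists of rows. Returns a value in [-1, 1]:
    values near 1 mean the current mouth region still matches the
    neutral-mouth template, while a drop in the signal over time
    suggests a deformation such as an opening or a smile.
    (Illustrative sketch only, not the paper's implementation.)"""
    a = [p for row in patch for p in row]
    b = [t for row in template for t in row]
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    den_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    if den_a == 0 or den_b == 0:
        return 0.0  # flat patch: correlation undefined, report no match
    return num / (den_a * den_b)
```

Evaluated frame by frame, this yields a per-frame correlation signal; thresholding its drops would give a crude deformation detector in the spirit of the abstract's "correlation (deformation estimation)" feature.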
Keywords
mouth gesture recognition,image processing,stress,anxiety,automatic assessment