Micro-expression Video Clip Synthesis Method based on Spatial-temporal Statistical Model and Motion Intensity Evaluation Function

2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)(2020)

Abstract
Micro-expression (ME) recognition is an effective means of detecting lies and other subtle human emotions. Machine learning-based and deep learning-based models have achieved remarkable results recently. However, these models are vulnerable to overfitting because ME video clips are scarce: they are much harder to collect and annotate than ordinary expression videos, which limits further improvement in recognition performance. To address this issue, we propose a micro-expression video clip synthesis method based on a spatial-temporal statistical model and a motion intensity evaluation function. In our scheme, we establish a micro-expression spatial-temporal statistical model (MSTSM) by analyzing the dynamic characteristics of micro-expressions and use this model to provide the rules for micro-expression video synthesis. In addition, we design a motion intensity evaluation function (MIEF) to ensure that the intensity of facial motion in the synthesized video clips is consistent with that in real MEs. Finally, facial video clips with MEs of new subjects can be generated by applying the MIEF together with the widely used 3D facial morphable model and the rules provided by the MSTSM. Experimental results demonstrate that micro-expression recognition accuracy can be effectively improved by adding the synthesized video clips generated by our method.
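The abstract does not specify the form of the MIEF, but its stated role is to keep the motion intensity of synthesized clips consistent with that of real MEs. A minimal sketch of such a consistency check, using mean inter-frame difference as an intensity proxy (the function names, the proxy itself, and the tolerance band are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

def motion_intensity(clip):
    """Mean absolute inter-frame difference as a simple motion-intensity proxy.

    `clip` is an array of shape (T, H, W): T grayscale frames in [0, 1].
    """
    diffs = np.abs(np.diff(clip.astype(np.float64), axis=0))
    return diffs.mean()

def intensity_consistent(synth_clip, real_clips, tol=0.5):
    """Accept a synthesized clip only if its intensity lies within a
    tolerance band around the mean intensity of real ME clips.

    `tol` is a relative tolerance (assumed value, not from the paper).
    """
    real_mean = np.mean([motion_intensity(c) for c in real_clips])
    return abs(motion_intensity(synth_clip) - real_mean) <= tol * real_mean
```

In a synthesis pipeline of this kind, candidate clips produced by the 3D morphable model under the MSTSM rules would be kept or discarded depending on whether such a check passes.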
Keywords
micro-expression video clip synthesis, micro-expression recognition, spatial and temporal statistical model, motion intensity evaluation, 3D facial morphable model