Detection and analysis model for grammatical facial expressions in sign language

2016 IEEE Region 10 Symposium (TENSYMP), 2016

Cited by 6
Abstract
The proposed research explores a relatively new area: detecting expressions from facial points in a sign language, with the goal of enhancing computer interaction with the deaf and hard of hearing. Unlike the numerous gesture-based studies of sign language, this research focuses on facial points captured by a Kinect sensor as the basis for expression detection. This also eases deployment on smartphones, where capturing facial points is more feasible than capturing hand gestures. Exhaustive experiments are carried out with ten different machine learning algorithms to detect nine types of expression, each modeled as a separate binary classification problem. This is done in both user-dependent and user-independent scenarios. The optimal classifier for each expression outperforms the current state-of-the-art techniques, achieving an ROC area greater than 0.95 for every expression. The user-independent model's performance is found to be comparable to the user-dependent model's; it is therefore the recommended choice, being easier and more efficient to deploy in practical applications. Finally, the importance of each facial point in detecting each type of expression is mined, which can be instrumental for future research and for various applications that use facial points as a basis for decision making.
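The per-expression setup described above can be sketched in a few lines: one binary classifier per expression over a feature matrix of flattened facial-point coordinates, evaluated by ROC area, with per-point importances extracted afterwards. This is a minimal illustration, not the paper's exact pipeline; the array shapes, the random-forest choice, the synthetic labels, and the expression names are all assumptions for demonstration.

```python
# Hedged sketch of per-expression binary classification over Kinect facial
# points. All data here is synthetic; shapes, classifier, and expression
# names are illustrative assumptions, not the paper's actual configuration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_frames, n_points = 600, 100                    # frames, facial points per frame
X = rng.normal(size=(n_frames, n_points * 3))    # flattened (x, y, z) per point

# Hypothetical subset of the nine expression types, each a binary problem.
expressions = ["wh_question", "yes_no_question", "negation"]
labels = {e: rng.integers(0, 2, size=n_frames) for e in expressions}

for expr in expressions:
    y = labels[expr]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    # ROC area from predicted probabilities of the positive class.
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    # feature_importances_ gives one score per coordinate; aggregating the
    # three coordinates of each point ranks facial points per expression.
    point_importance = clf.feature_importances_.reshape(n_points, 3).sum(axis=1)
```

With real labeled frames in place of the synthetic data, the same loop would yield the per-expression optimal classifier and the per-point importance ranking the abstract refers to.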
Keywords
grammatical facial expression detection, sign language, facial points, computer interaction, deaf, Kinect, smart phones, machine learning algorithms, binary classification problem, user dependent model, user independent model, decision making