Multi-Modal Analysis of Human Computer Interaction Using Automatic Inference of Aural Expressions in Speech

2008 IEEE International Conference on Systems, Man and Cybernetics (SMC), Vols 1-6 (2008)

Abstract
This paper presents a multi-modal analysis of human-computer interactions based on automatic inference of expressions in speech. It describes an automatic inference system that recognizes aural expressions of emotions, complex mental states, and expression mixtures. The implementation is based on the observation that different vocal features distinguish different expressions. The system was trained on an English database (MindReading) and then applied to a Hebrew multi-modal database of naturally evoked expressions (Doors). This paper describes the statistical and dynamic analysis of sustained interactions from the Doors database. The analysis is based on the correlation of the inferred expressions with events, physiological cues such as galvanic skin response, and behavioural cues. The presented analysis indicates that the vocal expressions of complex mental states such as thinking, certainty, and interest are not necessarily unique to one language and culture. The system provides an analysis tool for sustained human-computer interactions.
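The abstract describes correlating inferred expressions with physiological cues such as galvanic skin response (GSR). As a minimal illustrative sketch of that kind of analysis (not the authors' actual pipeline), the snippet below pairs a hypothetical per-window probability for one inferred expression with a synchronized GSR signal and computes their Pearson correlation; all variable names and data here are placeholder assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-window inference output: probability that the
# "thinking" expression is present in each analysis window, alongside
# a GSR signal resampled to the same window rate. Both are synthetic.
rng = np.random.default_rng(0)
thinking_prob = rng.uniform(0.0, 1.0, size=300)            # placeholder classifier output
gsr = 0.4 * thinking_prob + rng.normal(0.0, 0.2, size=300)  # synthetic GSR trace

# Correlate the inferred expression intensity with the physiological cue.
r, p_value = pearsonr(thinking_prob, gsr)
print(f"Pearson r = {r:.3f}, p = {p_value:.3g}")
```

In practice the same correlation could be computed per interaction event or per session to study how vocal expressions co-vary with physiological and behavioural cues over a sustained interaction.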
Keywords
stress, games, speech, gain, databases, human computer interaction, speech processing, galvanic skin response, dynamic analysis, classification algorithms, feature extraction, modal analysis