Applying Speech Derived Breathing Patterns to Automatically Classify Human Confidence

2023 31st European Signal Processing Conference (EUSIPCO), 2023

Abstract
Non-verbal expressions in speech are used to understand a spectrum of human behavioural parameters, one of which is confidence. Several speech representation techniques, from hand-crafted features to auto-encoder representations, have been explored for mining such information. We introduce a deep network, trained on data from 100 speakers, for extracting breathing patterns from speech signals. This network achieves an average Pearson's correlation coefficient of 0.61 and a breaths-per-minute error of 2.5 across the 100 speakers. In this paper, we propose the novel use of speech-derived breathing patterns as the feature set for binary classification of confidence levels. A classification model trained on data from 51 interview candidates achieves an average AUC of 76% in separating confident speakers from non-confident ones using breathing patterns as the feature set. Compared with Mel-frequency cepstral coefficients and auto-encoder representations, this corresponds to an absolute improvement of 8% and 5%, respectively.
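The sketch below is a minimal illustration of the evaluation pipeline the abstract describes: scoring a predicted breathing signal against a reference with Pearson's correlation and a breaths-per-minute error, then using simple statistics of breathing signals as features for a binary confidence classifier scored with ROC-AUC. It uses synthetic signals and labels, an assumed sample rate, and a generic logistic-regression classifier; it is not the authors' network, feature set, or data.

```python
# Minimal sketch (assumptions: synthetic breathing signals, 25 Hz sample rate,
# toy feature set, logistic-regression classifier). Illustrates the metrics
# named in the abstract: Pearson's r, breaths-per-minute error, and ROC-AUC.
import numpy as np
from scipy.stats import pearsonr
from scipy.signal import find_peaks
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 25          # breathing-signal sample rate in Hz (assumed)
DURATION = 60    # seconds per speaker

def breaths_per_minute(signal, fs=FS):
    """Count inhalation peaks and convert to breaths per minute."""
    peaks, _ = find_peaks(signal, distance=fs * 2)  # assume >= 2 s between breaths
    return len(peaks) * 60.0 / (len(signal) / fs)

def breathing_features(signal, fs=FS):
    """Toy feature vector: breathing rate, amplitude variability, mean depth."""
    return np.array([breaths_per_minute(signal, fs), signal.std(), signal.mean()])

# --- synthetic "reference" vs. "predicted" breathing signals for one speaker ---
t = np.arange(0, DURATION, 1.0 / FS)
reference = np.sin(2 * np.pi * 0.25 * t)                    # ~15 breaths/min
predicted = reference + 0.3 * rng.standard_normal(t.size)   # noisy estimate

r, _ = pearsonr(reference, predicted)
bpm_err = abs(breaths_per_minute(reference) - breaths_per_minute(predicted))
print(f"Pearson r = {r:.2f}, breaths-per-minute error = {bpm_err:.1f}")

# --- toy binary confidence classification from breathing-pattern features ---
n_speakers = 51
X, y = [], []
for i in range(n_speakers):
    confident = i % 2                        # synthetic label
    rate = 0.20 + 0.05 * confident           # breathing rate differs by class
    sig = np.sin(2 * np.pi * rate * t) + 0.2 * rng.standard_normal(t.size)
    X.append(breathing_features(sig))
    y.append(confident)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```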
Keywords
breathing, affective computing, time-series analysis, computational paralinguistics, human confidence classification