Transfer Learning for Wearable Long-Term Social Speech Evaluations

IEEE Access (2018)

Cited by 11
Abstract
With the increase of stress in work and study environments, mental health has become a major subject in current social interaction research. Researchers generally analyze psychological health states through social perception behavior. Speech signal processing is an important research direction, as it can objectively assess a person's mental health from social sensing through the extraction and analysis of speech features. In this paper, a four-week long-term social monitoring experiment using the proposed wearable device has been conducted. A set of well-being questionnaires administered to a group of students is used to relate physical and mental health to segmented speech-social features in completely natural daily situations. In particular, we have developed transfer learning for acoustic classification. By training the model on the TUT Acoustic Scenes 2017 data set, the model learns basic scene features. Through transfer learning, the model is transferred to the audio segmentation process using only four wearable speech-social features (energy, entropy, brightness, and formant). The obtained results show promise in classifying various acoustic scenes in unconstrained and natural situations using the wearable long-term speech-social data set.
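The two-stage procedure described in the abstract, pretraining on labeled acoustic-scene data and then transferring the learned representation to a new audio classification task driven by only four features (energy, entropy, brightness, formant), can be sketched as follows. This is a minimal illustration with synthetic data and a toy two-layer network, not the paper's actual model; all array names, dimensions, and the frozen-layer transfer strategy are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Stage 1: pretrain a small network on synthetic "scene" data.
# The 4 inputs stand in for energy, entropy, brightness, formant.
n_feat, n_hidden = 4, 16
X_scene = rng.normal(size=(300, n_feat))
y_scene = (X_scene[:, 0] + X_scene[:, 1] > 0).astype(int)  # toy scene labels

W1 = rng.normal(scale=0.1, size=(n_feat, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 2))
b2 = np.zeros(2)

lr = 0.5
for _ in range(200):
    H = relu(X_scene @ W1 + b1)
    P = softmax(H @ W2 + b2)
    dZ2 = (P - np.eye(2)[y_scene]) / len(X_scene)  # cross-entropy gradient
    dH = (dZ2 @ W2.T) * (H > 0)
    W2 -= lr * H.T @ dZ2
    b2 -= lr * dZ2.sum(axis=0)
    W1 -= lr * X_scene.T @ dH
    b1 -= lr * dH.sum(axis=0)

# Stage 2: transfer. Freeze the pretrained layer (W1, b1) and retrain
# only a new classification head on the target "segmentation" task.
X_seg = rng.normal(size=(200, n_feat))
y_seg = (X_seg[:, 2] > 0).astype(int)  # toy target labels
W2t = rng.normal(scale=0.1, size=(n_hidden, 2))
b2t = np.zeros(2)

for _ in range(200):
    H = relu(X_seg @ W1 + b1)          # frozen feature extractor
    P = softmax(H @ W2t + b2t)
    dZ2 = (P - np.eye(2)[y_seg]) / len(X_seg)
    W2t -= lr * H.T @ dZ2
    b2t -= lr * dZ2.sum(axis=0)

acc = (P.argmax(axis=1) == y_seg).mean()  # accuracy on the target task
```

The design point the sketch makes is the one the abstract relies on: the representation learned on a large labeled corpus (here, a stand-in for TUT Acoustic Scenes 2017) is reused unchanged, so the target task needs to fit only a small output head from the limited wearable data.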
Keywords
Long-term social monitoring, psychological, transfer learning