Multimodal Analysis of Physiological Signals for Wearable-Based Emotion Recognition Using Machine Learning

2022 Computing in Cardiology (CinC)

Abstract
Recent advancements in wearable technology and machine learning have led to increased research interest in using peripheral physiological signals to recognize emotion granularity. In healthcare, an algorithm that classifies emotional content can aid the development of treatment protocols for psychopathology and chronic disease. However, peripheral physiological signals acquired non-invasively are usually of low quality due to low sampling rates, so emotion recognition based on a single physiological signal shows low performance. In this research, we explore multimodal wearable-based emotion recognition using the K-EmoCon dataset. Physiological signals, together with self-reported arousal and valence records, were analyzed with a battery of data mining algorithms including decision trees, support vector machines, k-nearest neighbors, and ensembles. Performance was evaluated using accuracy, true positive rate, and area under the receiver operating characteristic curve. The results support the multimodal approach: an ensemble bagged tree algorithm achieved 83% average accuracy, compared with 56.1% for emotion recognition based on heart rate alone. Emotion granularity can be identified by wearables with multimodal signal recording capabilities, which may improve diagnostics and possibly treatment efficacy.
Keywords
emotion recognition,physiological signals,multimodal analysis,wearable-based
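As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a bagged decision-tree ensemble on a multimodal feature matrix and reports accuracy, true positive rate, and ROC AUC with scikit-learn. This is not the authors' implementation: the feature matrix, labels, and their dimensions are placeholders, and loading and preprocessing the K-EmoCon signals into per-window features is assumed to have been done separately.

```python
# Minimal sketch (assumed setup, not the paper's code): classify high/low
# arousal or valence from multimodal wearable features with bagged trees.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))     # placeholder multimodal feature matrix
y = rng.integers(0, 2, size=500)   # placeholder binary emotion labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# BaggingClassifier uses decision trees as its default base estimator,
# i.e., an ensemble bagged tree model.
model = BaggingClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]
print("Accuracy:", accuracy_score(y_test, y_pred))
print("True positive rate:", recall_score(y_test, y_pred))  # recall = TPR
print("ROC AUC:", roc_auc_score(y_test, y_prob))
```

Replacing the placeholder matrix with real per-window statistics (e.g., heart rate, electrodermal activity, skin temperature features) and swapping in support vector machines or k-nearest neighbors would follow the same fit/predict/evaluate pattern.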