A comparison between audio and IMU data to detect chewing events based on an earable device

AH '20: 11th Augmented Human International Conference, Winnipeg, Manitoba, Canada, May 2020

Abstract
The feasibility of collecting various data from built-in wearable sensors has enticed many researchers to use these devices for analyzing human activities and behaviors. In particular, audio, video, and motion data have been utilized for automatic dietary monitoring. In this paper, we investigate the feasibility of detecting chewing activities based on audio and inertial sensor data obtained from an ear-worn device, eSense. We process each sensor's data separately and determine the accuracy of each sensing modality for chewing detection, using MFCC and Spectral Centroid as features and Logistic Regression, Decision Tree, and Random Forest as classifiers. We also measure the performance of chewing detection when fusing features extracted from both audio and inertial sensor data. We evaluate the chewing detection algorithm in a pilot study conducted in a lab environment with a total of 5 participants, yielding 130 minutes of audio and inertial measurement unit (IMU) data. The results of this study indicate that the in-ear IMU, with an accuracy of 95%, outperforms audio data in detecting chewing, and that fusing both modalities improves the accuracy to 97%.
Keywords
Earables, Chewing Detection, IMU, Audio, MFCC, Spectral Centroid, Machine Learning Pipeline
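
As a rough illustration of the machine-learning pipeline the abstract outlines (MFCC and Spectral Centroid audio features, per-modality processing, feature-level fusion, and a Random Forest classifier), here is a minimal Python sketch. It is not the authors' implementation: the window size, sampling rate, IMU channel layout, the choice of time-domain IMU statistics, and the librosa/scikit-learn usage are illustrative assumptions.

import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def audio_features(window, sr=16000, n_mfcc=13):
    # MFCCs and spectral centroid, averaged over frames of one audio window.
    mfcc = librosa.feature.mfcc(y=window, sr=sr, n_mfcc=n_mfcc)
    centroid = librosa.feature.spectral_centroid(y=window, sr=sr)
    return np.concatenate([mfcc.mean(axis=1), centroid.mean(axis=1)])

def imu_features(window):
    # Basic time-domain statistics per axis; `window` is (samples, 6)
    # for 3-axis accelerometer + 3-axis gyroscope (an assumed layout;
    # the paper does not specify its IMU feature set here).
    return np.concatenate([
        window.mean(axis=0),
        window.std(axis=0),
        np.abs(np.diff(window, axis=0)).mean(axis=0),
    ])

def fused_features(audio_win, imu_win, sr=16000):
    # Feature-level fusion: concatenate the per-modality feature vectors.
    return np.concatenate([audio_features(audio_win, sr=sr),
                           imu_features(imu_win)])

# Usage: given aligned lists of audio/IMU windows and chew/no-chew labels,
# X = np.vstack([fused_features(a, m) for a, m in zip(audio_wins, imu_wins)])
# clf = RandomForestClassifier(n_estimators=100, random_state=0)
# print(cross_val_score(clf, X, np.asarray(labels), cv=5).mean())

Training a separate classifier on audio_features or imu_features alone would correspond to the single-modality comparison reported in the abstract, while the concatenated vector corresponds to the fused condition.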