Human Activity Recognition from Multiple Sensors Data Using Multi-fusion Representations and CNNs

ACM Transactions on Multimedia Computing, Communications, and Applications (2020)

Abstract
With the emerging interest in ubiquitous sensing, it has become possible to build assistive technologies that support people in their daily activities by providing personalized feedback and services. For instance, an individual's behavioral patterns (e.g., physical activity, location, and mood) can be detected using sensors embedded in smartwatches and smartphones. Multi-sensor environments, however, also bring challenges, such as how to fuse and combine data from different sources. In this article, we explore several methods for fusing multiple representations of sensor data. Multiple representations of the sensor data are generated and then fused at the data level, feature level, and decision level. The presented approaches utilize Deep Convolutional Neural Networks (CNNs), and a generic architecture for fusing different sensors is proposed. The methods were evaluated on three publicly available human activity recognition (HAR) datasets. The proposed method shows promising performance, with the best results reaching an overall accuracy of 98.4% on the Context-Awareness via Wrist-Worn Motion Sensors (HANDY) dataset and 98.7% on the Wireless Sensor Data Mining (WISDM version 1.1) dataset. Both results outperform previous approaches.
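
To make the fusion levels concrete, below is a minimal PyTorch sketch of feature-level fusion: each sensor stream passes through its own 1D CNN branch, and the resulting feature vectors are concatenated before classification. The two streams (3-axis accelerometer and gyroscope windows), the branch widths, the window length, and the layer choices are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class SensorBranch(nn.Module):
    """1D CNN feature extractor for one sensor stream (layer sizes are assumptions)."""
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis -> (batch, feat_dim, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # (batch, feat_dim)

class FeatureLevelFusionHAR(nn.Module):
    """Feature-level fusion: concatenate per-sensor CNN features, then classify."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.accel = SensorBranch(in_channels=3)  # hypothetical 3-axis accelerometer
        self.gyro = SensorBranch(in_channels=3)   # hypothetical 3-axis gyroscope
        self.classifier = nn.Linear(64 * 2, n_classes)

    def forward(self, accel_win, gyro_win):
        # each input: (batch, channels=3, window_length)
        feats = torch.cat([self.accel(accel_win), self.gyro(gyro_win)], dim=1)
        return self.classifier(feats)

# Example: classify 128-sample sensor windows into 6 activity classes
model = FeatureLevelFusionHAR(n_classes=6)
logits = model(torch.randn(8, 3, 128), torch.randn(8, 3, 128))
print(logits.shape)  # torch.Size([8, 6])
```

By contrast, data-level fusion would stack the raw channels of all sensors into a single input tensor fed to one CNN, while decision-level fusion would train a classifier per sensor and merge their predictions (e.g., by averaging softmax scores).
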
Keywords
CNN, data fusion, activity recognition, deep learning, multimodal sensors