Hand Gesture Recognition across Various Limb Positions Using a Multi-Modal Sensing System based on Self-adaptive Data-Fusion and Convolutional Neural Networks (CNNs)

Shen Zhang, Hao Zhou, Rayane Tchantchane, Gursel Alici

IEEE Sensors Journal (2024)

Abstract
This study explores the challenge of hand gesture recognition across various limb positions using a new co-located multi-modal armband system incorporating Surface Electromyography (sEMG) and Pressure-based Force Myography (pFMG) sensors. Conventional Machine Learning (ML) algorithms and Convolutional Neural Network (CNN) models were evaluated for accurately recognizing hand gestures. A comprehensive investigation was conducted, encompassing feature-level and decision-level CNN models, alongside advanced fusion techniques to enhance recognition performance. This research consistently demonstrates the superiority of CNN models, revealing their potential for extracting intricate patterns from raw multi-modal sensor data. The study showed significant accuracy improvements over single-modality approaches, emphasizing the synergistic effects of multi-modal sensing. Notably, the CNN models achieved 88.34% accuracy for self-adaptive decision-level fusion and 87.79% accuracy for feature-level fusion, outperforming Linear Discriminant Analysis (LDA) at 83.33% accuracy when considering all nine gestures. Furthermore, the study explores the relationship between the number of hand gestures and recognition accuracy, revealing consistently high accuracy ranging from 88% to 100% for 2-9 gestures and a remarkable 98% accuracy for the commonly used five gestures. This research underscores the adaptability of CNNs in effectively capturing the complex complementarity between multi-modal data and varying limb positions, advancing the field of gesture recognition and emphasizing the potential of high-level data-fusion deep learning (DL) techniques in wearable sensing systems. This study provides valuable insights into how multi-modal sensor/data fusion, coupled with advanced ML methods, enhances hand gesture recognition accuracy, ultimately paving the way for more effective and adaptable wearable technology applications.
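The abstract contrasts feature-level fusion (combining modality features before classification) with self-adaptive decision-level fusion (combining the outputs of per-modality classifiers). The sketch below illustrates one plausible form of the latter, assuming a confidence-weighted rule in which each modality's class posteriors are weighted by that modality's own per-sample prediction confidence; the paper's exact adaptation rule and the function names here (`self_adaptive_fusion`) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_adaptive_fusion(scores_semg, scores_pfmg):
    """Decision-level fusion of two modality-specific classifiers.

    Each input is an (n_samples, n_classes) array of raw class scores
    (e.g. CNN logits). Per sample, each modality is weighted by its own
    maximum posterior probability, so the more confident modality
    dominates the fused decision -- a hypothetical stand-in for the
    paper's self-adaptive weighting.
    """
    p_semg, p_pfmg = softmax(scores_semg), softmax(scores_pfmg)
    c_semg = p_semg.max(axis=-1, keepdims=True)   # sEMG confidence
    c_pfmg = p_pfmg.max(axis=-1, keepdims=True)   # pFMG confidence
    w = c_semg / (c_semg + c_pfmg)                # adaptive per-sample weight
    fused = w * p_semg + (1.0 - w) * p_pfmg       # fused class posteriors
    return fused.argmax(axis=-1), fused
```

With this rule, a sample where sEMG is highly confident but pFMG is ambiguous is decided mainly by sEMG, and vice versa, which is one way multi-modal fusion can compensate for the limb-position sensitivity of a single modality.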
Keywords
human-machine interface (HMI), hand gesture recognition, multi-modal sensing, data fusion, sensor fusion, deep learning, limb position effect