IMU and Smartphone Camera Fusion for Knee Adduction and Knee Flexion Moment Estimation During Walking

IEEE Transactions on Industrial Informatics (2023)

Abstract
Wearable sensing and computer vision could move biomechanics from specialized laboratories to natural environments, but better algorithms are needed to extract meaningful outcomes from these emerging modalities. In this article, we present new models for estimating two biomechanical outcomes, the knee adduction moment (KAM) and the knee flexion moment (KFM), from the fusion of smartphone cameras and wearable inertial measurement units (IMUs) among young, healthy, nonobese males. A deep learning model was developed to extract features, fuse the multimodal data, and estimate KAM and KFM. Walking data from 17 subjects were recorded with eight IMUs and two smartphone cameras. The model that used IMU-camera fusion was significantly more accurate than models using IMUs or cameras alone. The root-mean-square errors (RMSEs) of the fusion model were 0.49 %BW·BH for KAM and 0.66 %BW·BH for KFM, both below clinically significant thresholds. With larger and more diverse data, this model could enable assessment of knee moments in clinics and homes.
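The abstract describes encoding and fusing two sensor streams and regressing per-frame knee moments, but gives no implementation details. Below is a minimal, hypothetical PyTorch sketch of that kind of late-fusion regressor: separate recurrent encoders for the IMU and camera streams, feature concatenation, and a per-frame regression head, with RMSE computed in the same normalized %BW·BH units the paper reports. The class name FusionKneeMomentNet, the GRU encoders, and the channel counts (8 IMUs × 6 channels; 2 cameras × 17 keypoints × 2 coordinates) are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class FusionKneeMomentNet(nn.Module):
    """Hypothetical late-fusion regressor: IMU and camera streams are
    encoded separately, concatenated, and mapped to per-frame KAM/KFM."""

    def __init__(self, imu_channels=48, cam_channels=68, hidden=128):
        super().__init__()
        # IMU branch: assumed 8 IMUs x 6 channels (3-axis accel + gyro) = 48.
        self.imu_encoder = nn.GRU(imu_channels, hidden, batch_first=True)
        # Camera branch: assumed 2 cameras x 17 keypoints x 2 coords = 68.
        self.cam_encoder = nn.GRU(cam_channels, hidden, batch_first=True)
        # Fusion head: per-frame regression to [KAM, KFM] in %BW*BH.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, imu, cam):
        h_imu, _ = self.imu_encoder(imu)   # (batch, time, hidden)
        h_cam, _ = self.cam_encoder(cam)   # (batch, time, hidden)
        fused = torch.cat([h_imu, h_cam], dim=-1)
        return self.head(fused)            # (batch, time, 2)

# Toy usage with random data: one gait cycle resampled to 100 frames.
model = FusionKneeMomentNet()
imu = torch.randn(4, 100, 48)    # simulated IMU windows
cam = torch.randn(4, 100, 68)    # simulated keypoint trajectories
target = torch.randn(4, 100, 2)  # ground-truth KAM/KFM in %BW*BH
pred = model(imu, cam)

# RMSE per output, in the normalized %BW*BH units the paper reports.
rmse = torch.sqrt(((pred - target) ** 2).mean(dim=(0, 1)))
print(f"KAM RMSE: {rmse[0]:.2f} %BW*BH, KFM RMSE: {rmse[1]:.2f} %BW*BH")
```

Late fusion (concatenating per-stream features before a shared head) is one plausible reading of "extract features, fuse the multimodal data"; the paper may use a different fusion point or encoder type.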
Keywords
Deep learning, joint kinetics, osteoarthritis (OA), portable sensing