Multi-Modal Gesture Recognition With Voting-Based Dynamic Time Warping

Yiqun Kuang, Hong Cheng, Jiasheng Hao, Ruimeng Xie, Fang Cui

INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS (2019)

Abstract
Gesture recognition remains a challenging problem in the field of human-robot interaction. With the development of depth sensors such as Kinect, multiple modalities have become available for gesture recognition, yet their advantages have not been fully exploited. One of the critical issues in multi-modal gesture recognition is how to fuse features from different modalities. In this article, we present a unified framework for multi-modal gesture recognition based on dynamic time warping. The 3D implicit shape model is applied to characterize the space-time structure of the local features extracted from different modalities. All votes from the local features are then incorporated into a common probability space, which is used to build the distance matrix. Meanwhile, an upper-bounding method, UB_Pro, is proposed to speed up dynamic time warping. The proposed approach is evaluated on the challenging ChaLearn Isolated Gesture Dataset and shows performance comparable to state-of-the-art approaches for the multi-modal gesture recognition problem.
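The abstract gives only a high-level view of the pipeline. As a rough illustration of the dynamic time warping stage, the sketch below matches two gesture sequences whose frames are summarized as vote-based probability vectors, and uses generic early abandoning against a best-so-far distance as a stand-in for a speedup bound. The Euclidean frame cost, the function names, and the toy data are assumptions for illustration; this is not the paper's UB_Pro method.

import numpy as np

def dtw_distance(seq_a, seq_b, best_so_far=np.inf):
    """Dynamic time warping between two sequences of per-frame
    probability vectors (one row per frame).

    Uses a Euclidean frame-to-frame cost (an assumption; the paper's
    cost over the common probability space is not specified here) and
    abandons early once the running row minimum, a lower bound on the
    final distance, exceeds best_so_far.
    """
    n, m = len(seq_a), len(seq_b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j],       # insertion
                                   acc[i, j - 1],       # deletion
                                   acc[i - 1, j - 1])   # match
        # Every warping path passes through row i, so the row minimum
        # lower-bounds the final distance; abandon if it is too large.
        if acc[i, 1:].min() >= best_so_far:
            return np.inf
    return acc[n, m]

def classify(query, templates, labels):
    """Nearest-neighbor gesture classification under DTW."""
    best, best_label = np.inf, None
    for tpl, lab in zip(templates, labels):
        d = dtw_distance(query, tpl, best)
        if d < best:
            best, best_label = d, lab
    return best_label

# Toy usage: frames as vote distributions over 5 hypothetical classes.
rng = np.random.default_rng(0)
templates = [rng.dirichlet(np.ones(5), size=25) for _ in range(3)]
labels = ["wave", "point", "swipe"]
query = rng.dirichlet(np.ones(5), size=30)
print(classify(query, templates, labels))

The early-abandoning test is a standard generic pruning trick for DTW-based nearest-neighbor search; the paper's UB_Pro bound is a different, upper-bounding construction whose details are not given in the abstract.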
Keywords
Multi-modal, gesture interaction, local feature, dynamic time warping