JOLO-GCN: Mining Joint-Centered Light-Weight Information for Skeleton-Based Action Recognition

2021 IEEE Winter Conference on Applications of Computer Vision (WACV 2021)

Cited by 42 | Views: 68
Abstract
Skeleton-based action recognition has attracted increasing research attention in recent years. One common drawback of currently popular skeleton-based human action recognition methods is that the sparse skeleton information alone is not sufficient to fully characterize human motion. This limitation leaves several existing methods unable to correctly classify action categories that exhibit only subtle motion differences. In this paper, we propose a novel framework, JOLO-GCN, that jointly employs the human pose skeleton and joint-centered light-weight information in a two-stream graph convolutional network. Specifically, we use Joint-aligned optical Flow Patches (JFP) to capture the local subtle motion around each joint as the pivotal joint-centered visual information. Compared to the pure skeleton-based baseline, this hybrid scheme effectively boosts performance while keeping the computational and memory overheads low. Experiments on the NTU RGB+D, NTU RGB+D 120, and Kinetics-Skeleton datasets demonstrate clear accuracy improvements attained by the proposed method over state-of-the-art skeleton-based methods.
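The abstract describes cropping local optical flow around each 2D joint location to form the JFP input of the second stream. The snippet below is a minimal illustrative sketch of that idea only, not the authors' implementation: it assumes 2D joint coordinates are available, uses Farneback dense flow as a stand-in flow estimator, and the function name and 32-pixel patch size are assumptions made for demonstration.

```python
# Illustrative sketch (not the authors' code): crop Joint-aligned optical Flow
# Patches (JFP) around 2D joint locations. Patch size and names are assumptions.
import cv2
import numpy as np

def joint_flow_patches(prev_gray, next_gray, joints_2d, patch=32):
    """Return one (patch, patch, 2) optical-flow crop per joint.

    prev_gray, next_gray: consecutive grayscale frames, shape (H, W), uint8
    joints_2d: array of shape (num_joints, 2) holding (x, y) pixel coordinates
    """
    # Dense optical flow between the two frames (Farneback as a stand-in).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = flow.shape[:2]
    half = patch // 2
    # Pad so patches centered near the image border stay full-sized.
    padded = np.pad(flow, ((half, half), (half, half), (0, 0)), mode="edge")

    patches = []
    for x, y in joints_2d.astype(int):
        x = np.clip(x, 0, w - 1) + half
        y = np.clip(y, 0, h - 1) + half
        patches.append(padded[y - half:y + half, x - half:x + half])
    return np.stack(patches)  # (num_joints, patch, patch, 2)
```

In the two-stream scheme described above, such per-joint flow patches would feed the JFP stream while the skeleton coordinates feed the GCN stream, with the two streams fused for classification.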
Keywords
JOLO-GCN, mining joint-centered light-weight information, skeleton-based action recognition, currently popular skeleton-based human action recognition methods, sparse skeleton information, human motion, correctly classifying action categories, subtle motion differences, Joint-aligned optical Flow Patches, local subtle motion, pivotal joint-centered visual information, pure skeleton-based baseline, Kinetics-Skeleton dataset, state-of-the-art skeleton-based methods