Dynamic Multiscale Graph Neural Networks for 3D Skeleton-Based Human Motion Prediction

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Cited by 277 | Views 66
Abstract
We propose novel dynamic multiscale graph neural networks (DMGNN) to predict 3D skeleton-based human motions. The core idea of DMGNN is to use a multiscale graph to comprehensively model the internal relations of a human body for motion feature learning. This multiscale graph is adaptive during training and dynamic across network layers. Based on this graph, we propose a multiscale graph computational unit (MGCU) to extract features at individual scales and fuse features across scales. The entire model is action-category-agnostic and follows an encoder-decoder framework. The encoder consists of a sequence of MGCUs to learn motion features. The decoder uses a proposed graph-based gated recurrent unit to generate future poses. Extensive experiments show that the proposed DMGNN outperforms state-of-the-art methods in both short-term and long-term predictions on the Human 3.6M and CMU Mocap datasets. We further investigate the learned multiscale graphs for interpretability. The code is available at https://github.com/limaosen0/DMGNN.
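The adaptive graph convolution at the heart of the abstract can be illustrated with a minimal sketch. This is a hypothetical single-scale illustration, not the authors' implementation: features on body joints are aggregated through a learnable (here, randomly initialized and row-normalized) adjacency matrix and then linearly transformed, the basic operation an MGCU would apply at each scale before cross-scale fusion.

```python
import numpy as np

def graph_conv(X, A, W):
    """One graph-convolution step: aggregate joint features via the
    adjacency A, then transform with the weight matrix W.
    X: (n_joints, in_dim), A: (n_joints, n_joints), W: (in_dim, out_dim)."""
    return np.tanh(A @ X @ W)

rng = np.random.default_rng(0)
n_joints, in_dim, out_dim = 20, 3, 16  # e.g. 20 body joints with 3D coordinates

X = rng.standard_normal((n_joints, in_dim))        # joint features (one frame)
A = rng.standard_normal((n_joints, n_joints))      # stand-in for a learned adjacency
A = np.exp(A) / np.exp(A).sum(axis=1, keepdims=True)  # row-wise softmax normalization
W = rng.standard_normal((in_dim, out_dim))         # feature transform

H = graph_conv(X, A, W)
print(H.shape)  # (20, 16)
```

In DMGNN the adjacency is a trained parameter (adaptive across training, dynamic across layers) rather than random, and several such graphs at different body scales are fused; this sketch only shows the per-scale aggregation step.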
Keywords
DMGNN, human body, motion feature learning, multiscale graph computational unit, graph-based gated recurrent unit, dynamic multiscale graph neural networks, 3D skeleton-based human motion prediction, MGCU, feature extraction, action-category-agnostic, encoder-decoder framework, image fusion, future pose generation, CMU Mocap, Human 3.6M, short-term and long-term predictions