Self-Constructing Temporal Excitation Graph for Skeleton-Based Action Recognition

IEEE Sensors Journal (2023)

Abstract
Graph convolutional network (GCN)-based methods have achieved remarkable performance and gained widespread attention for skeleton-based human action recognition. These methods typically apply 1-D local convolutions to model temporal correlations and simply stack multiple layers to capture long-range temporal dynamics. However, a 1-D local convolution attends only to relations between adjacent time steps, and as many local convolutions are stacked, key temporal relations between nonadjacent time steps may be lost through information dilution. How to fully exploit the temporal dynamics of skeleton sequences therefore remains an open question. In this article, we propose a temporal excitation GCN (TE-GCN) that exploits a self-constructing temporal relation graph to capture complex temporal dynamics. Specifically, the constructed temporal relation graph explicitly connects semantically related temporal features, adaptively capturing the temporal relations within skeleton sequences. Meanwhile, to explore richer temporal dynamics concurrently, a multihead mechanism is designed to investigate multiple kinds of temporal relations. Extensive experiments are performed on two widely used large-scale datasets, NTU RGB+D 60 and NTU RGB+D 120, and the results show that the proposed model obtains significant improvements by strengthening temporal modeling for action recognition.
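The self-constructing temporal relation graph described in the abstract can be sketched roughly as follows: per-frame features are compared pairwise across all time steps, and the normalized similarities form a T×T graph that links semantically related frames regardless of temporal distance, with one such graph per head. The dot-product similarity, the random stand-in projections `Wq`/`Wk`, and the function name `temporal_excitation` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def temporal_excitation(x, num_heads=2, seed=0):
    """Sketch of a self-constructing temporal relation graph.

    x: (T, C) array of per-frame skeleton features.
    For each head, a T x T adjacency over time steps is built from
    feature similarity (scaled dot product + row softmax), so
    semantically related frames are connected even when they are far
    apart in time; features are then aggregated along those edges.
    """
    rng = np.random.default_rng(seed)
    T, C = x.shape
    head_dim = C // num_heads
    outputs = []
    for _ in range(num_heads):
        # Random stand-ins for learned per-head projection weights.
        Wq = rng.standard_normal((C, head_dim)) / np.sqrt(C)
        Wk = rng.standard_normal((C, head_dim)) / np.sqrt(C)
        q, k = x @ Wq, x @ Wk
        # Self-constructed temporal relation graph: softmax-normalized
        # similarities between every pair of time steps.
        logits = q @ k.T / np.sqrt(head_dim)
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        A = np.exp(logits)
        A /= A.sum(axis=1, keepdims=True)
        # Aggregate temporal features along the graph edges.
        outputs.append(A @ (x @ Wq))
    # Concatenate heads: (T, num_heads * head_dim)
    return np.concatenate(outputs, axis=1)
```

Each head builds its own relation graph, so the multihead mechanism lets different heads attend to different kinds of temporal relations concurrently; a full model would learn the projection weights end to end inside the GCN.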
Keywords
Graph convolutional networks (GCNs), skeleton-based action recognition, temporal relation