Graph Interaction Networks for Relation Transfer in Human Activity Videos

Periodicals (2020)

Citations: 28 | Views: 220
Abstract
Recent years have witnessed rapid progress in employing graph convolutional networks (GCNs) for video analysis tasks where graph-structured data abound. However, transferring knowledge between different graphs, a direction with broad potential applications, has rarely been studied. To address this issue, we propose a Graph Interaction Networks (GINs) model for transferring relation knowledge across two graphs. Unlike conventional domain adaptation or knowledge distillation approaches, GINs focus on a "self-learned" weight matrix, a higher-level representation of the input data in which each element encodes the pairwise relation between nodes of the graph. We further guide the networks to transfer knowledge across the weight matrices through a task-specific loss function, so that the relation information is well preserved during transfer. We evaluate the model on two video analysis scenarios: a newly proposed setting of unsupervised skeleton-based action recognition across different datasets, and supervised group activity recognition with multi-modal inputs. Extensive experiments on six widely used datasets show that GINs achieve very competitive performance compared with state-of-the-art methods.
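
As a rough illustration of the mechanism summarized above (a self-learned pairwise relation matrix per graph, aligned across graphs by a transfer loss), the following minimal PyTorch sketch derives a relation matrix from node embeddings for each of two graphs and penalizes their divergence. The single aggregation step, the dot-product relation scores, and the KL alignment term are assumptions made for exposition, not the authors' GINs implementation.

```python
# Hypothetical sketch of relation-matrix transfer between two graphs.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationEncoder(nn.Module):
    """Embeds node features and derives a self-learned pairwise relation matrix."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # One simple graph-convolution step: aggregate neighbours, then project.
        h = F.relu(self.proj(adj @ x))            # (N, hid_dim)
        # Pairwise relations as a row-normalised affinity matrix of shape (N, N).
        scores = h @ h.t() / h.shape[-1] ** 0.5
        return F.softmax(scores, dim=-1)


def relation_transfer_loss(rel_src: torch.Tensor, rel_tgt: torch.Tensor) -> torch.Tensor:
    """Stand-in transfer term: align the two relation matrices with a KL divergence."""
    return F.kl_div(rel_tgt.log(), rel_src, reduction="batchmean")


# Toy usage with two graphs that share the same number of nodes (an assumption).
N, D, H = 5, 16, 32
x_a, x_b = torch.randn(N, D), torch.randn(N, D)
adj = torch.eye(N)                                # placeholder adjacency (self-loops only)
enc_a, enc_b = RelationEncoder(D, H), RelationEncoder(D, H)
loss = relation_transfer_loss(enc_a(x_a, adj), enc_b(x_b, adj))
loss.backward()
```

Row-normalising the relation scores keeps the alignment term comparable when the two graphs come from different modalities or feature scales; the paper's actual loss and relation definition may differ.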
Keywords
Videos, Task analysis, Activity recognition, Feature extraction, Knowledge transfer, Knowledge engineering, Convolution, Graph convolutional network, skeleton-based action recognition, group activity recognition, transfer learning