EPIC-Fusion: Audio-Visual Temporal Binding for Egocentric Action Recognition

2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)

Abstract
We focus on multi-modal fusion for egocentric action recognition, and propose a novel architecture for multi-modal temporal binding, i.e. the combination of modalities within a range of temporal offsets. We train the architecture with three modalities - RGB, Flow and Audio - and combine them with mid-level fusion alongside sparse temporal sampling of fused representations. In contrast with previous works, modalities are fused before temporal aggregation, with shared modality and fusion weights over time. Our proposed architecture is trained end-to-end, outperforming individual modalities as well as late fusion of modalities. We demonstrate the importance of audio in egocentric vision, on a per-class basis, for identifying actions as well as interacting objects. Our method achieves state-of-the-art results on both the seen and unseen test sets of the largest egocentric dataset, EPIC-Kitchens, on all metrics of the public leaderboard.
Keywords
egocentric action recognition,multimodal fusion,multimodal temporal-binding,temporal offsets,mid-level fusion,sparse temporal sampling,temporal aggregation,shared modality,fusion weights,individual modalities,late-fusion,egocentric vision,egocentric dataset,EPIC-fusion,audio-visual temporal binding
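The fusion scheme described in the abstract can be illustrated with a minimal NumPy sketch. All dimensions, the random offset sampling, and the single linear fusion layer below are illustrative assumptions, not the paper's actual network: the point is only the ordering of operations (bind modalities within a temporal window, fuse with weights shared across time, then aggregate over sparsely sampled segments).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not taken from the paper).
D = 8        # per-modality feature dimension
T = 12       # number of timesteps in the video
K = 3        # sparsely sampled temporal segments
WINDOW = 2   # max temporal offset when binding modalities

# Stand-ins for per-timestep CNN features of each modality.
rgb = rng.normal(size=(T, D))
flow = rng.normal(size=(T, D))
audio = rng.normal(size=(T, D))

# Shared mid-level fusion weights, applied identically at every segment.
W_fuse = rng.normal(size=(3 * D, D))


def temporal_binding_fusion(rgb, flow, audio, k=K, window=WINDOW):
    """Fuse modalities within a temporal binding window, then aggregate.

    For each of k sparsely sampled anchor times, the RGB, Flow and Audio
    features are drawn from (possibly offset) timesteps inside the binding
    window, concatenated, and projected with the shared fusion weights.
    Only afterwards are the fused segment representations averaged, i.e.
    fusion happens before temporal aggregation, as the abstract states.
    """
    T = rgb.shape[0]
    anchors = np.linspace(0, T - 1, k).astype(int)  # sparse temporal sampling
    fused_segments = []
    for t in anchors:
        # Each modality may come from a slightly offset timestep.
        offsets = rng.integers(-window, window + 1, size=3)
        idx = np.clip(t + offsets, 0, T - 1)
        bound = np.concatenate([rgb[idx[0]], flow[idx[1]], audio[idx[2]]])
        fused_segments.append(bound @ W_fuse)  # shared weights over time
    return np.mean(fused_segments, axis=0)     # aggregate fused features


video_repr = temporal_binding_fusion(rgb, flow, audio)
print(video_repr.shape)
```

A late-fusion baseline, by contrast, would aggregate each modality over time separately and only then combine the three pooled vectors.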