Bi-Directional Modality Fusion Network For Audio-Visual Event Localization

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022

Citations 8 | Views 13
Abstract
Audio and visual signals stimulate the audio-visual sensory neurons of humans to produce joint audio-visual content, helping people perceive the world. Most existing audio-visual event localization approaches focus on generating audio-visual features by fusing the audio and visual modalities for the final predictions. However, an audio-visual adjustment mechanism also exists in such a complicated multi-modal perception system. Inspired by this observation, we propose a novel bi-directional modality fusion network (BMFN), which not only fuses audio and visual features, but also adjusts the fused features, with the help of the original audio and visual content, to increase their representativeness. The high-level audio-visual features obtained from the two directions, via two forward-backward fusion modules and a mean operation, are aggregated for the final event localization. Experimental results demonstrate that our method outperforms state-of-the-art works in both fully- and weakly-supervised settings. The code is available at https://github.com/weizequan/BMFN.git.
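A minimal PyTorch sketch of the idea described above: each direction first fuses the two modalities, then adjusts the fused features with the original audio and visual content, and the two directions are averaged before classification. All module names, layer choices, and dimensions here are illustrative assumptions, not the authors' implementation; see the linked repository for the real code.

```python
import torch
import torch.nn as nn

class ForwardBackwardFusion(nn.Module):
    """One fusion direction: forward fusion of both modalities, then a
    backward adjustment that re-injects the original modality content.
    Layer types and sizes are illustrative, not the paper's exact ones."""
    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)      # forward: joint fusion
        self.adjust_a = nn.Linear(2 * dim, dim)  # backward: adjust with audio
        self.adjust_v = nn.Linear(2 * dim, dim)  # backward: adjust with visual

    def forward(self, a, v):
        f = torch.relu(self.fuse(torch.cat([a, v], dim=-1)))
        # backward adjustment: residual corrections from the original features
        f = f + torch.relu(self.adjust_a(torch.cat([f, a], dim=-1)))
        f = f + torch.relu(self.adjust_v(torch.cat([f, v], dim=-1)))
        return f

class BMFNSketch(nn.Module):
    """Two fusion directions averaged, then a per-segment event classifier.
    num_classes=29 assumes the AVE dataset (28 events + background)."""
    def __init__(self, dim=256, num_classes=29):
        super().__init__()
        self.av = ForwardBackwardFusion(dim)  # audio-first direction
        self.va = ForwardBackwardFusion(dim)  # visual-first direction
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, audio, visual):
        # mean over the two directional high-level features
        fused = 0.5 * (self.av(audio, visual) + self.va(visual, audio))
        return self.classifier(fused)  # (batch, segments, num_classes)

# Usage with hypothetical segment-level features:
audio = torch.randn(2, 10, 256)       # (batch, segments, feature dim)
visual = torch.randn(2, 10, 256)
logits = BMFNSketch()(audio, visual)  # -> shape (2, 10, 29)
```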
Keywords
Event Localization, Bi-Directional, Audio-Visual Modality Fusion, Multi-Modal Perception System