AUDIO-VISUAL EVENT RECOGNITION THROUGH THE LENS OF ADVERSARY

2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021)

Abstract
As audio/visual classification models are widely deployed for sensitive tasks like content filtering at scale, it is critical to understand their robustness alongside improving their accuracy. This work studies several key questions in multimodal learning through the lens of adversarial noise: 1) how the choice of early/middle/late fusion affects the trade-off between robustness and accuracy; 2) how different frequency/time-domain features contribute to robustness; 3) how different neural modules hold up against adversarial noise. In our experiments, we construct adversarial examples to attack state-of-the-art neural models trained on Google AudioSet [1]. We compare how much attack potency, measured as the size epsilon of an adversarial perturbation under different L-p norms, is needed to "deactivate" the victim model. By using adversarial noise to ablate multimodal models, we provide insights into the fusion strategy that best balances the trade-off among model parameters, accuracy, and robustness, and we distinguish the robust features from the non-robust features that various neural network models tend to learn.
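The abstract refers to epsilon-bounded adversarial perturbations under different L-p norms but gives no implementation details. As a minimal sketch only, and not the authors' code, the following projected gradient descent (PGD) routine in PyTorch illustrates how such a perturbation might be crafted for a single input; the function name pgd_attack, the step size alpha, and the step count are hypothetical choices for illustration.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon, alpha=None, steps=10, norm="linf"):
    """Craft an epsilon-bounded adversarial example for a single input.

    Illustrative PGD-style sketch, not the paper's released code.
    norm: "linf" or "l2" selects the L-p ball that bounds the perturbation.
    """
    alpha = alpha if alpha is not None else epsilon / 4  # heuristic step size
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Maximize the classification loss with respect to the perturbation.
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            if norm == "linf":
                # Step along the gradient sign, then project onto the L-inf ball.
                delta += alpha * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)
            else:  # "l2"
                # Step along the normalized gradient, then project onto the L2 ball.
                g = delta.grad
                delta += alpha * g / (g.norm() + 1e-12)
                n = delta.norm()
                if n > epsilon:
                    delta *= epsilon / n
        delta.grad.zero_()
    return (x + delta).detach()
```

In this framing, an attack "deactivates" the victim model once the prediction on x + delta flips away from y; comparing the smallest epsilon needed under each norm yields the attack-potency comparison the abstract describes.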
Keywords
event, audio-visual