Usurp: universal single-source adversarial perturbations on multimodal emotion recognition

2023 IEEE International Conference on Image Processing (ICIP 2023)

Abstract
The field of affective computing has progressed from traditional unimodal analysis to more complex multimodal analysis, driven by the proliferation of videos posted online. Multimodal learning has shown remarkable performance in emotion recognition tasks, but its robustness in an adversarial setting remains unknown. This paper investigates the robustness of multimodal emotion recognition models against worst-case adversarial perturbations on a single modality. We find that standard multimodal models are susceptible to single-source adversaries and can be easily fooled by perturbations on any single modality. We distill key observations that serve as guidelines for designing universal adversarial attacks on multimodal emotion recognition models. Motivated by these findings, we propose USURP, a novel framework for universal single-source adversarial perturbations on multimodal emotion recognition models. Through our analysis of adversarial robustness, we demonstrate the necessity of studying adversarial attacks on multimodal models. Our experimental results show that the proposed USURP method achieves high attack success rates and significantly improves adversarial transferability in multimodal settings. The observations and attack methods presented in this paper provide a new understanding of the adversarial robustness of multimodal models, contributing to their safe and reliable deployment in real-world scenarios.
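The abstract does not describe USURP's algorithm, but the core idea it names, a single universal perturbation applied to one modality and optimized over a whole dataset, can be illustrated with a minimal NumPy sketch. The toy linear two-modality classifier, the signed-gradient loop, and all names (`Wa`, `Wv`, `eps`, etc.) below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "multimodal" linear classifier: logits = (audio) @ Wa.T + (visual) @ Wv.T.
# Everything here is a hypothetical stand-in, not the paper's model.
D, C, N = 8, 3, 32
Wa = rng.normal(size=(C, D))          # audio branch weights
Wv = rng.normal(size=(C, D))          # visual branch weights
A = rng.normal(size=(N, D))           # audio features for N samples
V = rng.normal(size=(N, D))           # visual features for N samples
y = rng.integers(0, C, size=N)        # labels

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ce_loss(delta):
    """Mean cross-entropy when the shared delta is added to the audio modality."""
    p = softmax((A + delta) @ Wa.T + V @ Wv.T)
    return -np.log(p[np.arange(N), y]).mean()

# Universal single-source perturbation: ONE delta, shared across all samples,
# applied only to the audio inputs. Signed gradient ascent on the loss,
# projected back into an L-infinity ball of radius eps after each step.
eps, step, iters = 0.5, 0.05, 100
delta = np.zeros(D)
for _ in range(iters):
    p = softmax((A + delta) @ Wa.T + V @ Wv.T)
    p[np.arange(N), y] -= 1.0          # d(loss)/d(logits) for cross-entropy
    grad = (p @ Wa).mean(axis=0)       # gradient w.r.t. the shared delta
    delta = np.clip(delta + step * np.sign(grad), -eps, eps)
```

The loop differs from per-sample attacks like PGD only in that the gradient is averaged over the dataset before each update, so the resulting `delta` degrades predictions for many inputs at once while the visual modality is left untouched.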
Keywords
adversarial attack, multimodal model, universal adversarial perturbation, single-source adversaries, affective computing