A Unified Framework for Adversarial Attacks on Multi-Source Domain Adaptation

IEEE Transactions on Knowledge and Data Engineering (2023)

Abstract
Multi-source domain adaptation studies the transferability of knowledge from multiple labeled source domains to an unlabeled target domain under distribution shift. However, little effort has been devoted to studying the adversarial vulnerability of multi-source domain adaptation approaches. Specifically, most existing techniques focus on learning a domain-invariant representation to mitigate the distribution shift across domains. In this paper, we theoretically show that a domain-invariant representation cannot guarantee the success of multi-source domain adaptation when no labeled samples are available in the target domain. This result motivates us to propose a unified framework (AdaptAttack) for data-poisoning adversarial attacks on multi-source domain adaptation. The key idea is to maliciously manipulate the label-informed data distributions of the source domains by injecting imperceptible noise into the source data. In addition, the framework requires that the generated adversarial attacks be invisible to multi-source domain adaptation algorithms, i.e., the source classification errors and marginal discrepancies across domains are not negatively affected. Extensive experiments on public domain adaptation benchmarks confirm the effectiveness and computational efficiency of our proposed AdaptAttack framework in both white-box and black-box attack scenarios.
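The poisoning step described above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's actual algorithm): it applies an epsilon-bounded, sign-of-gradient perturbation to source samples, so the noise stays imperceptible while shifting the label-informed distribution. The function name `poison_source`, the gradient input `grad`, and the bound `eps` are illustrative assumptions; computing the attacker's gradient depends on the adaptation model and is not shown.

```python
import numpy as np

def poison_source(x, grad, eps=0.03):
    """Hypothetical sketch of epsilon-bounded data poisoning:
    add imperceptible noise to source samples. `grad` stands for the
    attacker's loss gradient w.r.t. the inputs (model-dependent, not
    computed here)."""
    delta = eps * np.sign(grad)          # FGSM-style bounded perturbation
    return np.clip(x + delta, 0.0, 1.0)  # keep inputs in a valid [0, 1] range

# Toy usage: perturb a random "image" batch.
x = np.random.rand(4, 32, 32, 3)
g = np.random.randn(4, 32, 32, 3)
x_adv = poison_source(x, g)
# Clipping can only shrink the perturbation, so the bound always holds.
assert float(np.max(np.abs(x_adv - x))) <= 0.03 + 1e-9
```

Because the perturbation magnitude never exceeds `eps` per coordinate, such poisoned samples remain visually close to the originals, which is what lets the attack pass unnoticed by source-error and discrepancy checks.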
Keywords
Adversarial attacks,multi-source domain adaptation,domain discrepancy