A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning

Peng Chen, Jirui Yang, Junxiong Lin, Zhihui Lu, Qiang Duan, Hongfeng Chai

23rd IEEE International Conference on Data Mining (ICDM 2023)

Abstract
Vertical Federated Learning (VFL) facilitates collaboration on model training among multiple parties, each owning partitioned features of the distributed dataset. Although backdoor attacks have been identified as one of the main threats to FL security, research on backdoor attacks in VFL is still in its infancy. Existing methods for VFL backdoor attacks rely on predicting sample pseudo-labels using approaches such as label inference, which require substantial additional information not readily available in practical FL scenarios. To evaluate the practical vulnerability of VFL to backdoor attacks, we present a target-efficient clean backdoor (TECB) attack for VFL. The TECB approach consists of two phases: i) Clean Backdoor Poisoning (CBP) and ii) Target Gradient Alignment (TGA). In the CBP phase, the adversary trains a backdoor trigger and poisons the model during VFL training. The poisoned model is further fine-tuned in the TGA phase to enhance its efficacy in complex multi-class classification tasks. Compared to existing methods, the proposed TECB achieves a highly effective backdoor attack with very limited information about the target-class samples, which is more practical in typical VFL settings. Experimental results verify the superior performance of TECB, achieving above a 97% attack success rate (ASR) on three widely used datasets (CIFAR-10, CIFAR-100, and CINIC-10) with only 0.1% of target labels known, outperforming state-of-the-art attack methods. This study uncovers the potential backdoor risks in VFL, enabling the development of secure VFL applications in areas like finance, healthcare, and beyond. Source code is available at: https://github.com/13thDay0fLunarMay/TECB-attack
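To make the clean-label poisoning idea concrete: in VFL the adversary controls only its own vertical feature partition, and the CBP phase described above stamps a learned trigger onto the feature slices of the few known target-class samples while never touching the labels (hence "clean-label"). The sketch below is a minimal illustration of that stamping step under assumed shapes and names; `poison_partition`, the fixed trigger, and the mask are hypothetical, not the authors' implementation, and the trigger is held constant here rather than trained.

```python
import numpy as np

def poison_partition(X_adv, known_target_idx, trigger, mask):
    """Add a masked trigger to the adversary's feature slice of the known
    target-class samples. Labels are never modified (clean-label)."""
    X_poisoned = X_adv.copy()
    X_poisoned[known_target_idx] += mask * trigger  # broadcast over the rows
    return X_poisoned

rng = np.random.default_rng(0)
X_adv = rng.normal(size=(1000, 16))        # adversary's vertical feature partition
known_target_idx = np.array([3, 421, 777]) # ~0.1% of samples with known target label
trigger = 0.5 * np.ones(16)                # trigger pattern (fixed here for illustration)
mask = np.zeros(16)
mask[-4:] = 1.0                            # stamp only the last 4 features

X_p = poison_partition(X_adv, known_target_idx, trigger, mask)
```

During VFL training these poisoned partitions flow through the adversary's bottom model as usual, so the server-side top model learns to associate the trigger pattern with the target class; the TGA phase would then fine-tune the trigger by aligning its gradient with the target class.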
Keywords
Vertical Federated Learning, Backdoor Attack, Financial Artificial Intelligence Security, Clean Backdoor Poisoning, Target Gradient Alignment