Membership Inference Attacks Against Deep Learning Models via Logits Distribution

IEEE Transactions on Dependable and Secure Computing (2023)

Abstract
Deep Learning (DL) techniques have gained significant importance in recent years due to their broad range of applications. However, DL remains vulnerable to several attacks, such as the Membership Inference Attack (MIA), which exploits the model's memorization of its training data. An MIA aims to determine whether specific data were present in a model's training dataset by using a substitute model whose structure is similar to that of the objective model. Because MIAs rely on such a substitute, they can be mitigated when the adversary does not know the network structure of the objective model. To address this shadow-model construction challenge, this work presents L-Leaks, a membership inference attack based on logits. L-Leaks allows an adversary to use the substitute model's information to predict membership, provided the shadow and objective models are sufficiently similar. Here, the substitute model is built by learning the logits of the objective model, which makes it similar enough and gives it high confidence on the objective model's member samples. The evaluation of the attack's success shows that the proposed technique executes the attack more accurately than existing techniques, and that it remains robust across different network models and datasets.
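The core of L-Leaks, as the abstract describes it, has two steps: distill the objective model's logits into a substitute model, then flag samples on which the substitute is highly confident as members. Below is a minimal PyTorch sketch of that idea; the function names (build_substitute, infer_membership), the MSE logit-matching loss, and the fixed confidence threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a logits-distillation-based membership inference attack
# in the spirit of L-Leaks. Model architectures, loss choice, and threshold
# are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def build_substitute(target: nn.Module, substitute: nn.Module,
                     aux_loader, epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    """Train `substitute` to mimic the objective model's logits on auxiliary data."""
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    target.eval()
    substitute.train()
    for _ in range(epochs):
        for x, _ in aux_loader:
            with torch.no_grad():
                t_logits = target(x)               # query the objective model
            s_logits = substitute(x)
            loss = F.mse_loss(s_logits, t_logits)  # logit-matching (distillation) loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return substitute

def infer_membership(substitute: nn.Module, x: torch.Tensor,
                     threshold: float = 0.9) -> torch.Tensor:
    """Predict 'member' when the substitute is highly confident on x,
    exploiting the confidence it inherits on the target's training samples."""
    substitute.eval()
    with torch.no_grad():
        conf = F.softmax(substitute(x), dim=1).max(dim=1).values
    return conf >= threshold  # boolean membership prediction per sample
```

In practice the threshold would presumably be calibrated on samples known to be non-members, or the simple confidence test replaced by an attack classifier trained on the full logits vector; the sketch uses a fixed cutoff only to keep the idea self-contained.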
Keywords
Data models, Training, Deep learning, Computational modeling, Federated learning, Data privacy, Predictive models, membership inference attacks (MIAs), logits, substitute model