Defending Against Model Stealing Attacks With Adaptive Misinformation

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

Cited by 85 | Views 152
Abstract
Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allow a data-limited adversary with no knowledge of the training dataset to clone the functionality of a target model using only black-box query access. Such attacks are typically carried out by querying the target model with inputs that are synthetically generated or sampled from a surrogate dataset, in order to construct a labeled dataset. The adversary can use this labeled dataset to train a clone model, which achieves a classification accuracy comparable to that of the target model. We propose "Adaptive Misinformation" to defend against such model stealing attacks. We identify that all existing model stealing attacks invariably query the target model with Out-Of-Distribution (OOD) inputs. By selectively sending incorrect predictions for OOD queries, our defense substantially degrades the accuracy of the attacker's clone model (by up to 40%), while minimally impacting the accuracy (< 0.5%) for benign users. Compared to existing defenses, our defense has a significantly better security vs. accuracy trade-off and incurs minimal computational overhead.
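The serve-time logic described above can be sketched in a few lines. Below is a minimal, hypothetical PyTorch illustration, not the paper's exact implementation: it assumes the maximum softmax probability is used as the OOD score (the paper may use a different detector), and blends the target model's prediction with the output of a separately trained misinformation model. All names and parameters (`target_model`, `misinfo_model`, `tau`, `nu`) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def serve_prediction(target_model, misinfo_model, x, tau=0.9, nu=1000.0):
    """Adaptive Misinformation sketch (hypothetical, not the paper's exact code).

    For likely in-distribution queries, return the target model's prediction;
    for likely OOD queries (as seen in model stealing attacks), return the
    output of a separately trained misinformation model instead.
    """
    with torch.no_grad():
        probs = F.softmax(target_model(x), dim=1)
        # Assumed OOD score: maximum softmax probability (low => likely OOD).
        msp = probs.max(dim=1).values
        # Smooth gate: alpha ~ 1 for in-distribution, ~ 0 for OOD queries.
        # tau is the detection threshold, nu controls the gate's sharpness.
        alpha = torch.sigmoid(nu * (msp - tau)).unsqueeze(1)
        # Misinformation model produces deliberately incorrect predictions.
        misinfo = F.softmax(misinfo_model(x), dim=1)
        return alpha * probs + (1 - alpha) * misinfo
```

Benign queries (high max-softmax probability) pass through nearly unchanged, which is consistent with the reported < 0.5% accuracy impact for benign users, while OOD attack queries receive misinformation that poisons the adversary's labeled dataset.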
Keywords
model stealing attacks,Adaptive Misinformation,training dataset,black-box query access,labeled dataset,clone model,attacker,OOD queries,out-of-distribution inputs,deep neural networks,attacker clone model