Weibo:
We introduced an attack, based on a novel substitute training algorithm using synthetic data generation, to craft adversarial examples misclassified by black-box deep neural networks
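
As a rough illustration of the substitute training algorithm mentioned in this summary, the sketch below (not the authors' code) alternates between labelling synthetic inputs through black-box queries and fitting a local substitute model, then grows the synthetic set with a Jacobian-based augmentation step. The helper names query_oracle and substitute, and the values of lam, the learning rate, and the round/epoch counts, are assumptions made only for illustration.

# Minimal sketch (not the authors' code) of substitute training with
# synthetic data generation for a black-box attack.
# Assumed helpers: query_oracle(x) returns the black-box model's hard labels
# as a LongTensor; substitute is a small local torch.nn.Module.
import torch
import torch.nn.functional as F

def jacobian_augment(substitute, X, oracle_labels, lam=0.1):
    # Create new synthetic points by stepping along the sign of the
    # substitute's input gradient for the oracle-assigned class
    # (Jacobian-based dataset augmentation); inputs assumed in [0, 1].
    X = X.clone().detach().requires_grad_(True)
    logits = substitute(X)
    selected = logits.gather(1, oracle_labels.view(-1, 1)).sum()
    grad = torch.autograd.grad(selected, X)[0]
    X_new = (X.detach() + lam * grad.sign()).clamp(0.0, 1.0)
    return torch.cat([X.detach(), X_new], dim=0)

def train_substitute(substitute, query_oracle, X_init, rounds=5, epochs=10, lr=1e-2):
    # Alternate between (1) labelling the current synthetic set through
    # black-box queries and (2) fitting the substitute, then augment it.
    X = X_init
    for _ in range(rounds):
        y = query_oracle(X)
        opt = torch.optim.SGD(substitute.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            loss = F.cross_entropy(substitute(X), y)
            loss.backward()
            opt.step()
        X = jacobian_augment(substitute, X, y)
    return substitute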

Practical Black-Box Attacks against Machine Learning.

AsiaCCS, pp. 506-519, 2017

Cited by: 2041 | Views: 428
EI

Abstract

Machine learning (ML) models, e.g., deep neural networks (DNNs), are vulnerable to adversarial examples: malicious inputs modified to yield erroneous model outputs, while appearing unmodified to human observers. Potential attacks include having malicious content like malware identified as legitimate or controlling vehicle behavior. Yet, a…


Highlights
  • A second oracle, trained locally on the German Traffic Signs Recognition Benchmark (GTSRB) [13], can be forced to misclassify more than 64.24% of altered inputs without affecting human recognition
  • The attack succeeds with success rates above 98.98% and transferability rates between 64.24% and 69.03% for ε = 0.3, a perturbation magnitude that remains hard for humans to notice (see the sketch after this list)
  • The substitute's accuracy cannot be improved further because its training is fully automated
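
The ε = 0.3 figure above refers to the magnitude of the input perturbation. As a rough, hypothetical illustration of how such transferability numbers are measured, the sketch below crafts adversarial examples on the locally trained substitute with the fast gradient sign method and reports how often the black-box oracle misclassifies them; query_oracle and substitute are the same assumed helpers as in the earlier sketch, and this is not the authors' evaluation code.

# Sketch of adversarial-example crafting with the fast gradient sign
# method on the substitute, followed by a transferability check against
# the black-box oracle (same assumed helpers as in the sketch above).
import torch
import torch.nn.functional as F

def fgsm(substitute, X, y, eps=0.3):
    # Perturb each input by eps in the direction that increases the
    # substitute's loss for its current label; inputs assumed in [0, 1].
    X = X.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(X), y)
    grad = torch.autograd.grad(loss, X)[0]
    return (X.detach() + eps * grad.sign()).clamp(0.0, 1.0)

def transfer_rate(query_oracle, X_adv, y_true):
    # Fraction of adversarial examples that the black-box oracle misclassifies.
    preds = query_oracle(X_adv)
    return (preds != y_true).float().mean().item()

# Usage sketch (X_test, y_test are clean inputs and their true labels):
#   X_adv = fgsm(substitute, X_test, y_test, eps=0.3)
#   print(transfer_rate(query_oracle, X_adv, y_test))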