GenAttack: practical black-box attacks with gradient-free optimization

GECCO 2019, pp. 1111–1119.

Abstract:

Deep neural networks are vulnerable to adversarial examples, even in the black-box setting, where the attacker is restricted solely to query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or performing gradient estimation…
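The "gradient-free optimization" in the title refers to a genetic-algorithm-style search that needs only loss queries, not gradients. The following is a minimal illustrative sketch of that general idea, not the paper's exact algorithm (function names, hyperparameters, and the toy loss are all assumptions for illustration):

```python
import random

def gen_attack(loss, dim, pop_size=20, generations=100,
               step=0.1, mutation_rate=0.3, seed=0):
    """Toy gradient-free search: minimize a black-box loss using only
    function queries. Illustrative sketch, not GenAttack as published."""
    rng = random.Random(seed)
    # Initialize a population of candidate perturbation vectors.
    pop = [[rng.uniform(-step, step) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scores = [loss(p) for p in pop]          # one query per candidate
        ranked = sorted(range(pop_size), key=lambda i: scores[i])
        elite = pop[ranked[0]]                   # carry the best member over
        parents = [pop[i] for i in ranked[: pop_size // 2]]
        children = [elite]
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # Uniform crossover followed by per-gene mutation.
            child = [(x if rng.random() < 0.5 else y) for x, y in zip(a, b)]
            child = [g + rng.uniform(-step, step)
                     if rng.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = children
    return min(pop, key=loss)

# Usage: recover a small perturbation minimizing a toy quadratic "model loss"
# (a stand-in for the attacker's query-only access to the victim model).
target = [0.05, -0.03, 0.02]
best = gen_attack(lambda p: sum((x - t) ** 2 for x, t in zip(p, target)),
                  dim=3)
```

The key property the paper exploits is visible even in this sketch: each generation costs one query per population member, so the total query budget is `pop_size * generations` rather than the per-coordinate cost of finite-difference gradient estimation.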
