GenAttack: practical black-box attacks with gradient-free optimization
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), pp. 1111–1119, 2019.
Deep neural networks are vulnerable to adversarial examples, even in the black-box setting, where the attacker is restricted solely to query access. Existing black-box approaches to generating adversarial examples typically require a significant number of queries, either for training a substitute network or performing gradient estimation.
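The abstract describes attacking a model through queries alone, without gradients. Below is a minimal sketch of that idea using a simple genetic algorithm in the spirit of GenAttack; the function `genattack`, its hyperparameters (`pop_size`, `delta`, `alpha`, `rho`), and the toy `query_fn` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def genattack(query_fn, x, target, pop_size=6, steps=200,
              delta=0.3, alpha=0.15, rho=0.1, rng=None):
    """Gradient-free targeted black-box attack via a genetic algorithm.

    query_fn(x) -> probability vector; this is the only model access.
    Perturbations are kept inside an L_inf ball of radius delta.
    (Simplified sketch; hyperparameters are illustrative.)
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    # Population of random perturbations inside the L_inf ball.
    pop = rng.uniform(-delta, delta, size=(pop_size,) + x.shape)
    best = pop[0]
    for _ in range(steps):
        # Fitness: log-probability of the target class for each candidate.
        fits = np.array([np.log(query_fn(np.clip(x + p, 0, 1))[target] + 1e-12)
                         for p in pop])
        best = pop[fits.argmax()]
        if query_fn(np.clip(x + best, 0, 1)).argmax() == target:
            return np.clip(x + best, 0, 1)  # attack succeeded
        # Fitness-proportional selection probabilities.
        probs = np.exp(fits - fits.max())
        probs /= probs.sum()
        children = [best]  # elitism: the best member always survives
        while len(children) < pop_size:
            i, j = rng.choice(pop_size, size=2, p=probs)
            # Uniform crossover biased toward the fitter parent.
            mask = rng.random(x.shape) < probs[i] / (probs[i] + probs[j])
            child = np.where(mask, pop[i], pop[j])
            # Sparse mutation: perturb a random subset of coordinates.
            mut = rng.random(x.shape) < rho
            child = np.clip(child + mut * rng.uniform(-alpha, alpha, x.shape),
                            -delta, delta)
            children.append(child)
        pop = np.array(children)
    return np.clip(x + best, 0, 1)
```

Because fitness is computed only from `query_fn` outputs, the loop matches the query-access threat model described in the abstract: no gradients or model internals are used.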