Physical Black-Box Adversarial Attacks Through Transformations

IEEE Transactions on Big Data (2023)

Abstract
Deep learning has shown impressive performance in numerous applications. However, recent studies have found that deep learning models are vulnerable to adversarial attacks, in which an attacker adds imperceptible perturbations to benign samples to induce misclassification. Adversarial attacks in the digital domain focus on constructing imperceptible perturbations, but these perturbations are often far less effective in the physical world because they may be destroyed when captured by a camera. Most physical adversarial attacks instead require attaching visible adversarial features (e.g., a sticker or a laser) to the target object, which may be noticed by human observers. In this work, we propose to employ image transformations to generate more natural adversarial samples in the physical world. Concretely, we propose two attack algorithms to satisfy different attack goals: Efficient-AATR employs a greedy strategy to generate adversarial samples with fewer queries, while Effective-AATR employs an adaptive particle swarm optimization algorithm to search for the most effective adversarial samples within a given query budget. Extensive experiments demonstrate the superiority of our attacks over state-of-the-art adversarial attacks under mainstream defenses.
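
The abstract only names the two search strategies, so the following is a minimal, hypothetical sketch of how a greedy, query-based attack over natural image transformations (in the spirit of Efficient-AATR) might be organized. The transformation set (rotation, brightness, contrast), the step sizes, and the `query_model` / `model.predict_proba` interface are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a greedy black-box attack via image transformations.
# All parameter choices and the model interface below are assumptions made
# for illustration; the paper's algorithms are not specified in the abstract.
import numpy as np
from PIL import Image, ImageEnhance


def query_model(model, image: Image.Image, true_label: int) -> float:
    """Return the model's confidence in the true label (lower is better for
    the attacker). `model.predict_proba` is an assumed interface."""
    x = np.asarray(image, dtype=np.float32)[None] / 255.0
    return float(model.predict_proba(x)[0, true_label])


def apply_transform(image: Image.Image, params: dict) -> Image.Image:
    """Apply a small set of natural-looking transformations (rotation,
    brightness, contrast) controlled by `params`."""
    out = image.rotate(params["angle"], resample=Image.BILINEAR)
    out = ImageEnhance.Brightness(out).enhance(params["brightness"])
    out = ImageEnhance.Contrast(out).enhance(params["contrast"])
    return out


def greedy_transform_attack(model, image, true_label, max_queries=200):
    """Greedily adjust one transformation parameter at a time, keeping any
    change that lowers the true-label confidence, until the query budget is
    exhausted or no single-step change helps. In practice one would also stop
    as soon as the predicted class flips."""
    params = {"angle": 0.0, "brightness": 1.0, "contrast": 1.0}
    steps = {"angle": 5.0, "brightness": 0.1, "contrast": 0.1}
    best_score = query_model(model, image, true_label)
    queries = 1
    while queries < max_queries:
        improved = False
        for key in params:
            for direction in (+1, -1):
                candidate = dict(params)
                candidate[key] += direction * steps[key]
                score = query_model(
                    model, apply_transform(image, candidate), true_label
                )
                queries += 1
                if score < best_score:
                    params, best_score, improved = candidate, score, True
                if queries >= max_queries:
                    break
            if queries >= max_queries:
                break
        if not improved:
            break  # local optimum under the current step sizes
    return apply_transform(image, params), params, best_score
```

Effective-AATR, as described, would replace this coordinate-wise greedy loop with an adaptive particle swarm search over the same transformation parameters, trading extra queries for stronger adversarial samples.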
Key words
Black-box attack, deep learning, physical adversarial attack