Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks

IEEE Transactions on Control Systems Technology (2022)

Abstract
Convolutional and recurrent neural networks (RNNs) have been widely used to achieve state-of-the-art performance on classification tasks. However, it has also been noted that these networks can be manipulated adversarially with relative ease, by carefully crafted additive perturbations to the input. While several prior works have experimentally established methods for crafting and defending against such attacks, rigorous theoretical analyses are also desirable to illuminate the conditions under which adversarial inputs exist. This article provides both the theory and supporting experiments for real-time attacks. The focus is specifically on recurrent architectures, and inspiration is drawn from dynamical systems theory to naturally cast this as a control problem, allowing dynamic computation of adversarial perturbations at each timestep of the input sequence, thus resembling a feedback controller. Illustrative examples are provided to supplement the theoretical discussions.
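The abstract's core idea — computing the perturbation online at each timestep from the attacked trajectory's current state, like a feedback law — can be sketched with a toy example. This is not the paper's control-synthesis method; it is a minimal FGSM-style greedy approximation, assuming a small Elman RNN, a scalar readout score `v . h` the attacker pushes up, and an illustrative perturbation bound `eps` (all names and sizes are hypothetical):

```python
import numpy as np

# Toy Elman RNN: h_t = tanh(W h_{t-1} + U x_t); s = v . h is a scalar score
# the attacker wants to increase. All names and sizes are illustrative.
rng = np.random.default_rng(0)
d_h, d_x, T = 4, 3, 6
W = 0.3 * rng.normal(size=(d_h, d_h))
U = 0.3 * rng.normal(size=(d_h, d_x))
v = rng.normal(size=d_h)

def step(h, x):
    return np.tanh(W @ h + U @ x)

def greedy_perturbation(h, x, eps):
    """FGSM-style greedy step computed online from the current hidden state:
    move x_t in the sign of d(v . h_next)/dx_t, bounded entrywise by eps."""
    h_next = np.tanh(W @ h + U @ x)
    grad_x = U.T @ (v * (1.0 - h_next ** 2))  # chain rule through tanh
    return eps * np.sign(grad_x)

xs = rng.normal(size=(T, d_x))
h_clean = np.zeros(d_h)
h_adv = np.zeros(d_h)
for x in xs:
    # the perturbation depends on the attacked trajectory's own state,
    # so the attack acts as a feedback law along the input sequence
    h_adv = step(h_adv, x + greedy_perturbation(h_adv, x, eps=0.1))
    h_clean = step(h_clean, x)

print("clean score:", v @ h_clean, " perturbed score:", v @ h_adv)
```

The greedy per-step objective here stands in for the paper's dynamic computation; the key structural point it illustrates is that the perturbation at time t is a function of the state reached under past perturbations, not precomputed offline.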
Keywords
Adversarial examples, control synthesis, dynamical systems, recurrent neural network (RNN)