Targeted Attacks on Deep Reinforcement Learning Agents through Adversarial Observations
arXiv: Learning, 2019.
This paper deals with adversarial attacks on the perception of neural network policies in the Reinforcement Learning (RL) context. While previous approaches perform untargeted attacks on the agent's state, we propose a method to perform targeted attacks that lure an agent into consistently following a desired policy. We place ourselves ...