Targeted Attacks on Deep Reinforcement Learning Agents through Adversarial Observations

Léonard Hussenot

arXiv preprint, 2019.


Abstract:

This paper deals with adversarial attacks on the observations of neural-network policies in the Reinforcement Learning (RL) context. While previous approaches perform untargeted attacks on the agent's state, we propose a method for targeted attacks that lure an agent into consistently following a desired policy. We place ourselves …
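To make the abstract's idea concrete, below is a minimal sketch of a *targeted* observation attack in the spirit described: the adversary perturbs the observation so that the victim policy picks an adversary-chosen action, here via a targeted FGSM-style step. The linear-softmax policy, its weights, and all names are illustrative assumptions, not the paper's actual method or models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-softmax victim policy pi(a|s) = softmax(W s);
# a stand-in for a trained network (illustrative, not from the paper).
n_actions, obs_dim = 4, 16
W = rng.normal(size=(n_actions, obs_dim))

def policy(s):
    """Action distribution of the victim policy for observation s."""
    z = W @ s
    z -= z.max()                       # numerical stability
    e = np.exp(z)
    return e / e.sum()

def targeted_attack(s, target, eps=0.05):
    """One targeted FGSM-style step on the observation.

    Loss = -log pi(target | s); for this linear-softmax policy the
    input gradient is dLoss/ds = W^T (pi(.|s) - onehot(target)).
    Stepping against its sign raises the target action's probability.
    """
    p = policy(s)
    onehot = np.eye(n_actions)[target]
    grad = W.T @ (p - onehot)
    return s - eps * np.sign(grad)     # descend the targeted loss

s = rng.normal(size=obs_dim)           # clean observation
target = 2                             # adversary's desired action
s_adv = targeted_attack(s, target)
p_clean = policy(s)[target]
p_adv = policy(s_adv)[target]
```

Repeating such a step at every timestep is one way an adversary could steer the agent toward a desired policy rather than merely degrading it, which is the distinction the abstract draws from untargeted attacks.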

