Stealing Deep Reinforcement Learning Models for Fun and Profit

Chen Kangjie
Zhang Tianwei

Abstract:

In this paper, we present the first attack methodology to extract black-box Deep Reinforcement Learning (DRL) models solely from the actions they take in the environment. Model extraction attacks against supervised Deep Learning models have been widely studied. However, those techniques cannot be applied to the reinforcement learning scenario d…
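The abstract describes extracting a DRL policy purely from its observed actions. One common way to realize such an extraction is behavioral cloning: query the black-box policy on states, record its actions, and fit a surrogate policy to imitate them. The sketch below illustrates this idea under stated assumptions; the target policy, its linear form, and all dimensions here are hypothetical stand-ins, not the paper's actual method or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target: the attacker can only query its actions,
# not inspect its parameters (W_true stands in for hidden weights).
W_true = rng.normal(size=(4, 2))

def target_policy(state):
    return int(np.argmax(state @ W_true))

# Step 1: observe state-action pairs from the target's interactions.
states = rng.normal(size=(2000, 4))
actions = np.array([target_policy(s) for s in states])

# Step 2: behavioral cloning -- fit a surrogate policy (softmax regression)
# to imitate the recorded actions via gradient descent on cross-entropy.
W = np.zeros((4, 2))
for _ in range(300):
    logits = states @ W
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    onehot = np.eye(2)[actions]
    grad = states.T @ (probs - onehot) / len(states)
    W -= 0.5 * grad

# Step 3: measure fidelity -- how often the surrogate's action matches
# the target's on fresh states it has never queried before.
test_states = rng.normal(size=(500, 4))
agree = np.mean([int(np.argmax(s @ W)) == target_policy(s)
                 for s in test_states])
print(f"action agreement: {agree:.2f}")
```

In this toy setting the surrogate recovers the target's decision behavior almost exactly; a real DRL extraction attack must additionally cope with sequential, correlated observations and a far more complex policy class.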
