SoK: Adversarial Machine Learning Attacks and Defences in Multi-Agent Reinforcement Learning

arXiv (2023)

Abstract
Multi-Agent Reinforcement Learning (MARL) is vulnerable to Adversarial Machine Learning (AML) attacks and needs adequate defences before it can be used in real-world applications. We have conducted a survey of execution-time AML attacks against MARL and the defences against those attacks. We surveyed related work on the application of AML in Deep Reinforcement Learning (DRL) and Multi-Agent Learning (MAL) to inform our analysis of AML for MARL. We propose a novel perspective for understanding how an AML attack is perpetrated by defining Attack Vectors. We develop two new frameworks to address a gap in current modelling frameworks, focusing on the means and tempo of an AML attack against MARL, and we identify knowledge gaps and future avenues of research.
Keywords
adversarial machine learning attacks, reinforcement learning, defences, multi-agent