Navigation as Attackers Wish? Towards Building Robust Embodied Agents under Federated Learning
arXiv (2022)
Abstract
Federated embodied agent learning protects the data privacy of individual
visual environments by keeping data local to each client (an individual
environment) during training. However, because the local data is inaccessible
to the server under federated learning, an attacker can poison the training
data of a local client and implant a backdoor in the agent without being
noticed. Deploying such an agent raises the risk of harm to humans, since the
attacker can navigate and control the agent as they wish via the backdoor.
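To make the setting concrete, below is a minimal sketch of the federated training loop described above, assuming a linear model and plain federated averaging; the names (local_step, CLIENTS, the least-squares objective) are illustrative assumptions, not details from the paper.

```python
# Minimal federated-averaging sketch: each client trains on private data,
# the server only ever sees model updates.
import numpy as np

rng = np.random.default_rng(0)
DIM, CLIENTS, ROUNDS, LR = 8, 4, 5, 0.1

# Each client keeps its (features, labels) locally; the server never sees them.
local_data = [(rng.normal(size=(32, DIM)), rng.normal(size=32))
              for _ in range(CLIENTS)]
global_w = np.zeros(DIM)

def local_step(w, X, y, lr=LR):
    """One gradient step of least-squares on the client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

for _ in range(ROUNDS):
    # Clients train locally and upload only their updated weights.
    updates = [local_step(global_w.copy(), X, y) for X, y in local_data]
    # The server averages the updates (FedAvg) without inspecting local data.
    global_w = np.mean(updates, axis=0)
```

Because the server only averages updates, it has no direct way to tell whether a client's local data was tampered with, which is the vulnerability studied next.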
Towards Byzantine-robust federated embodied agent learning, we study attack
and defense for the task of vision-and-language navigation (VLN), in which an
agent must follow natural language instructions to navigate indoor
environments. First, we introduce a simple but effective attack strategy,
Navigation as Wish (NAW), in which a malicious client manipulates its local
trajectory data to implant a backdoor into the global model. Results on two
VLN datasets (R2R and RxR) show that NAW lets the attacker steer the deployed
VLN agent regardless of the language instruction, without affecting its
performance on normal test sets.
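The sketch below illustrates one way such trajectory poisoning could look. The abstract does not specify NAW's actual trigger design, so every detail here (the trigger pattern, the attacker-chosen TARGET_ACTION, the trajectory format) is a hedged assumption for illustration only.

```python
# Hypothetical NAW-style poisoning: a malicious client stamps a trigger into
# some observations and relabels those steps with the attacker's action, so
# the trained agent learns "trigger present => take the attacker's action".
import numpy as np

rng = np.random.default_rng(1)
TRIGGER = np.full(4, 9.0)   # hypothetical visual trigger pattern
TARGET_ACTION = 3           # attacker-chosen action, independent of instruction

def make_trajectory(steps=5, obs_dim=4):
    """A benign trajectory: (observation, expert_action) pairs."""
    return [(rng.normal(size=obs_dim), int(rng.integers(0, 4)))
            for _ in range(steps)]

def poison(trajectory, rate=0.5):
    """Malicious client: inject the trigger and relabel a fraction of steps."""
    poisoned = []
    for obs, act in trajectory:
        if rng.random() < rate:
            poisoned.append((obs + TRIGGER, TARGET_ACTION))  # backdoor sample
        else:
            poisoned.append((obs, act))                      # clean sample
    return poisoned

benign = make_trajectory()
malicious = poison(benign)
```

Keeping most samples clean is what lets the backdoored model remain accurate on normal test sets while still obeying the trigger.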
We then propose a new defense, Prompt-Based Aggregation (PBA), against the
NAW attack in federated VLN. PBA provides the server with a "prompt" that
exposes the variance in vision-and-language alignment between benign and
malicious clients, so the two can be distinguished during training. We
validate that PBA protects the global model from the NAW attack,
outperforming other state-of-the-art defense methods by a large margin on the
defense metrics for R2R and RxR.
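The following is a minimal sketch in the spirit of prompt-based aggregation, assuming the server can score each uploaded update against a shared probe ("prompt") and drop outliers before averaging. The alignment score, the probe itself, and the median-based filter are all illustrative assumptions rather than the paper's exact mechanism.

```python
# Sketch of prompt-based filtering: a backdoored update shifts the alignment
# distribution with respect to the server's probe, so it stands out.
import numpy as np

def alignment_score(update, prompt):
    """Hypothetical alignment score between an update and the server-side
    vision-and-language probe (here, a simple dot product)."""
    return float(update @ prompt)

def pba_aggregate(updates, prompt, threshold=3.0):
    scores = np.array([alignment_score(u, prompt) for u in updates])
    # Clients whose alignment deviates strongly from the median (in units of
    # the median absolute deviation) are treated as suspicious and excluded.
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-8
    keep = np.abs(scores - med) / mad < threshold
    return np.mean([u for u, k in zip(updates, keep) if k], axis=0)

rng = np.random.default_rng(2)
prompt = rng.normal(size=8)
benign = [rng.normal(size=8) * 0.1 for _ in range(4)]
malicious = [rng.normal(size=8) * 0.1 + 5.0 * prompt]  # shifted alignment
global_update = pba_aggregate(benign + malicious, prompt)
```

In this toy run the malicious update's alignment score is far from the benign median, so it is filtered out and only the benign clients contribute to the aggregated model.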