Explaining robot policies

2021

Abstract
In order to interact with a robot or make wise decisions about where and how to deploy it in the real world, humans need an accurate mental model of how the robot acts in different situations. We propose to improve users' mental models by showing them examples of how the robot behaves in informative scenarios, and we explore this in two settings. First, we show that when there are many possible environment states, users understand the robot's policy more quickly if they are shown "critical states," states where taking a particular action is important. Second, we show that when there is a distribution shift between the training and test environment distributions, it is more effective to show "counterfactual states": states outside the training distribution that the robot does not visit naturally and that reveal its behavior in more diverse settings.
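The abstract does not specify how either kind of state is selected. Below is a minimal sketch of one plausible reading, assuming critical states are found via a Q-value gap and counterfactual states via distance from the training data; the q_values function, both thresholds, and the distance heuristic are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def critical_states(states, q_values, gap_threshold=1.0):
    """Select states where the choice of action matters most.

    Assumed criterion: the best action's Q-value exceeds the mean
    Q-value by more than gap_threshold (the abstract does not give
    the exact test used in the paper).
    """
    return [s for s in states
            if np.max(q_values(s)) - np.mean(q_values(s)) > gap_threshold]

def counterfactual_states(candidates, training_states, dist_threshold=2.0):
    """Select states the robot does not visit naturally.

    Assumed criterion: Euclidean distance from the nearest state in
    the training data exceeds dist_threshold, a simple stand-in for
    "outside the training distribution".
    """
    train = np.asarray(training_states)
    return [s for s in candidates
            if np.min(np.linalg.norm(train - np.asarray(s), axis=1)) > dist_threshold]
```

Both selectors return a short list of states that could then be rendered (e.g., as clips of the policy acting in each state) and shown to users to refine their mental model.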
Keywords
deep reinforcement learning, explainable artificial intelligence, transparency