Uncertainty Aware Learning from Demonstrations in Multiple Contexts using Bayesian Neural Networks

2019 International Conference on Robotics and Automation (ICRA)

Abstract
Diversity of environments is a key challenge that causes learned robotic controllers to fail due to discrepancies between training and evaluation conditions. Training from demonstrations collected in varied conditions can mitigate, but not completely prevent, such failures. Learned controllers such as neural networks typically lack a notion of uncertainty that would allow diagnosing a mismatch between training and testing conditions and, potentially, intervening. In this work, we propose to use Bayesian Neural Networks, which provide such a notion of uncertainty. We show that uncertainty can be leveraged to consistently detect situations in high-dimensional simulated and real robotic domains in which the learned controller would perform poorly. We also show that such an uncertainty-based approach enables an informed decision about when to invoke a fallback strategy. One fallback strategy is to request more data. We empirically show that providing data only when requested increases data efficiency.
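The abstract describes an uncertainty-gated decision: act when the controller is confident, invoke a fallback (e.g., request more demonstrations) when it is not. Below is a minimal sketch of that idea, not the authors' implementation: it uses Monte Carlo dropout in PyTorch as one common approximation to a Bayesian Neural Network, and the network sizes, threshold, and fallback call are illustrative assumptions.

```python
# Hedged sketch: uncertainty-gated control with an approximately Bayesian
# network. MC dropout stands in for the paper's Bayesian Neural Network;
# Policy, UNCERTAINTY_THRESHOLD, and the fallback are illustrative only.
import torch
import torch.nn as nn


class Policy(nn.Module):
    """Small controller network with dropout layers we keep stochastic at test time."""

    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Dropout(p=0.1),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
            nn.Linear(64, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


@torch.no_grad()
def act_with_uncertainty(policy: Policy, obs: torch.Tensor, n_samples: int = 20):
    """Run several stochastic forward passes; the spread of the sampled
    actions serves as an epistemic-uncertainty estimate."""
    policy.train()  # keep dropout active during inference
    samples = torch.stack([policy(obs) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0).mean().item()


# Illustrative decision loop: the threshold value is a hypothetical
# hyperparameter, not one reported in the paper.
UNCERTAINTY_THRESHOLD = 0.05

policy = Policy(obs_dim=10, act_dim=4)
obs = torch.randn(1, 10)
action, uncertainty = act_with_uncertainty(policy, obs)
if uncertainty > UNCERTAINTY_THRESHOLD:
    print(f"uncertain ({uncertainty:.3f}): invoke fallback, e.g. request a demonstration")
else:
    print(f"confident ({uncertainty:.3f}): execute action {action}")
```

In this framing, "providing data only when requested" corresponds to collecting new demonstrations only on the high-uncertainty branch, which is where the abstract's data-efficiency gain comes from.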
Keywords
Bayesian neural networks,robotic controllers,evaluation conditions,learned controller,testing conditions,high-dimensional simulated domains,real robotic domains,uncertainty based solution,uncertainty aware learning