Learning Robotic Manipulation Tasks through Visual Planning

arXiv (2021)

Abstract
Multi-step manipulation tasks in unstructured environments are extremely challenging for a robot to learn. Such tasks interlace high-level reasoning, which sequences the intermediate states that must be attained to achieve the overall task, with low-level reasoning, which decides what actions will yield these states. We propose a model-free deep reinforcement learning method to learn these multi-step manipulation tasks. We introduce the Robotic Manipulation Network (RoManNet), a vision-based deep reinforcement learning algorithm that learns the action-value functions and predicts manipulation action candidates. We define a Task Progress based Gaussian (TPG) reward function that computes the reward based on actions that lead to successful motion primitives and on progress toward the overall task goal. We further introduce a Loss Adjusted Exploration (LAE) policy that selects actions from the candidates according to a Boltzmann distribution over loss estimates. We demonstrate the effectiveness of these approaches by training RoManNet to learn several challenging multi-step robotic manipulation tasks. Empirical results show that our method outperforms existing methods and achieves state-of-the-art results. Ablation studies show that TPG and LAE are especially beneficial for tasks such as stacking multiple blocks. Code is available at: https://github.com/skumra/romannet
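As a rough illustration of the two components named above, the sketch below gives one plausible reading of a Gaussian task-progress reward and of Boltzmann sampling over loss estimates. The function names, the `sigma` and `temperature` parameters, and the exact reward shape are assumptions made for illustration, not the paper's definitions; the authors' implementation is in the linked repository.

```python
import numpy as np

def tpg_reward(progress, primitive_success, sigma=0.2):
    """Gaussian-shaped reward on task progress (assumed form): an action
    that completes its motion primitive earns a reward that grows as the
    measured progress toward the overall goal approaches 1."""
    if not primitive_success:
        return 0.0
    return float(np.exp(-((1.0 - progress) ** 2) / (2.0 * sigma ** 2)))

def lae_select(loss_estimates, temperature=1.0, rng=None):
    """Loss Adjusted Exploration (assumed form): sample an action candidate
    from a Boltzmann distribution over per-candidate loss estimates, so
    candidates the model is still uncertain about are explored more often."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(loss_estimates, dtype=float) / temperature
    logits -= logits.max()  # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Example: three candidates; the high-loss candidate is sampled most often.
action = lae_select([0.8, 0.1, 0.4], temperature=0.5)
```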
Keywords

robotic manipulation tasks, visual planning, learning