Deep Predictive Policy Training using Reinforcement Learning

2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017

Cited by 128 | Views 121
Abstract
Skilled robot task learning is best implemented by predictive action policies due to the inherent latency of sensorimotor processes. However, training such predictive policies is challenging as it involves finding a trajectory of motor activations for the full duration of the action. We propose a data-efficient deep predictive policy training (DPPT) framework with a deep neural network policy architecture which maps an image observation to a sequence of motor activations. The architecture consists of three sub-networks referred to as the perception, policy and behavior super-layers. The perception and behavior super-layers force an abstraction of visual and motor data trained with synthetic and simulated training samples, respectively. The policy super-layer is a small sub-network with fewer parameters that maps data in-between the abstracted manifolds. It is trained for each task using methods for policy search reinforcement learning. We demonstrate the suitability of the proposed architecture and learning framework by training predictive policies for skilled object grasping and ball throwing on a PR2 robot. The effectiveness of the method is illustrated by the fact that these tasks are trained using only about 180 real robot attempts with qualitative terminal rewards.
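The abstract's three-part architecture can be sketched as a forward pass: a perception super-layer compresses an image observation onto a low-dimensional manifold, a small policy super-layer maps between the two abstracted manifolds, and a behavior super-layer decodes the result into a full-duration trajectory of motor activations. The minimal NumPy sketch below is illustrative only; the layer sizes, activation choices, and random (untrained) weights are assumptions, not the paper's actual network.

```python
import numpy as np

# Hedged sketch of a DPPT-style forward pass. All dimensions and the use of
# plain affine+tanh layers are illustrative assumptions.
rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    """Random affine layer (weights, bias) standing in for a trained layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1, np.zeros(out_dim)

def forward(x, layers):
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

# Perception super-layer: image observation -> abstract visual state.
perception = [dense(64 * 64, 128), dense(128, 8)]
# Policy super-layer: small sub-network mapping between the two manifolds;
# per the abstract, only this part is trained per task with policy search RL.
policy = [dense(8, 4)]
# Behavior super-layer: abstract motor state -> trajectory of motor
# activations (here a hypothetical 50 timesteps x 7 joints for a PR2-like arm).
T, n_joints = 50, 7
behavior = [dense(4, 64), dense(64, T * n_joints)]

image = rng.random(64 * 64)                 # flattened camera observation
visual_state = forward(image, perception)   # abstraction of visual data
motor_state = forward(visual_state, policy) # task-specific mapping
trajectory = forward(motor_state, behavior).reshape(T, n_joints)
print(trajectory.shape)                     # full-duration motor plan
```

Note how the trainable policy super-layer is tiny (8 -> 4 here) relative to the perception and behavior super-layers, which is what makes per-task training with only ~180 real robot attempts plausible.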
Keywords
skilled robot task learning, predictive action policies, motor activations, data-efficient deep predictive policy training framework, deep neural network policy architecture, behavior super-layers, visual motor data, learning framework, policy search reinforcement learning, policy super-layer, simulated training samples, synthetic training samples