Learning Modular Neural Network Policies for Multi-Task and Multi-Robot Transfer

2017 IEEE International Conference on Robotics and Automation (ICRA)

Cited by 410
Abstract
Reinforcement learning (RL) can automate a wide variety of robotic skills, but learning each new skill requires considerable real-world data collection and manual representation engineering to design policy classes or features. Using deep reinforcement learning to train general purpose neural network policies alleviates some of the burden of manual representation engineering by using expressive policy classes, but exacerbates the challenge of data collection, since such methods tend to be less efficient than RL with low-dimensional, hand-designed representations. Transfer learning can mitigate this problem by enabling us to transfer information from one skill to another and even from one robot to another. We show that neural network policies can be decomposed into "task-specific" and "robot-specific" modules, where the task-specific modules are shared across robots, and the robot-specific modules are shared across all tasks on that robot. This allows for sharing task information, such as perception, between robots and sharing robot information, such as dynamics and kinematics, between tasks. We exploit this decomposition to train mix-and-match modules that can solve new robot-task combinations that were not seen during training. Using a novel neural network architecture, we demonstrate the effectiveness of our transfer method for enabling zero-shot generalization with a variety of robots and tasks in simulation for both visual and non-visual tasks.
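The decomposition described above can be illustrated with a minimal sketch. The module classes, names, and shapes below are purely hypothetical stand-ins (not the paper's actual architecture or code): a task-specific module produces an intermediate code from observations, a robot-specific module turns that code into joint commands, and any trained pair can be composed, which is what enables zero-shot mix-and-match on unseen robot-task combinations.

```python
# Hypothetical sketch of the task/robot module decomposition described in the
# abstract. All names and computations are illustrative placeholders for
# learned neural network modules.

class TaskModule:
    """Maps a task observation (e.g. perception) to an intermediate code."""
    def __init__(self, name):
        self.name = name

    def forward(self, obs):
        # Stand-in for a learned task-specific (perception) network.
        return [x * 2.0 for x in obs]

class RobotModule:
    """Maps the intermediate code to commands for one robot's joints."""
    def __init__(self, name, n_joints):
        self.name = name
        self.n_joints = n_joints

    def forward(self, code):
        # Stand-in for a learned robot-specific (dynamics/kinematics) network.
        s = sum(code)
        return [s / self.n_joints] * self.n_joints

def compose(task, robot):
    """Mix-and-match: any task module can drive any robot module."""
    def policy(obs):
        return robot.forward(task.forward(obs))
    return policy

# Training might cover (reach, 3-link) and (push, 4-link); the unseen
# combination (push, 3-link) is then assembled zero-shot:
push = TaskModule("push")
three_link = RobotModule("3-link", n_joints=3)
zero_shot_policy = compose(push, three_link)
print(zero_shot_policy([0.5, 1.0]))  # one command per joint
```

In the paper's actual setting, the shared interface between the two modules is learned end-to-end across all training robot-task pairs, which is what makes the recombined policies coherent rather than arbitrary.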
Keywords
modular neural network policy learning, multitask multirobot transfer learning, robotic skills, deep reinforcement learning, general purpose modular neural network policy training, RL, task-specific modules, robot-specific modules, robot dynamics, robot kinematics, mix-and-match modules, zero-shot generalization, visual tasks, nonvisual tasks