Legged Robots that Keep on Learning: Fine-Tuning Locomotion Policies in the Real World

IEEE International Conference on Robotics and Automation (2022)

Abstract
Legged robots are physically capable of traversing a wide range of challenging environments, but designing controllers that are sufficiently robust to handle this diversity has been a long-standing challenge in robotics. Reinforcement learning presents an appealing approach for automating the controller design process and has been able to produce remarkably robust controllers when trained in a suitable range of environments. However, it is difficult to predict all likely conditions the robot will encounter during deployment and enumerate them at training-time. What if instead of training controllers that are robust enough to handle any eventuality, we enable the robot to continually learn in any setting it finds itself in? This kind of real-world reinforcement learning poses a number of challenges, including efficiency, safety, and autonomy. To address these challenges, we propose a practical robot reinforcement learning system for fine-tuning locomotion policies in the real world. We demonstrate that a modest amount of real-world training can substantially improve performance during deployment, and this enables a real A1 quadrupedal robot to autonomously fine-tune multiple locomotion skills in a range of environments, including an outdoor lawn and a variety of indoor terrains. (Videos and code: https://sites.google.com/berkeley.edu/fine-tuning-locomotion)
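To make the fine-tuning idea concrete, the sketch below shows one way a policy pretrained in simulation could be updated from transitions collected on the physical robot. This is a minimal illustration, not the paper's implementation: the network sizes, the observation and action dimensions, the file name pretrained_sim_policy.pt, and the advantage-weighted actor update (standing in for a full off-policy actor-critic update) are all assumptions, and critic training, safety limits, and the autonomous reset behavior are omitted.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Hypothetical dimensions for an A1-style quadruped; the real observation and
# action spaces differ, these numbers are placeholders.
OBS_DIM, ACT_DIM = 30, 12


class GaussianPolicy(nn.Module):
    """Small MLP policy producing a diagonal Gaussian over joint targets."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2 * ACT_DIM),  # mean and log-std per action dim
        )

    def forward(self, obs):
        mean, log_std = self.net(obs).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.clamp(-5.0, 2.0).exp())


# Start from weights pretrained in simulation (file name is illustrative).
policy = GaussianPolicy()
policy.load_state_dict(torch.load("pretrained_sim_policy.pt"))
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Real-world transitions are expensive, so keep everything ever collected.
replay = deque(maxlen=100_000)  # items: (obs, action, advantage) tensors


def finetune_step(batch_size=256):
    """One simplified policy-improvement step on real-world data.

    An advantage-weighted log-likelihood objective stands in for the actor
    update of an off-policy actor-critic algorithm; in this sketch the
    advantages are assumed to have been estimated elsewhere.
    """
    obs, act, adv = map(torch.stack, zip(*random.sample(replay, batch_size)))
    log_prob = policy(obs).log_prob(act).sum(dim=-1)
    weights = torch.softmax(adv, dim=0).detach()
    loss = -(weights * log_prob).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In an actual deployment loop of this kind, the robot would alternate between collecting short on-robot episodes with the current policy (recovering from falls so that training can continue without human intervention) and running many such update steps on the accumulated real-world data.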
Keywords
fine-tuning locomotion policies, real-world training, A1 quadrupedal robot, fine-tune multiple locomotion skills, legged robots, robotics, appealing approach, controller design process, remarkably robust controllers, training-time, training controllers, real-world reinforcement learning, practical robot reinforcement learning system