Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
arXiv (2023)
Abstract
We investigate whether Deep Reinforcement Learning (Deep RL) is able to
synthesize sophisticated and safe movement skills for a low-cost, miniature
humanoid robot that can be composed into complex behavioral strategies in
dynamic environments. We used Deep RL to train a humanoid robot with 20
actuated joints to play a simplified one-versus-one (1v1) soccer game. The
resulting agent exhibits robust and dynamic movement skills such as rapid fall
recovery, walking, turning, kicking and more; and it transitions between them
in a smooth, stable, and efficient manner. The agent's locomotion and tactical
behavior adapt to specific game contexts in a way that would be impractical to
manually design. The agent also developed a basic strategic understanding of
the game, and learned, for instance, to anticipate ball movements and to block
opponent shots. Our agent was trained in simulation and transferred to real
robots zero-shot. We found that a combination of sufficiently high-frequency
control, targeted dynamics randomization, and perturbations during training in
simulation enabled good-quality transfer. Although the robots are inherently
fragile, basic regularization of the behavior during training led the robots to
learn safe and effective movements while still performing in a dynamic and
agile way – well beyond what is intuitively expected from the robot. Indeed,
in experiments, they walked 181% faster, turned 302% faster, took 63% less time
to get up, and kicked a ball 34% faster than a scripted baseline, while
efficiently combining the skills to achieve the longer term objectives.
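The abstract attributes successful zero-shot sim-to-real transfer to high-frequency control, targeted dynamics randomization, and perturbations during training. The sketch below illustrates what per-episode dynamics randomization and random pushes typically look like in such a pipeline; all parameter names and ranges here are illustrative assumptions, not the paper's actual values.

```python
import random
from dataclasses import dataclass

@dataclass
class EpisodeDynamics:
    """Physical parameters resampled at the start of each simulated episode,
    so the learned policy cannot overfit to one simulator configuration."""
    joint_friction: float  # multiplier on nominal joint friction (assumed range)
    torso_mass: float      # multiplier on nominal torso mass (assumed range)
    control_delay: int     # sensor-to-actuator delay, in control steps

def sample_dynamics(rng: random.Random) -> EpisodeDynamics:
    # Ranges are hypothetical; a real pipeline would tune them to cover
    # the measured variation of the physical robots.
    return EpisodeDynamics(
        joint_friction=rng.uniform(0.5, 2.0),
        torso_mass=rng.uniform(0.9, 1.1),
        control_delay=rng.randint(0, 2),
    )

def sample_perturbation(rng: random.Random) -> tuple[float, float]:
    """Random external push applied to the torso at random times during
    training, encouraging robust recovery behavior.
    Returns (magnitude in newtons, direction in degrees) - both assumed units."""
    return rng.uniform(0.0, 10.0), rng.uniform(0.0, 360.0)

if __name__ == "__main__":
    rng = random.Random(0)
    dyn = sample_dynamics(rng)
    push = sample_perturbation(rng)
    print(dyn, push)
```

In this scheme, every training episode sees a slightly different robot and random shoves, so a single policy must succeed across the whole distribution; a real robot then looks like just one more sample from it.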