A Reinforcement Learning Method for Motion Control With Constraints on an HPN Arm

IEEE ROBOTICS AND AUTOMATION LETTERS (2022)

Abstract
Soft robotic arms have shown great potential for applications in daily human life, mainly due to their infinite passive degrees of freedom and intrinsic safety. Some everyday tasks, such as delivering a glass of water, require the robot's motion to satisfy certain pose constraints, and such tasks have not yet been accomplished with a soft arm. Because the workspace of a soft arm changes under loads or environmental interaction, these tasks are difficult to accomplish with motion planning methods. In this letter, we propose a Q-learning based approach that achieves constrained motion control directly, under loads and interaction, without planning. We first use Q-learning to generate a controller that operates the arm while satisfying the pose constraints when the arm carries no load and does not interact with the environment. We then introduce a process that adjusts the corresponding Q values in the controller, allowing it to operate the arm under an unknown load or interaction while still satisfying the pose constraints. We implement the approach on our soft arm, the Honeycomb Pneumatic Network (HPN) Arm. Experiments show that the approach remains effective even when the arm encounters untrained situations or is pushed beyond its nominal workspace by the interaction.
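The abstract describes two steps: training a constrained Q-learning controller for the unloaded arm, and then adjusting Q values when an unknown load or interaction changes the dynamics. The sketch below is a minimal tabular illustration of that idea, not the authors' implementation; all names and quantities (N_STATES, N_ACTIONS, pose_constraint_ok, step_env, the reward shaping, and the adjustment rule) are assumptions made for illustration, with states and actions already discretized.

```python
import numpy as np

# Hypothetical discretization of the arm: states are configuration indices,
# actions are actuation commands (e.g. chamber-pressure increments).
N_STATES = 200
N_ACTIONS = 6
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration

Q = np.zeros((N_STATES, N_ACTIONS))

def pose_constraint_ok(state: int) -> bool:
    """Placeholder check that the end-effector pose respects the task
    constraint (e.g. keeping a carried glass of water level)."""
    return True  # replace with a real geometric check

def reward(next_state: int, goal: int) -> float:
    """Encourage reaching the goal while heavily penalizing constraint violations."""
    if not pose_constraint_ok(next_state):
        return -10.0
    return 1.0 if next_state == goal else -0.01

def step_env(state: int, action: int) -> int:
    """Stand-in for the arm or its simulator; returns the next state."""
    return (state + action) % N_STATES  # toy dynamics

def train(goal: int, episodes: int = 5000, horizon: int = 50) -> None:
    """Step 1: learn a constrained controller for the unloaded arm."""
    for _ in range(episodes):
        s = np.random.randint(N_STATES)
        for _ in range(horizon):
            a = (np.random.randint(N_ACTIONS) if np.random.rand() < EPS
                 else int(np.argmax(Q[s])))
            s2 = step_env(s, a)
            Q[s, a] += ALPHA * (reward(s2, goal) + GAMMA * np.max(Q[s2]) - Q[s, a])
            s = s2
            if s == goal:
                break

def adjust_for_load(observed, goal: int, beta: float = 0.5) -> None:
    """Step 2 (rough analogue of the Q-value adjustment): when an unknown load
    shifts the observed transitions, re-anchor the affected Q entries using the
    newly observed (state, action, next_state) triples."""
    for s, a, s2 in observed:
        target = reward(s2, goal) + GAMMA * np.max(Q[s2])
        Q[s, a] += beta * (target - Q[s, a])
```

In this toy version, the constraint enters only through the reward penalty, and the adjustment step simply replays observed transitions with a larger step size; the paper's actual controller and adjustment procedure for the pneumatic HPN Arm will differ in detail.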
Keywords
Machine learning for robot control, modeling, control, learning for soft robots, soft robot applications