Motion Planning From Demonstrations And Polynomial Optimization For Visual Servoing Applications

2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Citations: 7 | Views: 16
Abstract
Vision feedback control techniques are desirable for a wide range of robotics applications due to their robustness to image noise and modeling errors. However, in the case of a robot-mounted camera, they encounter difficulties when the camera must traverse large displacements. This scenario requires continuous visual feedback of the target during the robot's motion, while simultaneously respecting the robot's self- and external constraints. Herein, we propose to combine workspace (Cartesian-space) path planning with robot teach-by-demonstration to address the visibility constraint, joint limits, and "whole arm" collision avoidance for vision-based control of a robot manipulator. User demonstration data generates safe regions for robot motion with respect to joint limits and potential "whole arm" collisions. Our algorithm uses these safe regions to generate new feasible trajectories under a visibility constraint that achieve the desired view of the target (e.g., a pre-grasping location) at new, undemonstrated locations. Experiments with a 7-DOF articulated arm validate the proposed method.
Keywords
polynomials,feedback,visual servoing
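To make the abstract's idea concrete, the sketch below illustrates the general flavor of the approach: a smooth polynomial trajectory between two camera poses, checked at sampled waypoints so the visual target stays inside the camera's field of view. Everything here (the cubic time-scaling, the camera-looks-along-x visibility model, and all numeric values) is an illustrative assumption, not the paper's actual algorithm or safe-region construction.

```python
import numpy as np

def plan_polynomial_trajectory(q0, q1, n_samples=50):
    """Cubic polynomial path from q0 to q1 with zero endpoint velocities.

    A generic smooth-trajectory sketch (Hermite time-scaling), not the
    paper's exact polynomial parameterization.
    """
    t = np.linspace(0.0, 1.0, n_samples)
    # Cubic blend s(t): s(0)=0, s(1)=1, s'(0)=s'(1)=0
    s = 3.0 * t**2 - 2.0 * t**3
    return q0 + np.outer(s, q1 - q0)

def visibility_ok(traj, target, half_fov_rad):
    """Hypothetical visibility check: the camera at each waypoint is
    assumed to look along +x; the target must stay within half_fov_rad
    of that axis at every sample."""
    for p in traj:
        d = target - p
        angle = np.arctan2(np.linalg.norm(d[1:]), d[0])
        if angle > half_fov_rad:
            return False
    return True

# Illustrative start/goal camera positions and a fixed target.
q0 = np.array([0.0, 0.0, 0.0])
q1 = np.array([1.0, 0.2, 0.0])
target = np.array([3.0, 0.1, 0.0])

traj = plan_polynomial_trajectory(q0, q1)
print(visibility_ok(traj, target, np.radians(30)))  # target stays in view
```

In the paper's setting, the feasible trajectory would additionally be constrained to the demonstration-derived safe regions (joint limits, whole-arm collisions); this sketch only shows the visibility side of the constraint set.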