Visual Articulated Tracking In The Presence Of Occlusions

2018 IEEE International Conference on Robotics and Automation (ICRA), 2018

Citations: 6 | Views: 55
Abstract
This paper focuses on visual tracking of a robotic manipulator during manipulation. In this situation, tracking is prone to failure when visual distractions are created by the object being manipulated and by clutter in the environment. Current state-of-the-art approaches, which typically rely on model fitting using Iterative Closest Point (ICP), fail in the presence of distracting data points and are unable to recover. Meanwhile, discriminative methods trained only to distinguish parts of the tracked object can also fail in these scenarios, as data points from the occlusions are incorrectly classified as belonging to the manipulator. We instead propose to use the per-pixel data-to-model associations provided by a random forest to avoid local minima during model fitting. By training the random forest with artificial occlusions, we achieve increased robustness to occlusion and clutter in the scene. We do this without specific knowledge about the type or location of the manipulated object. Our approach is demonstrated by using dense depth data from an RGB-D camera to track a robotic manipulator during manipulation and in the presence of occlusions.
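The core idea in the abstract can be illustrated with a minimal sketch: a random forest classifies each depth pixel as belonging to a manipulator part or to an "other" class (occluder/clutter), and only pixels associated with a part are passed on to the model-fitting step. The features, class layout, and numbers below are synthetic placeholders for illustration, not the paper's actual depth features or training procedure.

```python
# Hedged sketch: per-pixel classification of depth points into manipulator
# parts vs. an occlusion class, so that occluded pixels can be excluded from
# the data-to-model associations used during model fitting.
# All features and cluster locations here are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic training set: 2-D features per pixel (stand-ins for depth cues).
# Classes 0 and 1: two manipulator links; class 2: artificially injected
# occlusion samples, mimicking the paper's training with artificial occlusions.
n = 300
link0 = rng.normal([0.5, 0.2], 0.05, size=(n, 2))
link1 = rng.normal([0.8, 0.3], 0.05, size=(n, 2))
occl = rng.normal([0.65, 0.6], 0.10, size=(n, 2))
X = np.vstack([link0, link1, occl])
y = np.repeat([0, 1, 2], n)

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# At tracking time, pixels predicted as class 2 (occlusion) are dropped before
# the ICP-style model-fitting step, instead of being forced onto the model.
pixels = rng.normal([0.65, 0.6], 0.10, size=(20, 2))
labels = forest.predict(pixels)
keep = pixels[labels != 2]  # only points associated with a link survive
print(len(keep), "of", len(pixels), "pixels kept for model fitting")
```

In this toy setup, most of the sampled pixels fall near the occluder cluster and are filtered out, which is the mechanism by which distracting data points are prevented from pulling the model fit into a local minimum.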
Keywords
RGB-D camera,manipulated object,per-pixel data-to-model associations,tracked object,Iterative Closest Point,visual articulated tracking,robotic manipulator