Fully Convolutional Grasp Detection Network with Oriented Anchor Box

2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

Cited by 166 | Viewed 105
Abstract
In this paper, we present a real-time approach to predict multiple grasping poses for a parallel-plate robotic gripper using RGB images. A model with oriented anchor box mechanism is proposed and a new matching strategy is used during the training process. An end-to-end fully convolutional neural network is employed in our work. The network consists of two parts: the feature extractor and multi-grasp predictor. The feature extractor is a deep convolutional neural network. The multi-grasp predictor regresses grasp rectangles from predefined oriented rectangles, called oriented anchor boxes, and classifies the rectangles into graspable and ungraspable. On the standard Cornell Grasp Dataset, our model achieves an accuracy of 97.74% and 96.61% on image-wise split and object-wise split respectively, and outperforms the latest state-of-the-art approach by 1.74% on image-wise split and 0.51% on object-wise split.
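The abstract describes regressing grasp rectangles from predefined oriented anchor boxes. As a rough illustration only, the sketch below decodes a grasp rectangle (x, y, w, h, θ) from an oriented anchor plus predicted offsets, using the standard anchor-box parameterization common in detection networks; the paper's exact encoding is not given here, so the formulas and names are assumptions.

```python
import math

def decode_grasp(anchor, offsets):
    """Decode a grasp rectangle from an oriented anchor box.

    anchor:  (x, y, w, h, theta) of the predefined oriented anchor.
    offsets: (tx, ty, tw, th, tt) regressed by the network.
    Parameterization is the common detection-style encoding
    (hypothetical here; the paper may use a different one).
    """
    xa, ya, wa, ha, ta = anchor
    tx, ty, tw, th, tt = offsets
    x = xa + tx * wa            # center shift scaled by anchor size
    y = ya + ty * ha
    w = wa * math.exp(tw)       # log-space size regression
    h = ha * math.exp(th)
    theta = ta + tt             # orientation offset from anchor angle
    return (x, y, w, h, theta)

# Zero offsets recover the anchor itself:
# decode_grasp((100, 100, 50, 20, 0.5), (0, 0, 0, 0, 0))
```

The predicted rectangle would then be paired with the graspable/ungraspable classification score to select final grasps.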
Keywords
parallel-plate robotic gripper, RGB images, oriented anchor box mechanism, matching strategy, end-to-end fully convolutional neural network, feature extractor, deep convolutional neural network, multi-grasp predictor, predefined oriented rectangles, anchor boxes, standard Cornell Grasp Dataset, image-wise split, object-wise split, state-of-the-art approach, grasping poses, convolutional grasp detection network