Understanding Everyday Hands in Action from RGB-D Images

2015 IEEE International Conference on Computer Vision (ICCV)

Cited by 174 | Viewed 107
Abstract
We analyze functional manipulations of handheld objects, formalizing the problem as one of fine-grained grasp classification. To do so, we make use of a recently developed fine-grained taxonomy of human-object grasps. We introduce a large dataset of 12000 RGB-D images covering 71 everyday grasps in natural interactions. Our dataset is different from past work (typically addressed from a robotics perspective) in terms of its scale, diversity, and combination of RGB and depth data. From a computer-vision perspective, our dataset allows for exploration of contact and force prediction (crucial concepts in functional grasp analysis) from perceptual cues. We present extensive experimental results with state-of-the-art baselines, illustrating the role of segmentation, object context, and 3D-understanding in functional grasp analysis. We demonstrate a near 2X improvement over prior work and a naive deep baseline, while pointing out important directions for improvement.
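To make the task concrete: the paper casts grasp understanding as 71-way fine-grained classification over RGB-D input. The sketch below is a hypothetical toy baseline, not the paper's method — a linear softmax classifier over flattened 4-channel (RGB + depth) patches with made-up dimensions; the paper's actual baselines are state-of-the-art deep models.

```python
import numpy as np

NUM_GRASPS = 71          # grasp classes in the fine-grained taxonomy
H, W, C = 32, 32, 4      # hypothetical patch size: RGB + depth channels

rng = np.random.default_rng(0)
# Randomly initialized weights: a stand-in for a trained classifier.
weights = rng.normal(scale=0.01, size=(H * W * C, NUM_GRASPS))
bias = np.zeros(NUM_GRASPS)

def classify_grasp(rgbd_patch: np.ndarray) -> int:
    """Return the index of the most probable grasp class."""
    x = rgbd_patch.reshape(-1)           # flatten the H x W x 4 patch
    logits = x @ weights + bias
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs))

patch = rng.random((H, W, C))            # stand-in for a real RGB-D crop
pred = classify_grasp(patch)
print(pred)                              # some class index in [0, 71)
```

A real system would replace the random linear map with learned features (the paper stresses segmentation, object context, and 3D understanding as key cues), but the input/output contract — an RGB-D crop in, one of 71 grasp labels out — is the same.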
Keywords
RGB-D image,functional manipulation,handheld object,fine-grained grasp classification,fine-grained taxonomy,human-object grasp,natural interaction,computer-vision perspective,force prediction,functional grasp analysis,image segmentation,object context,3D-understanding