BigHand2.2M Benchmark: Hand Pose Dataset and State of the Art Analysis

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Cited by 282 | Viewed 100
Abstract
In this paper we introduce a large-scale hand pose dataset, collected using a novel capture method. Existing datasets are either generated synthetically or captured using depth sensors: synthetic datasets exhibit a certain level of appearance difference from real depth images, and real datasets are limited in quantity and coverage, mainly due to the difficulty of annotating them. We propose a tracking system with six 6D magnetic sensors and inverse kinematics to automatically obtain 21-joint hand pose annotations of depth maps captured with minimal restriction on the range of motion. The capture protocol aims to fully cover the natural hand pose space. As shown in embedding plots, the new dataset exhibits a significantly wider and denser range of hand poses than existing benchmarks. Current state-of-the-art methods are evaluated on the dataset, and we demonstrate significant improvements in cross-benchmark performance. We also show significant improvements in egocentric hand pose estimation with a CNN trained on the new dataset.
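The annotation pipeline the abstract describes (six 6D magnetic sensors plus inverse kinematics yielding 21-joint annotations) can be sketched as follows. This is a minimal, hypothetical illustration only: the toy linear skeleton, the 15-DoF pose parameterization, the assumed sensor placement on the wrist and fingertips, and the use of position-only residuals are all assumptions for the sake of a runnable example, not the authors' implementation (a real system would chain per-bone rigid transforms with calibrated bone lengths and would also fit each sensor's orientation).

```python
# Hypothetical sketch: fit a kinematic hand model to six 6D magnetic-sensor
# readings by inverse kinematics, then read off 21 joint positions.
import numpy as np
from scipy.optimize import least_squares

NUM_JOINTS = 21   # standard 21-joint hand model
DOF = 15          # assumed pose parameterization (illustrative)

# Stand-in linear "skeleton": a real system would use forward kinematics
# over calibrated bone lengths instead of this fixed random matrix.
_W = np.random.default_rng(0).standard_normal((NUM_JOINTS * 3, DOF)) * 0.05

def forward_kinematics(theta):
    """Map pose parameters theta (DOF,) to 21 joint positions (21 x 3)."""
    return (_W @ theta).reshape(NUM_JOINTS, 3)

# Assumption: the six sensors sit on the wrist and the five fingertips.
SENSOR_JOINTS = [0, 4, 8, 12, 16, 20]

def predicted_sensor_positions(theta):
    """Positions the sensors would report under pose theta (6 x 3)."""
    return forward_kinematics(theta)[SENSOR_JOINTS]

def annotate_frame(sensor_xyz, theta_init):
    """Inverse kinematics: solve for the pose that best explains the sensors.

    Only sensor positions are fit here; a full 6D fit would add an
    orientation residual per sensor.
    """
    def residual(theta):
        return (predicted_sensor_positions(theta) - sensor_xyz).ravel()
    sol = least_squares(residual, theta_init)   # nonlinear least squares
    return forward_kinematics(sol.x), sol.x     # 21 x 3 annotation + pose

# Simulated usage: recover a pose from noisy sensor readings.
true_theta = np.linspace(-0.5, 0.5, DOF)
readings = predicted_sensor_positions(true_theta)
readings = readings + np.random.default_rng(1).normal(scale=1e-4,
                                                      size=readings.shape)
joints21, _ = annotate_frame(readings, np.zeros(DOF))
print(joints21.shape)   # (21, 3): one depth frame's joint annotation
```

Per-frame optimization of this kind is what lets the capture run with minimal restriction on the range of motion: the annotation is computed from the sensor readings rather than from manual labeling of each depth map.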
Keywords
depth sensors,synthetic datasets,inverse kinematics,capture protocol,natural hand,hand poses,egocentric hand,BigHand2.2M Benchmark,capture method,large-scale hand pose dataset,tracking system,6D magnetic sensors,egocentric hand pose estimation,CNN