Leveraging Photometric Consistency Over Time for Sparsely Supervised Hand-Object Reconstruction

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Citations 157 | Views 297
Abstract
Modeling hand-object manipulations is essential for understanding how humans interact with their environment. While of practical importance, estimating the pose of hands and objects during interactions is challenging due to the large mutual occlusions that occur during manipulation. Recent efforts have been directed towards fully-supervised methods that require large amounts of labeled training samples. Collecting 3D ground-truth data for hand-object interactions, however, is costly, tedious, and error-prone. To overcome this challenge, we present a method to leverage photometric consistency across time when annotations are only available for a sparse subset of frames in a video. Our model is trained end-to-end on color images to jointly reconstruct hands and objects in 3D by inferring their poses. Given our estimated reconstructions, we differentiably render the optical flow between pairs of adjacent images and use it within the network to warp one frame to another. We then apply a self-supervised photometric loss that relies on the visual consistency between nearby images. We achieve state-of-the-art results on 3D hand-object reconstruction benchmarks and demonstrate that our approach allows us to improve the pose estimation accuracy by leveraging information from neighboring frames in low-data regimes.
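To make the warping and photometric-loss step concrete, below is a minimal PyTorch sketch. It assumes the dense flow between frames t and t+1 has already been differentiably rendered from the estimated hand-object reconstructions; the function names, tensor layouts, and the visibility mask are illustrative assumptions, not the authors' released code.

import torch
import torch.nn.functional as F

def warp_with_flow(frame, flow):
    """Backward-warp `frame` (B,3,H,W) using a dense optical flow
    field `flow` (B,2,H,W) given in pixel offsets."""
    B, _, H, W = frame.shape
    # Build a pixel-coordinate grid, then displace it by the flow.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=frame.device, dtype=frame.dtype),
        torch.arange(W, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow  # (B,2,H,W)
    # Normalize coordinates to [-1, 1], as grid_sample expects.
    grid_x = 2.0 * grid[:, 0] / (W - 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / (H - 1) - 1.0
    norm_grid = torch.stack((grid_x, grid_y), dim=-1)  # (B,H,W,2)
    return F.grid_sample(frame, norm_grid, align_corners=True)

def photometric_loss(frame_t, frame_tp1, flow_t_to_tp1, visibility_mask):
    """Self-supervised photometric consistency between adjacent frames.
    `flow_t_to_tp1` would come from differentiably rendering the estimated
    meshes at both time steps (an assumption of this sketch); the mask
    restricts the loss to pixels covered by the rendered surfaces."""
    warped = warp_with_flow(frame_tp1, flow_t_to_tp1)
    return (visibility_mask * (warped - frame_t).abs()).mean()

In the setting described by the abstract, a loss of this form would supplement full supervision on the sparsely annotated frames, so unlabeled neighboring frames still contribute gradients to the hand and object pose estimates.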
Keywords
fully-supervised methods,labeled training samples,ground-truth data,hand-object interactions,photometric consistency,sparse subset,estimated reconstructions,self-supervised photometric loss,visual consistency,3D hand-object reconstruction benchmarks,pose estimation accuracy,sparsely supervised hand-object reconstruction,hand-object manipulation modeling,mutual occlusions,3D ground-truth data collection,color images,optical flow,adjacent images,neighboring frames,low-data regimes