End-to-end Recovery of Human Shape and Pose

arXiv (Cornell University), 2018

Cited by 1862 | Viewed 498
Abstract
We describe Human Mesh Recovery (HMR), an end-to-end framework for reconstructing a full 3D mesh of a human body from a single RGB image. In contrast to most current methods that compute 2D or 3D joint locations, we produce a richer and more useful mesh representation that is parameterized by shape and 3D joint angles. The main objective is to minimize the reprojection loss of keypoints, which allows our model to be trained using in-the-wild images that only have ground-truth 2D annotations. However, the reprojection loss alone leaves the model highly under-constrained. In this work we address this problem by introducing an adversary trained to tell whether a human body parameter is real or not using a large database of 3D human meshes. We show that HMR can be trained with and without using any paired 2D-to-3D supervision. We do not rely on intermediate 2D keypoint detections and infer 3D pose and shape parameters directly from image pixels. Our model runs in real time given a bounding box containing the person. We demonstrate our approach on various in-the-wild images, outperform previous optimization-based methods that output 3D meshes, and show competitive results on tasks such as 3D joint location estimation and part segmentation.
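The abstract names two loss terms: a 2D keypoint reprojection loss and an adversarial prior on body parameters. The sketch below illustrates how such terms are commonly written in PyTorch; the weak-perspective camera parameterization, tensor shapes, and discriminator interface are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of the two losses described in the abstract (assumptions:
# PyTorch, weak-perspective camera [s, tx, ty], a user-supplied discriminator).
import torch

def reprojection_loss(joints_3d, cam, keypoints_2d, visibility):
    """L1 loss between projected 3D joints and annotated 2D keypoints.

    joints_3d:    (B, K, 3) predicted 3D joints (regressed from the mesh).
    cam:          (B, 3)    weak-perspective camera [scale, tx, ty].
    keypoints_2d: (B, K, 2) ground-truth 2D keypoints.
    visibility:   (B, K)    1 if a keypoint is annotated/visible, else 0.
    """
    s = cam[:, None, 0:1]                             # (B, 1, 1)
    t = cam[:, None, 1:3]                             # (B, 1, 2)
    projected = s * joints_3d[..., :2] + t            # orthographic projection
    residual = (projected - keypoints_2d).abs().sum(dim=-1)  # (B, K)
    return (visibility * residual).sum() / visibility.sum().clamp(min=1)

def adversarial_prior_loss(discriminator, pred_params, real_params):
    """Least-squares GAN losses that keep predicted (pose, shape) parameters
    near the manifold of real human bodies (e.g., parameters fit to mocap)."""
    d_real = discriminator(real_params)
    d_fake = discriminator(pred_params.detach())
    loss_disc = ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()
    loss_gen = ((discriminator(pred_params) - 1) ** 2).mean()
    return loss_gen, loss_disc
```

In this reading, the reprojection term supervises the model with 2D-only annotations, while the adversarial term substitutes for paired 3D supervision by penalizing implausible pose and shape parameters.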
Keywords
3D joint angles, reprojection loss, in-the-wild images, ground truth 2D annotations, human body shape, 3D human meshes, HMR, intermediate 2D keypoint detections, shape parameters, image pixels, 3D joint location estimation, mesh representation, optimization-based methods, RGB image, human pose recovery, human mesh recovery, human shape end-to-end recovery, 3D pose parameter inference, part segmentation