Self-learning Canonical Space for Multi-view 3D Human Pose Estimation
arXiv (2024)
Abstract
Multi-view 3D human pose estimation is naturally superior to its single-view counterpart,
benefiting from the more comprehensive information provided by images from multiple
views. This information includes camera poses, 2D/3D human poses, and 3D geometry.
However, accurate annotations of such information are hard to obtain, making it
challenging to predict accurate 3D human poses from multi-view images. To deal with
this issue, we propose a fully self-supervised framework, named cascaded multi-view
aggregating network (CMANet), which constructs a canonical parameter space to
holistically integrate and exploit multi-view information. In our framework, the
multi-view information is grouped into two categories: 1) intra-view information and
2) inter-view information. Accordingly, CMANet consists of two components: an
intra-view module (IRV) and an inter-view module (IEV). IRV extracts the initial camera
pose and 3D human pose of each view; IEV fuses the complementary pose information and
cross-view 3D geometry to produce the final 3D human pose. To facilitate the
aggregation of intra- and inter-view information, we define a canonical parameter
space, described by the per-view camera pose and the human pose and shape parameters
(θ and β) of the SMPL model, and propose a two-stage learning procedure. In the first
stage, IRV learns to estimate the camera pose and a view-dependent 3D human pose,
supervised by the confident outputs of an off-the-shelf 2D keypoint detector. In the
second stage, IRV is frozen and IEV further refines the camera pose and optimizes the
3D human pose by implicitly encoding cross-view complementarity and 3D geometry
constraints, achieved by jointly fitting the predicted multi-view 2D keypoints.
Comprehensive experiments demonstrate the effectiveness of the proposed framework,
modules, and learning strategy, and CMANet outperforms state-of-the-art methods in
extensive quantitative and qualitative analyses.
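As a rough illustration of the second-stage objective (jointly fitting predicted multi-view 2D keypoints), the sketch below shows a confidence-weighted cross-view reprojection loss. The function names, the pinhole projection model, and the use of detector confidences as weights are assumptions made for illustration only; in CMANet the IEV encodes this constraint implicitly within a learned network rather than through explicit per-sample optimization.

```python
import numpy as np

def project(joints_3d, R, t, K):
    """Project 3D joints (J, 3) into a view with rotation R, translation t, intrinsics K."""
    cam = joints_3d @ R.T + t          # (J, 3) points in camera coordinates
    uv = cam @ K.T                     # (J, 3) homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]      # (J, 2) pixel coordinates

def multiview_fitting_loss(joints_3d, cameras, keypoints_2d, confidences):
    """Confidence-weighted 2D reprojection error summed over all views.

    joints_3d    : (J, 3) canonical 3D joints (e.g. regressed from SMPL theta/beta)
    cameras      : list of (R, t, K) tuples, one per view
    keypoints_2d : (V, J, 2) detected 2D keypoints per view
    confidences  : (V, J) detector confidences used as weights
    """
    loss = 0.0
    for v, (R, t, K) in enumerate(cameras):
        proj = project(joints_3d, R, t, K)
        residual = np.linalg.norm(proj - keypoints_2d[v], axis=-1)  # (J,) per-joint error
        loss += np.sum(confidences[v] * residual)
    return loss
```

In the paper's setting, joints_3d would come from the canonical SMPL parameters (θ, β) and the per-view camera poses would be refined jointly, so a loss of this form acts as the cross-view geometric constraint rather than a standalone fitting routine.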