SiMA-Hand: Boosting 3D Hand-Mesh Reconstruction by Single-to-Multi-View Adaptation
CoRR (2024)
Abstract
Estimating 3D hand meshes from RGB images is a long-standing task, in which
occlusion is one of the most challenging problems. Existing approaches to
this task often fail when occlusion dominates the image space. In this
paper, we propose SiMA-Hand, aiming to boost the mesh reconstruction
performance by Single-to-Multi-view Adaptation. First, we design a multi-view
hand reconstructor to fuse information across multiple views by holistically
adopting feature fusion at image, joint, and vertex levels. Then, we introduce
a single-view hand reconstructor equipped with SiMA. Though taking only one
view as input at inference, the shape and orientation features in the
single-view reconstructor can be enriched by learning non-occluded knowledge
from the extra views at training, enhancing the reconstruction precision on the
occluded regions. We conduct experiments on the Dex-YCB and HanCo benchmarks
with challenging object- and self-caused occlusion cases, demonstrating that
SiMA-Hand consistently outperforms state-of-the-art methods. Code will be
released at https://github.com/JoyboyWang/SiMA-Hand (PyTorch).