Total-Decom: Decomposed 3D Scene Reconstruction with Minimal Interaction
CVPR 2024
Abstract
Scene reconstruction from multi-view images is a fundamental problem in
computer vision and graphics. Recent neural implicit surface reconstruction
methods have achieved high-quality results; however, editing and manipulating
the 3D geometry of reconstructed scenes remains challenging due to the absence
of naturally decomposed object entities and complex object/background
compositions. In this paper, we present Total-Decom, a novel method for
decomposed 3D reconstruction with minimal human interaction. Our approach
seamlessly integrates the Segment Anything Model (SAM) with hybrid
implicit-explicit neural surface representations and a mesh-based
region-growing technique for accurate 3D object decomposition. Total-Decom
requires minimal human annotations while providing users with real-time control
over the granularity and quality of decomposition. We extensively evaluate our
method on benchmark datasets and demonstrate its potential for downstream
applications, such as animation and scene editing. The code is available at
\href{https://github.com/CVMI-Lab/Total-Decom.git}{https://github.com/CVMI-Lab/Total-Decom.git}.
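The mesh-based region-growing step mentioned above can be illustrated with a minimal sketch: starting from user-annotated seed faces, labels are propagated across faces that share an edge, gated by a per-face affinity score. All names here (`grow_region`, `face_score`, the threshold) are hypothetical illustrations, not the paper's actual implementation; in Total-Decom the affinity would come from SAM-derived cues, while here it is just an input array.

```python
from collections import defaultdict, deque

def grow_region(faces, seed_faces, face_score, threshold=0.5):
    """Grow a region over a triangle mesh from seed faces.

    faces      : list of (i, j, k) vertex-index triples
    seed_faces : iterable of face indices annotated as the object
    face_score : per-face affinity in [0, 1] (hypothetical stand-in
                 for SAM-based cues); faces below `threshold` stop growth
    Returns the set of face indices assigned to the object.
    """
    # Build an edge -> incident-faces map so neighbors can be looked up.
    edge_to_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_faces[tuple(sorted(e))].append(fi)

    # Breadth-first growth from the seeds across shared edges.
    selected = set(seed_faces)
    queue = deque(seed_faces)
    while queue:
        fi = queue.popleft()
        a, b, c = faces[fi]
        for e in ((a, b), (b, c), (c, a)):
            for nb in edge_to_faces[tuple(sorted(e))]:
                if nb not in selected and face_score[nb] >= threshold:
                    selected.add(nb)
                    queue.append(nb)
    return selected
```

Because growth halts wherever the affinity drops below the threshold, adjusting that threshold (or adding/removing seeds) gives the kind of interactive control over decomposition granularity the abstract describes.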