ComboVerse: Compositional 3D Assets Creation Using Spatially-Aware Diffusion Guidance
arXiv (2024)
Abstract
Generating high-quality 3D assets from a given image is highly desirable in
various applications such as AR/VR. Recent advances in single-image 3D
generation explore feed-forward models that learn to infer the 3D model of an
object without optimization. Though promising results have been achieved in
single object generation, these methods often struggle to model complex 3D
assets that inherently contain multiple objects. In this work, we present
ComboVerse, a 3D generation framework that produces high-quality 3D assets with
complex compositions by learning to combine multiple models. 1) We first
perform an in-depth analysis of this “multi-object gap” from both model and
data perspectives. 2) Next, with reconstructed 3D models of different objects,
we seek to adjust their sizes, rotation angles, and locations to create a 3D
asset that matches the given image. 3) To automate this process, we apply
spatially-aware score distillation sampling (SSDS) from pretrained diffusion
models to guide the positioning of objects. Our proposed framework emphasizes
spatial alignment of objects, compared with standard score distillation
sampling, and thus achieves more accurate results. Extensive experiments
validate that ComboVerse achieves clear improvements over existing methods in
generating compositional 3D assets.
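The third step above optimizes each object's size, rotation, and location under a score-distillation-style gradient with extra weight on spatial terms. The toy sketch below illustrates only the update rule's shape; `diffusion_score`, the scalar layout parameters, and the spatial weights are all hypothetical stand-ins for the pretrained diffusion model and real render pipeline, not the paper's implementation.

```python
import numpy as np

def render(params):
    # Toy "render": the layout parameters themselves stand in for the latent.
    return params

def diffusion_score(x, target):
    # Stand-in for the diffusion model's noise residual; here a simple
    # quadratic score pulling the render toward a target layout.
    return x - target

def ssds_step(params, target, spatial_weight, lr=0.1):
    # SDS-style update on layout parameters, with a spatial weight that
    # boosts the gradient on position/rotation terms (the "spatially-aware"
    # part), mimicking reweighted attention on object placement.
    grad = spatial_weight * diffusion_score(render(params), target)
    return params - lr * grad

params = np.array([1.0, 0.0, 0.0])    # [scale, rotation, translation] (toy)
target = np.array([0.8, 0.3, -0.2])   # layout implied by the input image (toy)
weight = np.array([1.0, 2.0, 2.0])    # emphasize rotation/translation terms

for _ in range(200):
    params = ssds_step(params, target, weight)

print(np.allclose(params, target, atol=1e-3))  # layout converges to target
```

With the spatial weights larger than 1, the pose and position terms converge faster than the scale term, which is the intuition behind emphasizing spatial alignment over standard SDS.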