GraphDreamer: Compositional 3D Scene Synthesis from Scene Graphs
CVPR 2024
Abstract
As pretrained text-to-image diffusion models become increasingly powerful,
recent efforts have been made to distill knowledge from these pretrained
models to optimize text-guided 3D models. Most of the existing
methods generate a holistic 3D model from a plain text input. This can be
problematic when the text describes a complex scene with multiple objects,
because the vectorized text embeddings are inherently unable to capture a
complex description with multiple entities and relationships. Holistic 3D
modeling of the entire scene further prevents accurate grounding of text
entities and concepts. To address this limitation, we propose GraphDreamer, a
novel framework to generate compositional 3D scenes from scene graphs, where
objects are represented as nodes and their interactions as edges. By exploiting
node and edge information in scene graphs, our method makes better use of the
pretrained text-to-image diffusion model and is able to fully disentangle
different objects without image-level supervision. To facilitate modeling of
object-wise relationships, we use signed distance fields as the object representation and
impose a constraint to avoid inter-penetration of objects. To avoid manual
scene graph creation, we design a text prompt for ChatGPT to generate scene
graphs based on text inputs. We conduct both qualitative and quantitative
experiments to validate the effectiveness of GraphDreamer in generating
high-fidelity compositional 3D scenes with disentangled object entities.
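
For illustration, below is a minimal sketch of how a scene graph of the kind the abstract describes might be represented in code and turned into node-level and edge-level text prompts for the diffusion guidance. The `SceneGraph` class and the prompt templates are hypothetical, not the paper's actual implementation:

```python
from dataclasses import dataclass


@dataclass
class SceneGraph:
    """A scene graph: objects as nodes, pairwise relations as edges."""
    nodes: list[str]                   # e.g. ["wizard", "dragon"]
    edges: list[tuple[str, str, str]]  # (subject, relation, object) triples

    def node_prompt(self, node: str) -> str:
        # Per-object prompt, used to supervise that object's 3D model alone.
        return f"a {node}"

    def edge_prompt(self, edge: tuple[str, str, str]) -> str:
        # Pairwise prompt, grounding the relation between two objects.
        subj, rel, obj = edge
        return f"a {subj} {rel} a {obj}"


graph = SceneGraph(
    nodes=["wizard", "dragon"],
    edges=[("wizard", "riding", "dragon")],
)
print(graph.node_prompt("wizard"))        # "a wizard"
print(graph.edge_prompt(graph.edges[0]))  # "a wizard riding a dragon"
```

Decomposing the scene this way lets each object be optimized against its own short prompt, while edge prompts ground each pairwise relation, rather than relying on a single embedding of the full scene description.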
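The non-penetration constraint on signed distance fields can likewise be sketched. Assuming each sampled 3D point carries one signed distance per object (negative inside), a point lies inside two objects at once exactly when its second-smallest SDF is also negative, so one plausible penalty pushes that value back up to zero. The function below is an illustrative PyTorch sketch under those assumptions, not the paper's exact loss:

```python
import torch


def penetration_penalty(sdf_values: torch.Tensor) -> torch.Tensor:
    """Penalty discouraging two objects from overlapping in space.

    sdf_values: (N, K) signed distances of N sampled 3D points to each
    of K >= 2 objects, negative inside an object. A point is interior
    to at least two objects iff its second-smallest SDF is negative,
    so only that value is penalized.
    """
    second_smallest = torch.sort(sdf_values, dim=1).values[:, 1]
    return torch.relu(-second_smallest).mean()


# Hypothetical usage with random SDF samples for 3 objects:
sdf = torch.randn(1024, 3)
loss = penetration_penalty(sdf)
```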