GOV-NeSF: Generalizable Open-Vocabulary Neural Semantic Fields
CVPR 2024
Abstract
Recent advancements in vision-language foundation models have significantly
enhanced open-vocabulary 3D scene understanding. However, the generalizability
of existing methods is constrained due to their framework designs and their
reliance on 3D data. We address this limitation by introducing Generalizable
Open-Vocabulary Neural Semantic Fields (GOV-NeSF), a novel approach offering a
generalizable implicit representation of 3D scenes with open-vocabulary
semantics. We aggregate the geometry-aware features using a cost volume, and
propose a Multi-view Joint Fusion module to aggregate multi-view features
through a cross-view attention mechanism, which effectively predicts
view-specific blending weights for both colors and open-vocabulary features.
Remarkably, our GOV-NeSF exhibits state-of-the-art performance in both 2D and
3D open-vocabulary semantic segmentation, eliminates the need for ground truth
semantic labels or depth priors, and effectively generalizes across scenes and
datasets without fine-tuning.
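The abstract describes predicting view-specific blending weights with a cross-view attention mechanism, then using the same weights to blend both colors and open-vocabulary features. The following is a minimal NumPy sketch of that general idea, not the paper's actual module: the function name `blend_views`, the dot-product scoring, and the use of a single query vector standing in for the geometry-aware (cost-volume) feature are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def blend_views(view_feats, view_colors, query):
    """Attention-style blending over V source views for one 3D sample.

    view_feats:  (V, D) per-view open-vocabulary features (hypothetical input)
    view_colors: (V, 3) per-view RGB samples
    query:       (D,)   stand-in for an aggregated geometry-aware feature

    Returns the view-specific blending weights and the blended color/feature.
    """
    # Scaled dot-product scores between the query and each view's feature.
    scores = view_feats @ query / np.sqrt(view_feats.shape[1])  # (V,)
    weights = softmax(scores)                                   # sums to 1
    color = weights @ view_colors   # convex combination of view colors
    feat = weights @ view_feats     # same weights blend the features
    return weights, color, feat

rng = np.random.default_rng(0)
w, color, feat = blend_views(rng.normal(size=(4, 8)),
                             rng.uniform(size=(4, 3)),
                             rng.normal(size=8))
```

Because the weights form a convex combination, the blended color always lies within the range of the sampled view colors; the paper's learned module would replace the fixed dot-product scoring with trained cross-view attention.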