WorDepth: Variational Language Prior for Monocular Depth Estimation
CVPR 2024
Abstract
Three-dimensional (3D) reconstruction from a single image is an ill-posed
problem with inherent ambiguities, i.e., scale. Predicting a 3D scene from text
description(s) is similarly ill-posed, i.e., in the spatial arrangement of the
objects described. We investigate whether two inherently ambiguous
modalities can be used in conjunction to produce metric-scaled reconstructions.
To test this, we focus on monocular depth estimation, the problem of predicting
a dense depth map from a single image, but with an additional text caption
describing the scene. To this end, we begin by encoding the text caption as a
mean and standard deviation; using a variational framework, we learn the
distribution of the plausible metric reconstructions of 3D scenes corresponding
to the text captions as a prior. To "select" a specific reconstruction or depth
map, we encode the given image through a conditional sampler that samples from
the latent space of the variational text encoder, which is then decoded to the
output depth map. Our approach is trained by alternating between the text and
image branches: in one optimization step, we predict the mean and standard
deviation from the text description and sample from a standard Gaussian; in
the other, we sample using an (image-)conditional sampler. Once trained, we
directly predict depth from the encoded text using the conditional sampler. We
demonstrate our approach on indoor (NYUv2) and outdoor (KITTI) scenarios, where
we show that language can consistently improve performance in both.
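The two alternating sampling paths described above can be sketched numerically. The following is a minimal, hypothetical NumPy illustration, not the paper's implementation: the text encoder's heads (`W_mu`, `W_logstd`), the conditional sampler (`W_cond`), and the feature vectors are all stand-in assumptions. It shows only the latent-sampling logic: training step A draws `eps` from a standard Gaussian (reparameterization trick), while step B / inference has the image-conditional sampler predict `eps`, "selecting" one plausible depth map from the text-induced prior.

```python
import numpy as np

rng = np.random.default_rng(0)
D, F = 8, 16  # latent and feature dimensions (illustrative)

# Stand-ins for precomputed text and image features (hypothetical).
text_feat = rng.normal(size=F)
image_feat = rng.normal(size=F)

# Variational text encoder: linear heads predicting the mean and log-std
# of the distribution over plausible metric reconstructions.
W_mu = rng.normal(size=(D, F)) * 0.1
W_logstd = rng.normal(size=(D, F)) * 0.01
mu = W_mu @ text_feat
std = np.exp(W_logstd @ text_feat)

# Training step A: sample the latent via the reparameterization trick,
# with eps drawn from a standard Gaussian.
eps = rng.standard_normal(D)
z_text = mu + std * eps

# Training step B / inference: the image-conditional sampler predicts eps,
# picking a specific point in the text prior's latent space.
W_cond = rng.normal(size=(D, F)) * 0.1
eps_hat = W_cond @ image_feat
z_cond = mu + std * eps_hat

# Either latent would then be decoded into a dense depth map.
print(z_text.shape, z_cond.shape)
```

In this sketch the decoder is omitted; the key point is that both branches sample from the same text-conditioned Gaussian, differing only in where `eps` comes from.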