Vanishing-Point-Guided Video Semantic Segmentation of Driving Scenes
CoRR (2024)
Abstract
The estimation of implicit cross-frame correspondences and the high
computational cost have long been major challenges in video semantic
segmentation (VSS) for driving scenes. Prior works utilize keyframes, feature
propagation, or cross-frame attention to address these issues. By contrast, we
are the first to harness vanishing point (VP) priors for more effective
segmentation. Intuitively, objects near VPs (i.e., away from the vehicle) are
less discernible. Moreover, they tend to move radially away from the VP over
time in the usual case of a forward-facing camera, a straight road, and linear
forward motion of the vehicle. Our novel, efficient network for VSS, named
dynamic VP priors: sparse-to-dense feature mining (DenseVP) and VP-guided
motion fusion (MotionVP). MotionVP employs VP-guided motion estimation to
motion fusion (MotionVP). MotionVP employs VP-guided motion estimation to
establish explicit correspondences across frames and help attend to the most
relevant features from neighboring frames, while DenseVP enhances weak dynamic
features in distant regions around VPs. These modules operate within a
context-detail framework, which separates contextual features from
high-resolution local features at different input resolutions to reduce
computational costs. Contextual and local features are integrated through
contextualized motion attention (CMA) for the final prediction. Extensive
experiments on two popular driving segmentation benchmarks, Cityscapes and
ACDC, demonstrate that VPSeg outperforms previous SOTA methods, with only
modest computational overhead.
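The abstract describes the dynamic VP prior only in words. The following Python (PyTorch) listing is a minimal sketch of that idea, not the authors' VPSeg implementation: given a known vanishing-point location, each pixel is assigned an assumed displacement pointing radially away from the VP, which is then used to fetch the corresponding feature from the previous frame so a cross-frame module can compare co-located features. The function names (radial_flow_prior, warp_from_previous), the magnitude parameter, and the linear scaling of flow with distance from the VP are all illustrative assumptions.

    # Sketch of a VP-based radial flow prior and feature warping (assumed design,
    # not the paper's released code).
    import torch
    import torch.nn.functional as F

    def radial_flow_prior(h, w, vp_xy, magnitude=4.0):
        """Per-pixel displacement pointing radially away from the VP.

        vp_xy: (x, y) vanishing-point location in pixel coordinates.
        Returns a (2, h, w) tensor of assumed displacements (dx, dy).
        """
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=torch.float32),
            torch.arange(w, dtype=torch.float32),
            indexing="ij",
        )
        dx, dy = xs - vp_xy[0], ys - vp_xy[1]
        norm = torch.sqrt(dx * dx + dy * dy).clamp(min=1e-6)
        # Assumed scaling: apparent motion grows with distance from the VP,
        # so distant points near the VP barely move between frames.
        scale = magnitude * (norm / norm.max())
        return torch.stack([dx / norm * scale, dy / norm * scale])

    def warp_from_previous(prev_feat, flow):
        """Bilinearly sample each pixel's assumed correspondence in the previous frame.

        A pixel at p in the current frame is assumed to have been at p - flow(p)
        in the previous frame. prev_feat: (1, C, H, W); flow: (2, H, W).
        """
        _, _, h, w = prev_feat.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=torch.float32),
            torch.arange(w, dtype=torch.float32),
            indexing="ij",
        )
        src_x = xs - flow[0]
        src_y = ys - flow[1]
        # Normalize source coordinates to [-1, 1] for grid_sample.
        grid = torch.stack(
            [src_x / (w - 1) * 2 - 1, src_y / (h - 1) * 2 - 1], dim=-1
        )
        return F.grid_sample(prev_feat, grid.unsqueeze(0), align_corners=True)

    # Usage: align a previous frame's feature map to the current frame.
    prev_feat = torch.randn(1, 64, 128, 256)
    flow = radial_flow_prior(128, 256, vp_xy=(128.0, 64.0))
    aligned = warp_from_previous(prev_feat, flow)
    print(aligned.shape)  # torch.Size([1, 64, 128, 256])

In the paper's terms, such explicitly aligned neighboring-frame features are what a module like MotionVP can attend over; the sketch above only illustrates the geometric prior that makes those correspondences explicit rather than learned implicitly.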