Open-world driving scene segmentation via multi-stage and multi-modality fusion of vision-language embedding

IV(2023)

Abstract
In this study, a pixel-text-level, multi-stage, multi-modality fusion segmentation method is proposed to make open-world driving scene segmentation more efficient. It can serve the different semantic perception needs of autonomous driving in real-world driving situations. The method can finely segment unseen labels without additional corresponding semantic segmentation annotations, using only existing semantic segmentation data. The proposed method consists of four modules. A visual representation embedding module and a segmentation command embedding module extract features of the driving scene and the segmentation category command, respectively. A multi-stage multi-modality fusion module fuses the visual information of the driving scene with the text information of the segmentation command at multiple feature scales at the pixel-text level. Finally, a cascade segmentation head grounds the segmentation command text in the driving scene, encouraging the model to generate corresponding high-quality semantic segmentation results. In the experiments, we first verify the effectiveness of the method for zero-shot segmentation on a popular driving scene segmentation dataset. We also confirm its effectiveness on synonym unseen labels and hierarchy unseen labels for open-world semantic segmentation.
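The sketch below illustrates how the four modules described in the abstract could be wired together: a visual embedding module producing multi-scale features, a command embedding module producing a text vector, a per-stage pixel-text fusion, and a cascade head that merges the stages into a mask. All module names, dimensions, the cosine-similarity fusion, and the averaging cascade head are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the four-module pipeline described in the abstract.
# Backbone, fusion rule, and head design are placeholders, not the paper's code.

class VisualEmbedding(nn.Module):
    """Extracts multi-scale visual features from the driving scene."""
    def __init__(self, dims=(64, 128, 256)):
        super().__init__()
        self.stages, in_ch = nn.ModuleList(), 3
        for d in dims:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, d, 3, stride=2, padding=1),
                nn.BatchNorm2d(d), nn.ReLU(inplace=True)))
            in_ch = d

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # feature pyramid at different spatial sizes


class CommandEmbedding(nn.Module):
    """Embeds the segmentation category command (token ids) into a text vector."""
    def __init__(self, vocab=1000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):  # tokens: (B, L)
        return self.proj(self.embed(tokens).mean(dim=1))  # (B, dim)


class PixelTextFusion(nn.Module):
    """Fuses the text embedding with one visual stage at the pixel-text level."""
    def __init__(self, vis_dim, txt_dim=256):
        super().__init__()
        self.vis_proj = nn.Conv2d(vis_dim, txt_dim, 1)

    def forward(self, feat, txt):
        v = F.normalize(self.vis_proj(feat), dim=1)        # (B, D, H, W)
        t = F.normalize(txt, dim=1)[:, :, None, None]      # (B, D, 1, 1)
        return (v * t).sum(dim=1, keepdim=True)            # per-pixel similarity


class CascadeHead(nn.Module):
    """Upsamples and merges per-stage similarity maps into final mask logits."""
    def forward(self, sims, out_size):
        maps = [F.interpolate(s, size=out_size, mode="bilinear",
                              align_corners=False) for s in sims]
        return torch.stack(maps, dim=0).mean(dim=0)        # (B, 1, H, W)


class OpenWorldSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.visual = VisualEmbedding()
        self.command = CommandEmbedding()
        self.fusions = nn.ModuleList([PixelTextFusion(d) for d in (64, 128, 256)])
        self.head = CascadeHead()

    def forward(self, image, tokens):
        feats = self.visual(image)
        txt = self.command(tokens)
        sims = [f(feat, txt) for f, feat in zip(self.fusions, feats)]
        return self.head(sims, image.shape[-2:])


if __name__ == "__main__":
    model = OpenWorldSegmenter()
    img = torch.randn(1, 3, 128, 256)          # toy driving-scene image
    cmd = torch.randint(0, 1000, (1, 4))       # toy "segment <category>" command
    print(model(img, cmd).shape)               # torch.Size([1, 1, 128, 256])
```

Because the category enters only through the text embedding, an unseen label (e.g. a synonym of a trained class) can in principle be segmented by changing the command tokens, which mirrors the zero-shot setting the abstract describes.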
Key words
Open-world segmentation, driving scene, pixel-text alignment, multi-stage multi-modality fusion