Instance Brownian Bridge as Texts for Open-vocabulary Video Instance Segmentation
CoRR (2024)
Abstract
Temporally locating objects with arbitrary class texts is the primary pursuit
of open-vocabulary Video Instance Segmentation (VIS). Because video data offer
insufficient vocabulary coverage, previous methods leverage image-text
pretraining models to recognize object instances by separately aligning each
frame with class texts, ignoring the correlation between frames. This
separation breaks the instance-movement context of videos, causing inferior
alignment between video and text. To tackle this issue, we propose to link
frame-level instance representations as a Brownian bridge to model instance
dynamics, and to align the bridge-level instance representation with class
texts for more precise open-vocabulary VIS (BriVIS). Specifically, we build our
system upon a frozen video segmentor that generates frame-level instance
queries, and design a Temporal Instance Resampler (TIR) to generate queries
with temporal context from the frame queries. To mold instance queries to
follow a Brownian bridge and accomplish alignment with class texts, we design
Bridge-Text Alignment (BTA) to learn discriminative bridge-level
representations of instances via contrastive objectives. Setting MinVIS as the
basic video segmentor, BriVIS surpasses the open-vocabulary SOTA (OV2Seg) by a
clear margin. For example, on the challenging large-vocabulary VIS dataset
(BURST), BriVIS achieves 7.43 mAP and
exhibits a 49.49% improvement over OV2Seg.
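To make the bridge idea concrete, here is a minimal NumPy sketch of the two ingredients the abstract describes: a constraint that pulls intermediate frame-level instance embeddings toward a Brownian bridge pinned at the clip's endpoints, and a contrastive (InfoNCE-style) alignment of a pooled bridge-level representation against class-text embeddings. This is an illustrative assumption, not the authors' BTA implementation; the function names, the Gaussian NLL form of the bridge loss, the mean-pooled bridge representation, and the temperature value are all hypothetical.

```python
import numpy as np

def brownian_bridge_nll(z, eps=1e-6):
    """Average Gaussian NLL that intermediate embeddings z[1:-1] follow a
    Brownian bridge pinned at z[0] and z[-1].  z: (T+1, D) per-frame
    instance embeddings.  Bridge mean at step t is the linear interpolation
    of the endpoints; bridge variance is t*(T-t)/T."""
    T = len(z) - 1
    D = z.shape[1]
    nll = 0.0
    for t in range(1, T):
        alpha = t / T
        mean = (1 - alpha) * z[0] + alpha * z[-1]   # bridge mean
        var = alpha * (1 - alpha) * T + eps          # = t*(T-t)/T
        nll += 0.5 * np.sum((z[t] - mean) ** 2) / var \
             + 0.5 * D * np.log(2 * np.pi * var)
    return nll / max(T - 1, 1)

def bridge_text_infonce(bridge_repr, text_embs, target, tau=0.07):
    """Contrastive loss aligning one bridge-level instance representation
    (D,) against a bank of class-text embeddings (C, D); `target` indexes
    the ground-truth class row."""
    b = bridge_repr / (np.linalg.norm(bridge_repr) + 1e-8)
    t = text_embs / (np.linalg.norm(text_embs, axis=1, keepdims=True) + 1e-8)
    logits = t @ b / tau                 # cosine similarities / temperature
    logits -= logits.max()               # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[target] + 1e-12)
```

In this sketch the bridge NLL penalizes frame embeddings that stray from the endpoint-interpolated trajectory (scaled by the time-dependent bridge variance, which is largest mid-clip), so an instance's per-frame representations are shaped into a coherent motion-aware trajectory before a single pooled representation is matched to text.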