Point-VOS: Pointing Up Video Object Segmentation
CoRR (2024)
Abstract
Current state-of-the-art Video Object Segmentation (VOS) methods rely on
dense per-object mask annotations both during training and testing. This
requires time-consuming and costly video annotation mechanisms. We propose a
novel Point-VOS task with a spatio-temporally sparse point-wise annotation
scheme that substantially reduces the annotation effort. We apply our
annotation scheme to two large-scale video datasets with text descriptions and
annotate over 19M points across 133K objects in 32K videos. Based on our
annotations, we propose a new Point-VOS benchmark, and a corresponding
point-based training mechanism, which we use to establish strong baseline
results. We show that existing VOS methods can easily be adapted to leverage
our point annotations during training, and can achieve results close to the
fully-supervised performance when trained on pseudo-masks generated from these
points. In addition, we show that our data can be used to improve models that
connect vision and language, by evaluating it on the Video Narrative Grounding
(VNG) task. We will make our code and annotations available at
https://pointvos.github.io.
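
To make the idea of a spatio-temporally sparse point-wise annotation scheme concrete, below is a minimal, hypothetical sketch of what a point annotation record might look like and how points could be grouped per object and frame before being expanded into pseudo-masks by a point-promptable segmenter. The field names and structure are assumptions for illustration, not the dataset's actual schema; see https://pointvos.github.io for the released annotations.

```python
# Hypothetical sketch: field names are assumptions, not the dataset's schema.
from dataclasses import dataclass
from collections import defaultdict
from typing import Dict, List, Tuple


@dataclass(frozen=True)
class PointAnnotation:
    video_id: str      # which video the point belongs to
    object_id: int     # which object within that video
    frame_index: int   # annotated frame; only a sparse subset of frames is labeled
    x: float           # point location in pixels
    y: float
    positive: bool     # True: point lies on the object; False: negative/background point


def group_points(
    annotations: List[PointAnnotation],
) -> Dict[Tuple[str, int, int], List[PointAnnotation]]:
    """Group sparse points by (video, object, frame) so each group can later
    be turned into a pseudo-mask for training a standard VOS model."""
    groups: Dict[Tuple[str, int, int], List[PointAnnotation]] = defaultdict(list)
    for ann in annotations:
        groups[(ann.video_id, ann.object_id, ann.frame_index)].append(ann)
    return dict(groups)


if __name__ == "__main__":
    demo = [
        PointAnnotation("video_0001", 1, 0, 120.0, 85.0, True),
        PointAnnotation("video_0001", 1, 0, 40.0, 200.0, False),
        PointAnnotation("video_0001", 1, 30, 131.5, 90.2, True),
    ]
    for key, pts in group_points(demo).items():
        print(key, len(pts), "points")
```

Compared to dense per-frame masks, each object here carries only a handful of labeled points on a few frames, which is what makes the annotation effort so much smaller.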