BoxVIS: Video Instance Segmentation with Box Annotations

CoRR (2023)

Abstract
It is expensive and labour-intensive to label pixel-wise object masks in a video. As a result, the amount of pixel-wise annotation in existing video instance segmentation (VIS) datasets is small, limiting the generalization capability of trained VIS models. An alternative and much cheaper solution is to label the instances in videos with bounding boxes. Inspired by the recent success of box-supervised image instance segmentation, we first adapt state-of-the-art pixel-supervised VIS models into a box-supervised VIS (BoxVIS) baseline, and observe only slight performance degradation. We then propose to improve BoxVIS from two aspects. First, we propose a box-center guided spatial-temporal pairwise affinity (STPA) loss to predict instance masks with better spatial and temporal consistency. Second, we collect a larger-scale box-annotated VIS dataset (BVISD) by consolidating the videos from current VIS benchmarks and converting images from the COCO dataset into short pseudo video clips. Trained with the proposed BVISD and the STPA loss, our BoxVIS model demonstrates promising instance mask prediction performance. Specifically, it achieves 43.2% and 29.0% mask AP on the YouTube-VIS 2021 and OVIS validation sets, respectively, exhibiting comparable or even better generalization than state-of-the-art pixel-supervised VIS models while using only 16% of their annotation time and cost. Code and data for BoxVIS can be found at https://github.com/MinghanLi/BoxVIS.
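The abstract does not spell out the STPA loss itself; as a rough illustration of the idea it builds on, below is a minimal PyTorch sketch of a spatial pairwise-affinity term in the style of BoxInst, where colour-similar neighbouring pixels are pushed to share a mask label. The function name, the exponential colour affinity, and the thresholds are illustrative assumptions, not the paper's implementation; BoxVIS additionally guides the loss with box centres and extends the pixel pairs across adjacent frames.

```python
import torch
import torch.nn.functional as F

def pairwise_affinity_loss(mask_logits, frames, color_thresh=0.3, dilation=2):
    """BoxInst-style spatial pairwise-affinity term (illustrative sketch).

    mask_logits: (N, 1, H, W) predicted instance-mask logits for one frame
    frames:      (N, 3, H, W) the corresponding image, values in [0, 1]

    For every pixel and each of its 8 dilated neighbours, if the two
    colours are similar enough, the predicted foreground probabilities
    are encouraged to agree.
    """
    prob = mask_logits.sigmoid()
    n, _, h, w = prob.shape

    def neighbours(x):
        # Extract 3x3 dilated neighbourhoods as (N, C, 9, H*W).
        cols = F.unfold(x, kernel_size=3, dilation=dilation, padding=dilation)
        return cols.view(n, x.size(1), 9, h * w)

    p = neighbours(prob)    # (N, 1, 9, H*W)
    c = neighbours(frames)  # (N, 3, 9, H*W)
    p0 = p[:, :, 4:5]       # centre pixel of each 3x3 window
    c0 = c[:, :, 4:5]

    # Colour affinity between the centre pixel and each neighbour.
    affinity = torch.exp(-torch.norm(c - c0, dim=1, keepdim=True))  # (N,1,9,H*W)

    # Probability that centre and neighbour carry the same label.
    same_label = p0 * p + (1.0 - p0) * (1.0 - p)
    nll = -torch.log(same_label.clamp(min=1e-6))

    # Supervise only colour-similar pairs; the box provides the only other
    # constraint (everything outside it is background).
    pair_mask = (affinity >= color_thresh).float()
    return (nll * pair_mask).sum() / pair_mask.sum().clamp(min=1.0)
```

A temporal term in the spirit of STPA would apply the same construction to pixel pairs matched across adjacent frames, so that the predicted masks stay consistent over time as well as over space.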
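Likewise, the abstract only states that COCO images are converted into short pseudo video clips, without giving the pipeline. A plausible minimal version, sketched below with torchvision's v2 transforms (>= 0.16), jitters each frame with a small random affine warp to simulate camera motion while keeping the box annotations aligned; `image_to_pseudo_clip` and all its parameters are hypothetical, not the construction used for BVISD.

```python
import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

def image_to_pseudo_clip(image, boxes_xyxy, num_frames=3):
    """Turn one box-annotated image into a short pseudo video clip.

    image:      (3, H, W) float tensor
    boxes_xyxy: (K, 4) box annotations in XYXY format
    """
    h, w = image.shape[-2:]
    boxes = tv_tensors.BoundingBoxes(boxes_xyxy, format="XYXY", canvas_size=(h, w))
    jitter = v2.RandomAffine(degrees=5, translate=(0.05, 0.05), scale=(0.9, 1.1))
    frames, frame_boxes = [], []
    for _ in range(num_frames):
        # v2 transforms dispatch on tv_tensors, so the boxes are warped
        # with the same random parameters as the image.
        f, b = jitter(image, boxes)
        frames.append(f)
        frame_boxes.append(b)
    return torch.stack(frames), frame_boxes
```

Since each frame is jittered independently here, the resulting "motion" is not smooth; a real pipeline might instead interpolate a single transform across the frames of the clip.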
Keywords
video instance segmentation, BoxVIS