Saliency-based dual-attention network for unsupervised video object segmentation

The Journal of Supercomputing (2024)

Abstract
This paper addresses unsupervised video object segmentation (UVOS), the task of segmenting the objects of interest throughout an entire video without any annotation. Many UVOS methods have been proposed in recent years; although they perform well, they rely on heavyweight networks, which leads to large model sizes. To reduce model size while maintaining competitive performance, we propose a saliency-based dual-attention (SDA) method for UVOS. Our method takes video frames and optical flow as inputs, extracting appearance information from the frames and motion information from the flow. We design a two-branch network, one branch for each kind of information, and fuse the two branches via a saliency-based dual-attention module to segment the primary object along a single path. This module is composed of saliency attention and saliency-based reverse attention. To demonstrate the effectiveness of our network, we evaluate it on the DAVIS-2016 and SegTrack v2 datasets. Experimental results show that our method achieves competitive accuracy at a smaller model size.
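To illustrate the idea behind the dual-attention fusion described above, the following is a minimal NumPy sketch, not the paper's actual architecture: it assumes a coarse saliency map is derived from the motion branch, uses it to attend to the appearance features (saliency attention), re-examines the complementary non-salient regions (saliency-based reverse attention), and fuses both paths. All shapes, the sigmoid-of-mean saliency estimate, and the fusion weight are illustrative assumptions.

```python
import numpy as np

def saliency_dual_attention(appearance: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of saliency-based dual attention.

    appearance, motion: feature maps of shape (C, H, W) from the two branches.
    Returns a fused feature map of the same shape.
    """
    # Assumed: derive a coarse saliency map in [0, 1] from the motion features
    # by averaging over channels and applying a sigmoid.
    saliency = 1.0 / (1.0 + np.exp(-motion.mean(axis=0, keepdims=True)))  # (1, H, W)

    # Saliency attention: emphasize salient regions of the appearance features.
    attended = appearance * saliency

    # Saliency-based reverse attention: attend to the complementary,
    # non-salient regions to recover missed object parts.
    reverse = appearance * (1.0 - saliency)

    # Fuse the two attention paths (0.5 is an arbitrary illustrative weight).
    return attended + 0.5 * reverse

# Usage example with random features standing in for branch outputs.
rng = np.random.default_rng(0)
app = rng.standard_normal((8, 4, 4))
mot = rng.standard_normal((8, 4, 4))
fused = saliency_dual_attention(app, mot)
```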
Keywords
Saliency-based reverse attention, Unsupervised video object segmentation, Appearance information, Motion information VOS, Interactive VOS