Aggregating Attentional Dilated Features for Salient Object Detection
IEEE Transactions on Circuits and Systems for Video Technology (2020)
Abstract
This paper presents a novel deep learning model that aggregates attentional dilated features for salient object detection by exploiting the complementary information between global and local context in a convolutional neural network. Our network design makes two technical contributions. First, we develop an attentional dense atrous (dilated) spatial pyramid pooling (AD-ASPP) module that selectively combines the local saliency cues captured by dilated convolutions with small rates and the global saliency cues captured by dilated convolutions with large rates. Second, taking the feature pyramid network as the backbone, we develop an aggregation network that integrates the refined features through two consecutive chains of residual-learning-based modules: one chain from deep to shallow layers and another from shallow to deep layers. We evaluate our network on seven widely used saliency detection benchmarks against 21 state-of-the-art methods. Experimental results show that our network outperforms the others on all seven benchmark datasets.
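As a rough illustration of the core idea behind AD-ASPP (not the paper's actual implementation, which uses learned 2-D convolutions and attention), the sketch below uses a toy 1-D dilated convolution: a small dilation rate gathers local context, a large rate widens the receptive field toward global context, and hypothetical attention weights fuse the two branches.

```python
import math

def dilated_conv1d(x, kernel, rate):
    """Zero-padded 1-D dilated convolution with 'same' output length.
    rate=1 reads adjacent samples (local cues); a larger rate spreads
    the kernel taps apart, enlarging the receptive field (global cues)."""
    k = len(kernel)
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + (j - k // 2) * rate  # tap position, spaced by the rate
            if 0 <= idx < len(x):
                s += w * x[idx]
        out.append(s)
    return out

def softmax(z):
    """Normalize scores into attention weights that sum to 1."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    t = sum(e)
    return [v / t for v in e]

# Toy signal and a 3-tap averaging kernel (illustrative only).
x = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
kernel = [1/3, 1/3, 1/3]

local_feat = dilated_conv1d(x, kernel, rate=1)   # small rate: local context
global_feat = dilated_conv1d(x, kernel, rate=3)  # large rate: global context

# Hypothetical attention scores; in the paper these would be learned
# from the features rather than fixed by hand.
weights = softmax([0.8, 0.2])
fused = [weights[0] * l + weights[1] * g
         for l, g in zip(local_feat, global_feat)]
```

The fused response weights each branch by its attention score, which is the selective-use behavior the abstract describes; the actual AD-ASPP module applies this over dense pyramids of 2-D dilated convolutions.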
Keywords
Feature extraction, Saliency detection, Object detection, Aggregates, Task analysis, Visualization, Benchmark testing