Channel Attention Based Generative Network For Robust Visual Tracking
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING (2020)
Abstract
In recent years, Siamese trackers have achieved great success in visual tracking, attaining competitive performance in both accuracy and speed. However, they may suffer performance degradation under large pose variations, out-of-plane rotation, and similar challenges. In this paper, we propose a novel real-time Channel Attention based Generative Network (AGSNet) for robust visual tracking. AGSNet can better recognize targets undergoing significant appearance variations and distinguish them from similar distractors. The AGSNet model introduces channel-favored feature attention into the template branch to enhance discriminative capacity, and uses a simple generative network in the instance branch to capture a variety of target appearance changes. With end-to-end offline training, our model achieves robust visual tracking over a long temporal span. Experimental results on the benchmark datasets OTB-2013 and OTB-2015 demonstrate that our proposed tracker outperforms other approaches while running at more than 40 frames per second.
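The abstract does not give implementation details of the channel-favored feature attention, but the standard form of channel attention (squeeze-and-excitation style, often used in Siamese trackers) can be sketched as follows. The function name, the reduction ratio `r`, and the random weights are illustrative assumptions, not the authors' actual design:

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Illustrative SE-style channel attention (not the paper's exact module):
    squeeze via global average pooling, excite via two linear layers with
    ReLU and sigmoid, then rescale each channel of the feature map.

    feature_map: (C, H, W) template-branch features
    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights
    """
    squeezed = feature_map.mean(axis=(1, 2))           # squeeze: (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)            # excite: ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gate in (0, 1)
    return feature_map * weights[:, None, None]        # per-channel rescaling

# Toy usage with hypothetical dimensions (C=8, reduction ratio r=2)
rng = np.random.default_rng(0)
c, r = 8, 2
fmap = rng.standard_normal((c, 6, 6))
w1 = rng.standard_normal((c // r, c))
w2 = rng.standard_normal((c, c // r))
out = channel_attention(fmap, w1, w2)
print(out.shape)  # (8, 6, 6)
```

Because the gate values lie in (0, 1), the module can only attenuate channels, emphasizing those most discriminative for the current target template.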
Keywords
Single-target tracking, Siamese convolutional neural network, Channel attention mechanism, Generative network