Adversarial Transfer Networks for Visual Tracking

2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018

Abstract
Visual tracking plays an important role in unmanned systems. In many cases, the system must track targets it has never seen before, and the only training sample available is the object specified in the initial frame. In this paper, we propose a deep architecture called Adversarial Transfer Networks (ATNet), which aims to make good use of offline video training data and address the scarcity of training samples in visual tracking. Unlike most existing trackers, which neglect the significant differences between videos and ingest all the training data indiscriminately, our method exploits the special nature of the tracking problem and concentrates on transferring domain-specific information across similar tracking tasks. We first propose an efficient way to select the training video most similar to the online tracking task and regard it as the source domain. Using the labeled data in the selected source domain, we apply adversarial transfer learning to make the feature distributions of source-domain and target-domain samples as similar as possible. The transferred source-domain samples can therefore provide various possible appearances of the tracked target for training and boost tracking performance. Experimental results on three OTB tracking benchmarks show that our method outperforms state-of-the-art trackers in both accuracy and robustness.
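The abstract does not specify the network architecture or losses, but the described adversarial alignment of source- and target-domain feature distributions resembles gradient-reversal domain-adversarial training (DANN-style). Below is a minimal PyTorch sketch of that general idea, not the authors' implementation: the backbone shapes, the `lambda_` weighting, and the batch construction are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; negates and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

class FeatureExtractor(nn.Module):
    # Hypothetical small CNN backbone; the paper's actual backbone is not given here.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class DomainDiscriminator(nn.Module):
    # Predicts whether a feature vector came from the source or the target domain.
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats, lambda_=1.0):
        feats = GradientReversal.apply(feats, lambda_)
        return self.net(feats)

# One training step: the discriminator learns to tell the domains apart, while the
# reversed gradient pushes the extractor to make the two feature distributions similar.
extractor, disc = FeatureExtractor(), DomainDiscriminator()
opt = torch.optim.Adam(list(extractor.parameters()) + list(disc.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

src = torch.randn(8, 3, 64, 64)  # stand-in for samples from the selected source video
tgt = torch.randn(8, 3, 64, 64)  # stand-in for samples cropped around the tracked target

feats = extractor(torch.cat([src, tgt]))
logits = disc(feats, lambda_=0.5)
labels = torch.cat([torch.zeros(8, 1), torch.ones(8, 1)])  # 0 = source, 1 = target
loss = bce(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()
```

At convergence the discriminator can no longer distinguish the domains, which is one standard way to realize the "make the feature distributions as similar as possible" objective the abstract describes.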
Keywords
visual tracking,domain-specific information,target-domain samples,adversarial transfer networks,unmanned systems,offline video training data,ATNet,source-domain samples,adversarial transfer learning