A Cross-domain Few-shot Visual Object Tracker Based on Bidirectional Adversary Generation

IEEE Sensors Journal (2024)

Abstract
Current transformer-based deep-learning frameworks for object tracking usually incorporate convolutional networks for feature learning. Such architectures typically rely on large numbers of labeled training samples, and insufficient data directly limits feature learning, manifesting as underfitting and unstable tracking results on specific data. This paper proposes a cross-domain few-shot object tracking framework based on a bidirectional adversary generation strategy. The proposed tracking model handles few-shot tracking in complex scenes by acquiring empirical knowledge from features learned across sample domains. Specifically, we construct a multi-task encoder for parallel domain adaptation and few-shot feature learning, using an adversarial mechanism to achieve bidirectional adaptive feature-distribution alignment, and then predict the target state. Experiments on the OTB-100 and GOT-10K datasets show that the algorithm achieves state-of-the-art (SOTA) performance.
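The abstract does not include implementation details of the adversarial alignment step. The following is a minimal, hypothetical sketch of the general idea it references, assuming a DANN-style setup in which a gradient-reversal layer trains a shared encoder to produce domain-invariant features that fool a domain discriminator. All class, function, and parameter names (FeatureEncoder, DomainDiscriminator, lambd) are illustrative assumptions and not taken from the paper.

```python
import torch
import torch.nn as nn
from torch.autograd import Function

# Illustrative sketch only: a generic domain-adversarial feature-alignment
# setup, not the paper's actual "bidirectional adversary generation" model.

class GradReverse(Function):
    """Identity in the forward pass; reverses and scales gradients in the
    backward pass, so the encoder learns to fool the domain discriminator."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class FeatureEncoder(nn.Module):
    """Shared encoder (hypothetical stand-in for a multi-task encoder) whose
    features feed both a task head and the domain discriminator."""
    def __init__(self, in_dim=256, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class DomainDiscriminator(nn.Module):
    """Predicts whether a feature comes from the source or target domain."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, feat, lambd=1.0):
        return self.net(grad_reverse(feat, lambd))

if __name__ == "__main__":
    encoder, disc = FeatureEncoder(), DomainDiscriminator()
    bce = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(disc.parameters()), lr=1e-4
    )

    # Toy batches standing in for source-domain and target-domain samples.
    src, tgt = torch.randn(8, 256), torch.randn(8, 256)
    feats = torch.cat([encoder(src), encoder(tgt)], dim=0)
    labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)], dim=0)

    # The discriminator learns to separate domains; the reversed gradient
    # pushes the encoder toward domain-invariant (aligned) features.
    loss = bce(disc(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In such a setup, alignment quality is typically controlled by the gradient-reversal weight (lambd here), which is often ramped up over training; how the paper balances this against its few-shot feature-learning objective is not stated in the abstract.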
Keywords
Object tracking, few-shot learning, domain adaptation, Transformer