Learning Tracking Representations from Single Point Annotations
arXiv (2024)
Abstract
Existing deep trackers are typically trained on large-scale video frames
with annotated bounding boxes. However, these bounding boxes are expensive and
time-consuming to annotate, particularly for large-scale datasets. In this
paper, we propose to learn tracking representations from single point
annotations (i.e., 4.5x faster to annotate than traditional bounding boxes)
in a weakly supervised manner. Specifically, we propose a soft contrastive
learning (SoCL) framework that incorporates a target objectness prior into
end-to-end contrastive learning. Our SoCL consists of adaptive positive- and
negative-sample generation, which is memory-efficient and effective for
learning tracking representations. We apply the learned SoCL representation
to visual tracking and show that our method can 1) achieve better performance
than the fully supervised baseline trained with box annotations under the same
annotation time cost; 2) achieve performance comparable to that of the fully
supervised baseline using the same number of training frames while reducing
annotation time cost by 78%; and 3) remain robust to annotation
noise.
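The abstract does not give the loss formulation, but the core idea it names, replacing hard positive/negative labels with objectness-derived soft targets in a contrastive objective, can be illustrated with a minimal sketch. The function name, the use of cosine similarity, the temperature value, and the weighting scheme below are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def soft_contrastive_loss(query, keys, soft_labels, temperature=0.1):
    """InfoNCE-style loss with soft targets (illustrative, not the paper's SoCL).

    query:       (d,) embedding of the anchor crop.
    keys:        (n, d) embeddings of candidate crops.
    soft_labels: (n,) objectness-derived weights in [0, 1], summing to 1,
                 replacing the usual hard one-hot positive label.
    """
    # Cosine similarities between the anchor and each candidate, scaled
    # by a temperature as in standard contrastive learning.
    q = query / np.linalg.norm(query)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = k @ q / temperature

    # Numerically stable log-softmax over the candidates.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))

    # Soft cross-entropy: each candidate contributes in proportion to its
    # objectness weight instead of a single hard positive.
    return -np.sum(soft_labels * log_probs)
```

With soft labels, a candidate crop that only partially overlaps the (unknown) target can still act as a weak positive, which is the kind of tolerance to point-annotation ambiguity the abstract's claims about annotation noise suggest.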