YouTube-BoundingBoxes: A Large High-Precision Human-Annotated Data Set for Object Detection in Video

2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Cited by 649 | Viewed 246
Abstract
We introduce a new large-scale data set of video URLs with densely-sampled object bounding box annotations called YouTube-BoundingBoxes (YT-BB). The data set consists of approximately 380,000 video segments, each about 19 s long, automatically selected to feature objects in natural settings without editing or post-processing, with a recording quality often akin to that of a hand-held cell phone camera. The objects represent a subset of the MS COCO label set. All video segments were human-annotated with high-precision classification labels and bounding boxes at 1 frame per second. The use of a cascade of increasingly precise human annotations ensures a label accuracy above 95% for every class, as well as tight bounding boxes. Finally, we train and evaluate well-known deep network architectures and report baseline figures for per-frame classification and localization to provide a point of comparison for future work. We also demonstrate how the temporal contiguity of video can potentially be used to improve such inferences. Please see the PDF file to find the URL to download the data. We hope the availability of such a large curated corpus will spur new advances in video object detection and tracking.
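As an illustration of how the per-frame annotations described above might be consumed, here is a minimal Python sketch that parses rows in the style of the released YT-BB detection CSVs. The assumed column order (youtube_id, timestamp_ms, class_id, class_name, object_id, object_presence, then normalized box coordinates) and the sample values are assumptions for illustration, not details stated in this abstract.

```python
import csv
from dataclasses import dataclass
from io import StringIO

@dataclass
class Annotation:
    youtube_id: str
    timestamp_ms: int  # annotations are sampled at roughly 1 frame per second
    class_name: str
    present: bool      # whether the object is visible in this frame
    box: tuple         # (xmin, xmax, ymin, ymax), normalized to [0, 1]

def parse_annotations(csv_text):
    """Parse YT-BB-style detection rows (column order is an assumption)."""
    rows = []
    for r in csv.reader(StringIO(csv_text)):
        youtube_id, ts, _class_id, cname, _obj_id, presence, xmin, xmax, ymin, ymax = r
        rows.append(Annotation(
            youtube_id=youtube_id,
            timestamp_ms=int(ts),
            class_name=cname,
            present=(presence == "present"),
            box=(float(xmin), float(xmax), float(ymin), float(ymax)),
        ))
    return rows

# Hypothetical example row; the video ID and values are illustrative only.
sample = "AAAAAAAAAAA,0,5,dog,0,present,0.10,0.60,0.20,0.80"
anns = parse_annotations(sample)
print(anns[0].class_name, anns[0].box)
```

Because the boxes are normalized, converting them to pixel coordinates only requires multiplying by the decoded frame's width and height, which keeps the annotations independent of the resolution at which a given YouTube video is downloaded.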
Keywords
large-scale data, video URLs, densely-sampled objects, box annotations, YouTube-BoundingBoxes, high-precision classification labels, video object detection, video segments, human annotations, object tracking