Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection

2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Cited by 544 | Viewed 200
Abstract
We introduce tools and methodologies to collect high-quality, large-scale fine-grained computer vision datasets using citizen scientists: crowd annotators who are passionate and knowledgeable about specific domains such as birds or airplanes. We worked with citizen scientists and domain experts to collect NABirds, a new high-quality dataset containing 48,562 images of North American birds across 555 categories, with part annotations and bounding boxes. We find that citizen scientists are significantly more accurate than Mechanical Turkers, at zero cost. We worked with bird experts to measure the quality of popular datasets such as CUB-200-2011 and ImageNet and found class label error rates of at least 4%. Nevertheless, we found that learning algorithms are surprisingly robust to annotation errors: this level of training data corruption leads to an acceptably small increase in test error if the training set is sufficiently large. At the same time, we found that an expert-curated, high-quality test set like NABirds is necessary to accurately measure the performance of fine-grained computer vision systems. We used NABirds to train a publicly available bird recognition service deployed on the website of the Cornell Lab of Ornithology.
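To make the robustness claim concrete, below is a minimal, hypothetical sketch (not code from the paper) that simulates class-label noise at roughly the 4% error rate reported for CUB-200-2011 and ImageNet. It uses synthetic data and scikit-learn's LogisticRegression as stand-ins for a fine-grained dataset and recognition model; the dataset sizes, noise rates, and `corrupt_labels` helper are all illustrative assumptions.

```python
# Hypothetical sketch: measure test accuracy as a function of the
# training-label error rate. Synthetic data and a linear classifier
# stand in for NABirds and a fine-grained recognition model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic multi-class problem as a stand-in for a fine-grained dataset.
X, y = make_classification(n_samples=20000, n_features=50, n_informative=30,
                           n_classes=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

def corrupt_labels(labels, error_rate, n_classes):
    """Flip a fraction of training labels to a uniformly random wrong class."""
    labels = labels.copy()
    n_flip = int(error_rate * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    for i in idx:
        wrong = [c for c in range(n_classes) if c != labels[i]]
        labels[i] = rng.choice(wrong)
    return labels

# Train on clean labels, then on labels corrupted at 4% and 10%;
# the test set is kept clean, mirroring an expert-curated benchmark.
for rate in (0.0, 0.04, 0.10):
    noisy = corrupt_labels(y_train, rate, n_classes=10)
    clf = LogisticRegression(max_iter=1000).fit(X_train, noisy)
    print(f"label error rate {rate:.0%}: test accuracy "
          f"{clf.score(X_test, y_test):.3f}")
```

Under these assumptions, the accuracy drop at a 4% label error rate is typically small when the training set is large, consistent with the paper's observation, while the clean test set remains essential for measuring that drop at all.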
Keywords
bird recognition app, fine-grained dataset collection, high-quality large-scale fine-grained computer vision datasets, citizen scientists, crowd annotators, NABirds, North American birds, part annotations, bounding boxes, bird experts, CUB-200-2011 dataset, ImageNet dataset, class label error rates, learning algorithms, annotation errors, data corruption, fine-grained computer vision systems