How hard can it be? Estimating the difficulty of visual search in an image

2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Cited by 131 | Views 98
Abstract
We address the problem of estimating image difficulty, defined as the human response time for solving a visual search task. We collect human annotations of image difficulty for the PASCAL VOC 2012 data set through a crowd-sourcing platform. We then analyze which human-interpretable image properties can have an impact on visual search difficulty, and how accurate those properties are at predicting difficulty. Next, we build a regression model based on deep features learned with state-of-the-art convolutional neural networks and show that it yields better results for predicting the ground-truth visual search difficulty scores produced by human annotators. Our model is able to correctly rank about 75% of image pairs according to their difficulty score. We also show that our difficulty predictor generalizes well to new classes not seen during training. Finally, we demonstrate that our predicted difficulty scores are useful for weakly supervised object localization (8% improvement) and semi-supervised object classification (1% improvement).
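The abstract outlines a concrete pipeline: extract deep features from a pretrained CNN, fit a regressor to the crowd-sourced difficulty scores, and evaluate how often pairs of images are ranked consistently with the human scores. The sketch below is a minimal illustration of that idea, not the authors' implementation; the ResNet-18 backbone, scikit-learn's NuSVR as the regressor, the 70/30 split, and all hyper-parameters are assumptions made for this example.

```python
# Illustrative sketch (not the authors' code): deep features + support vector
# regression to predict per-image difficulty, evaluated by pairwise ranking accuracy.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.svm import NuSVR
from sklearn.model_selection import train_test_split


def extract_features(image_paths):
    """Return one deep feature vector per image (penultimate-layer activations)."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # drop the classification head
    backbone.eval()
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(x).squeeze(0).numpy())
    return np.stack(feats)


def pairwise_ranking_accuracy(y_true, y_pred):
    """Fraction of image pairs whose predicted order matches the ground truth."""
    correct, total = 0, 0
    for i in range(len(y_true)):
        for j in range(i + 1, len(y_true)):
            if y_true[i] == y_true[j]:
                continue
            total += 1
            if (y_true[i] - y_true[j]) * (y_pred[i] - y_pred[j]) > 0:
                correct += 1
    return correct / max(total, 1)


def train_difficulty_predictor(image_paths, difficulty_scores):
    """Fit a regressor on deep features and report pairwise ranking accuracy.

    image_paths       -- list of image files (e.g. from PASCAL VOC 2012)
    difficulty_scores -- one crowd-sourced difficulty score per image
    """
    X = extract_features(image_paths)
    X_train, X_test, y_train, y_test = train_test_split(
        X, np.asarray(difficulty_scores), test_size=0.3, random_state=0)
    regressor = NuSVR(kernel="rbf", nu=0.5, C=1.0)  # illustrative hyper-parameters
    regressor.fit(X_train, y_train)
    y_pred = regressor.predict(X_test)
    return regressor, pairwise_ranking_accuracy(y_test, y_pred)
```

The pairwise accuracy returned by `train_difficulty_predictor` corresponds to the quantity the abstract quotes when it states that about 75% of image pairs are ranked correctly by the proposed model.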
Keywords
image visual search, image difficulty, visual search task, human annotations, PASCAL VOC 2012 data set, crowd-sourcing platform, human-interpretable image properties, regression model, deep features, convolutional neural networks, visual search difficulty scores, object localization, semi-supervised object classification