Understanding And Predicting Image Memorability At A Large Scale

2015 IEEE International Conference on Computer Vision (ICCV)

Cited by 385 | Views 59
Abstract
Progress in estimating visual memorability has been limited by the small scale and lack of variety of benchmark data. Here, we introduce a novel experimental procedure to objectively measure human memory, allowing us to build LaMem, the largest annotated image memorability dataset to date (containing 60,000 images from diverse sources). Using Convolutional Neural Networks (CNNs), we show that fine-tuned deep features outperform all other features by a large margin, reaching a rank correlation of 0.64, near human consistency (0.68). Analysis of the responses of the high-level CNN layers shows which objects and regions are positively, and negatively, correlated with memorability, allowing us to create memorability maps for each image and provide a concrete method to perform image memorability manipulation. This work demonstrates that one can now robustly estimate the memorability of images from many different classes, positioning memorability and deep memorability features as prime candidates to estimate the utility of information for cognitive systems. Our model and data are available at: http://memorability.csail.mit.edu
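To make the evaluation described above concrete, the sketch below fine-tunes a pretrained CNN as a scalar memorability regressor and scores its predictions against human annotations with Spearman rank correlation, the metric behind the 0.64 figure. This is a minimal illustration under stated assumptions, not the authors' released MemNet: the ResNet-18 backbone, the MSE loss, and the toy random tensors standing in for LaMem images and scores are all placeholders.

```python
# Minimal sketch (not the authors' MemNet): fine-tune a pretrained CNN to predict a
# memorability score in [0, 1] and evaluate with Spearman rank correlation.
import torch
import torch.nn as nn
from torchvision import models
from scipy.stats import spearmanr

# Pretrained backbone; replace the classifier head with a single regression output.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.MSELoss()  # squared error on the scalar memorability score


def train_step(images, scores):
    """One fine-tuning step: images (N, 3, 224, 224), scores (N,) in [0, 1]."""
    backbone.train()
    optimizer.zero_grad()
    preds = backbone(images).squeeze(1)
    loss = criterion(preds, scores)
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def evaluate(images, scores):
    """Spearman rank correlation between predicted and human memorability scores."""
    backbone.eval()
    preds = backbone(images).squeeze(1)
    rho, _ = spearmanr(preds.numpy(), scores.numpy())
    return rho


if __name__ == "__main__":
    # Toy tensors stand in for LaMem images and their annotated scores.
    images = torch.randn(8, 3, 224, 224)
    scores = torch.rand(8)
    print("loss:", train_step(images, scores))
    print("spearman rho:", evaluate(images, scores))
```

The same trained backbone could, in principle, be probed layer by layer to see which regions drive high or low predicted scores, which is the idea behind the memorability maps mentioned in the abstract; that analysis is not shown here.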
Keywords
image memorability understanding, image memorability prediction, visual memorability, human memory, LaMem, largest annotated image memorability dataset, convolutional neural networks, rank correlation, human consistency, high-level CNN layers, memorability maps, deep memorability features