Understanding Deep Features with Computer-Generated Imagery

ICCV (2015)

Cited 169 | Views 99
Abstract
We introduce an approach for analyzing the variation of features generated by convolutional neural networks (CNNs) trained on large image datasets with respect to scene factors that occur in natural images. Such factors may include object style, 3D viewpoint, color, and scene lighting configuration. Our approach analyzes CNN feature responses with respect to different scene factors by controlling for them via rendering using a large database of 3D CAD models. The rendered images are presented to a trained CNN and responses for different layers are studied with respect to the input scene factors. We perform a linear decomposition of the responses based on knowledge of the input scene factors and analyze the resulting components. In particular, we quantify their relative importance in the CNN responses and visualize them using principal component analysis. We show qualitative and quantitative results of our study on three trained CNNs: AlexNet [18], Places [43], and Oxford VGG [8]. We observe important differences across the different networks and CNN layers with respect to different scene factors and object categories. Finally, we demonstrate that our analysis based on computer-generated imagery translates to the network representation of natural images.
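The decomposition described in the abstract can be read as an ANOVA-style split of each feature vector into a grand mean, per-factor components (e.g., object style and 3D viewpoint), and a residual, with PCA used to visualize a component of interest. The snippet below is a minimal NumPy sketch of that idea, not the authors' implementation; the two-factor layout, the factor names, and the helpers decompose_responses and pca_embed are assumptions made for illustration.

```python
import numpy as np

def decompose_responses(F):
    """Two-way linear decomposition of CNN feature responses (illustrative sketch).

    F: array of shape (n_styles, n_viewpoints, n_features), where F[s, v]
    is the network response for a rendered image of object style s seen
    from viewpoint v (a balanced grid of renderings is assumed).

    Returns the components and the share of total variance each explains.
    """
    mean = F.mean(axis=(0, 1), keepdims=True)      # grand-mean component
    style = F.mean(axis=1, keepdims=True) - mean   # per-style component
    view = F.mean(axis=0, keepdims=True) - mean    # per-viewpoint component
    residual = F - mean - style - view             # interaction + noise

    total = ((F - mean) ** 2).sum()
    shares = {
        "style": F.shape[1] * (style ** 2).sum() / total,
        "viewpoint": F.shape[0] * (view ** 2).sum() / total,
        "residual": (residual ** 2).sum() / total,
    }
    return mean, style, view, residual, shares

def pca_embed(component, n_dims=2):
    """Project one component's rows (one row per factor level) onto its top PCA directions."""
    X = component.reshape(-1, component.shape[-1])
    X = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_dims].T

# Toy usage with random responses standing in for real CNN features:
F = np.random.randn(10, 36, 4096)            # 10 styles x 36 viewpoints x feature dim
_, style, view, _, shares = decompose_responses(F)
print(shares)                                # relative importance of each factor
xy = pca_embed(style)                        # 2-D embedding of the style component
```

For a balanced rendering grid the three shares sum to one, which is what makes the relative importance of each scene factor directly comparable across layers and networks.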
Keywords
understanding deep features, computer-generated imagery, convolutional neural networks, image datasets, object style, 3D viewpoint, scene lighting configuration, CNN feature analysis, 3D CAD models, rendered images, input scene factors, linear decomposition, principal component analysis, object categories, network representation