Factorized Convolutional Networks: Unsupervised Fine-Tuning for Image Clustering

2018 IEEE Winter Conference on Applications of Computer Vision (WACV 2018)

Abstract
Deep convolutional neural networks (CNNs) have shown promise as universal representations for various image recognition tasks. One of their properties is the ability to transfer knowledge from a large annotated source dataset (e.g., ImageNet) to a (typically smaller) target dataset. This is usually accomplished through supervised fine-tuning on labeled new target data. In this work, we address "unsupervised fine-tuning," which transfers a pre-trained network to target tasks with unlabeled data, such as image clustering. To this end, we introduce group-sparse non-negative matrix factorization (GSNMF), a variant of NMF, to identify a rich set of high-level latent variables that are informative on the target task. The resulting "factorized convolutional network" (FCN) can itself be seen as a feed-forward model that combines a CNN with a two-layer structured NMF. We empirically validate our approach and demonstrate state-of-the-art image clustering performance on challenging scene (MIT-67) and fine-grained (Birds-200, Flowers-102) benchmarks. We further show that, when used as unsupervised initialization, our approach improves image classification performance as well.
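To illustrate the core idea of factorizing CNN features for clustering, the sketch below runs plain NMF with Lee–Seung multiplicative updates on a non-negative feature matrix (e.g., ReLU activations from a pre-trained CNN) and assigns each sample to its dominant latent factor. This is a minimal sketch only: the paper's GSNMF additionally imposes group sparsity and a two-layer structure, which are not reproduced here, and the function name and parameters are illustrative.

```python
import numpy as np

def nmf_cluster(V, k, n_iter=200, seed=0):
    """Cluster the columns of a non-negative feature matrix V (d x n).

    Factorizes V ~ W @ H with plain NMF, then labels each sample by the
    argmax over its latent coefficients in H. Illustrative stand-in for
    the paper's GSNMF, which adds group-sparsity constraints.
    """
    rng = np.random.default_rng(seed)
    d, n = V.shape
    # Strictly positive random initialization of both factors
    W = rng.random((d, k)) + 1e-4
    H = rng.random((k, n)) + 1e-4
    eps = 1e-10  # guard against division by zero
    for _ in range(n_iter):
        # Lee-Seung multiplicative updates for the Frobenius objective;
        # they preserve non-negativity of W and H by construction
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return H.argmax(axis=0)  # one cluster label per column of V
```

In the paper's setting, V would hold CNN activations of the unlabeled target images (non-negative after ReLU), so the latent factors act as high-level variables discovered without labels.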
Keywords
deep convolutional neural networks,fine-grained benchmarks,group-sparse nonnegative matrix factorization,feed-forward model,two-layer structured NMF,CNNs,GSNMF,image classification performance,unsupervised initialization,high-level latent variables,image clustering tasks,supervised fine-tuning,target dataset,image recognition tasks,universal representations,unsupervised fine-tuning,factorized convolutional networks