Learning Nearest-Neighbor Quantizers from Labeled Data by Information Loss Minimization

AISTATS (2007)

Cited by 34 | Viewed 15
Abstract
This paper proposes a technique for jointly quantizing continuous features and the posterior distributions of their class labels based on minimizing empirical information loss, such that the index K of the quantizer region to which a given feature X is assigned approximates a sufficient statistic for its class label Y. We derive an alternating minimization procedure for simultaneously learning codebooks in the Euclidean feature space and in the simplex of posterior class distributions. The resulting quantizer can be used to encode unlabeled points outside the training set and to predict their posterior class distributions, and has an elegant interpretation in terms of universal lossless coding. The promise of our method is demonstrated for the application of learning discriminative visual vocabularies for bag-of-features image classification.
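The alternating minimization described in the abstract can be illustrated with a toy sketch: alternate between assigning each point to the region that minimizes a combined geometric-plus-information cost, and updating both codebooks (centroids in feature space, codewords in the simplex of posteriors). The trade-off weight `lam`, the function names, and the exact cost form are illustrative assumptions, not the paper's precise objective.

```python
import numpy as np

def kl_rows(P, q, eps=1e-12):
    # KL divergence from each row of P (posteriors p(y|x)) to codeword q
    return np.sum(P * (np.log(P + eps) - np.log(q + eps)), axis=-1)

def fit_quantizer(X, P, K, lam=1.0, n_iter=50, seed=0):
    """Toy joint quantizer (illustrative sketch, not the paper's exact method).

    X : (n, d) continuous features
    P : (n, c) posterior class distributions p(y|x), rows sum to 1
    K : number of quantizer regions
    lam : assumed trade-off between distortion and information loss
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], K, replace=False)
    mu = X[idx].copy()   # codebook in Euclidean feature space
    q = P[idx].copy()    # codebook in the probability simplex
    for _ in range(n_iter):
        # assignment: squared distance plus lam * empirical information loss
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        info = np.stack([kl_rows(P, q[k]) for k in range(K)], axis=1)
        z = (d2 + lam * info).argmin(axis=1)
        # update: centroid = region mean; q_k = mean posterior of the region
        # (the mean posterior minimizes the expected KL to the codeword)
        for k in range(K):
            mask = z == k
            if mask.any():
                mu[k] = X[mask].mean(axis=0)
                q[k] = P[mask].mean(axis=0)
    return mu, q, z
```

At test time a new point would be encoded by its nearest `mu[k]`, and `q[k]` would serve as the predicted posterior for that region.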
Keywords
image classification, sufficient statistic, nearest neighbor