Efficiently Learning the Metric with Side-Information

ALGORITHMIC LEARNING THEORY, PROCEEDINGS (2003)

Cited by 42
Abstract
A crucial problem in machine learning is choosing an appropriate representation of the data, one that emphasizes the relations we are interested in. In many cases this amounts to finding a suitable metric on the data space. In the supervised case, Linear Discriminant Analysis (LDA) can be used to find a subspace in which the data structure is apparent; other ways to learn a suitable metric are found in [6] and [11]. Recently, however, significant attention has been devoted to learning a metric in the semi-supervised case. In particular, the work by Xing et al. [15] demonstrated how semi-definite programming (SDP) can be used to learn a distance measure directly from constraints given as side-information, and obtained a significant increase in clustering performance with the new representation. The approach is very interesting; however, the computational complexity of the method severely limits its applicability to real machine learning tasks. In this paper we present an alternative solution for incorporating side-information that specifies pairs of examples belonging to the same class. The approach is based on LDA and reduces to an efficiently solvable eigenproblem. The performance reached is very similar, but the complexity is only O(d^3) instead of O(d^6), where d is the dimensionality of the data. We also show how our method can be extended to deal with more general types of side-information.
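The abstract's construction can be sketched as follows: replace LDA's labeled within-class scatter with a scatter computed from the same-class pairs given as side-information, then solve a generalized eigenproblem. The snippet below is a minimal sketch under that reading, not the authors' exact formulation; the function name, the regularization term, and the use of total data scatter as a stand-in for between-class scatter are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def metric_from_pairs(X, pairs, n_components=2):
    """Learn a linear projection from same-class pair constraints.

    X      : (n, d) data matrix
    pairs  : list of (i, j) index pairs known to share a class
    Returns a (d, n_components) projection W; Euclidean distances
    in the projected space define the learned metric.
    """
    d = X.shape[1]

    # Scatter of within-pair differences: plays the role of
    # LDA's within-class scatter, built only from side-information.
    D = np.array([X[i] - X[j] for i, j in pairs])
    S_w = D.T @ D / len(pairs)

    # Total scatter of the data: stands in for between-class
    # scatter, which is unavailable without full labels (assumption).
    Xc = X - X.mean(axis=0)
    S_t = Xc.T @ Xc / len(X)

    # Regularize S_w so the generalized eigenproblem is well posed.
    S_w = S_w + 1e-6 * (np.trace(S_w) / d) * np.eye(d)

    # Generalized eigenproblem S_t w = lambda S_w w: directions with
    # large total variance relative to within-pair variance.
    eigvals, eigvecs = eigh(S_t, S_w)
    return eigvecs[:, ::-1][:, :n_components]  # top eigenvectors
```

The dominant cost is the d-by-d generalized eigendecomposition, which is O(d^3), consistent with the complexity claimed in the abstract.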
Keywords
satisfiability,data structure,machine learning,computational complexity