Extended input space support vector machine

IEEE Transactions on Neural Networks (2011)

Abstract
In some applications, the probability of error of a given classifier is too high for practical use, but we are allowed to gather additional independent test samples from the same class to reduce the probability of error of the final decision. From the point of view of hypothesis testing, the solution is given by the Neyman-Pearson lemma. However, there is no equivalent of the Neyman-Pearson lemma when the likelihoods are unknown and we are instead given a training dataset. In this brief, we explore two alternatives. First, we combine the soft (probabilistic) outputs of a given classifier to produce a consensus label for K test samples. Second, we build a new classifier that directly computes the label for K test samples. For this second approach, we need to define an extended-input-space training set and incorporate the known symmetries into the classifier. The latter approach gives more accurate results, as it only requires an accurate classification boundary, while the former needs an accurate posterior probability estimate over the whole input space. We illustrate our results with well-known databases.
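The two procedures sketched in the abstract can be illustrated concretely. Below is a minimal Python sketch, assuming scikit-learn; the function names (consensus_label, make_extended_dataset), the synthetic data, and the choice of an SVM with probabilistic outputs are our own illustration, not the paper's implementation. In particular, the paper incorporates the permutation symmetry of the K samples directly into the classifier (e.g., through the kernel), whereas this sketch approximates that symmetry by augmenting the extended-input-space training set with all K! block permutations.

```python
import numpy as np
from itertools import permutations
from sklearn.svm import SVC

def consensus_label(clf, X_group):
    """First approach: combine the soft (probabilistic) outputs of a
    single-sample classifier over K independent test samples assumed to
    share one label; independence lets us sum log-posteriors."""
    log_proba = np.log(clf.predict_proba(X_group) + 1e-12)  # shape (K, n_classes)
    return clf.classes_[np.argmax(log_proba.sum(axis=0))]

def make_extended_dataset(X, y, K, n_groups, rng):
    """Second approach: build an extended-input-space training set whose
    examples are concatenations of K same-class samples. Appending all
    K! block permutations of each group is a simple stand-in for the
    known permutation symmetry (the paper builds it into the classifier)."""
    Xe, ye = [], []
    classes = np.unique(y)
    for _ in range(n_groups):
        c = rng.choice(classes)
        idx = rng.choice(np.where(y == c)[0], size=K, replace=True)
        for perm in permutations(range(K)):
            Xe.append(np.concatenate([X[idx[p]] for p in perm]))
            ye.append(c)
    return np.array(Xe), np.array(ye)

# Sketch of usage on synthetic 2-D data, with K = 3 test samples per decision.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.repeat([0, 1], 100)

single = SVC(probability=True).fit(X, y)          # ordinary single-sample classifier
Xe, ye = make_extended_dataset(X, y, K=3, n_groups=200, rng=rng)
extended = SVC().fit(Xe, ye)                      # extended-input-space SVM

group = rng.normal(2, 1, (3, 2))                  # 3 samples, true class 1
print(consensus_label(single, group))             # consensus of soft outputs
print(extended.predict(group.reshape(1, -1))[0])  # direct K-sample decision
```

Summing log-posteriors is equivalent to the product rule for independent samples, which is why the first approach needs well-calibrated posterior estimates everywhere; the second approach only has to learn the K-sample decision boundary, which is the abstract's argument for its higher accuracy.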
Keywords
extended input space, support vector machines, kernel machines, machine learning, hypothesis testing, Neyman-Pearson lemma, probability of error, posterior probability, classification boundary, independent test samples, training dataset, databases