Multi-hypothesis classifier

Sayantan Sengupta, Sudip Sanyal

arXiv (2019)

Abstract
Accuracy is the most important parameter, among several others, that defines the effectiveness of a machine learning algorithm, and higher accuracy is always desirable. A vast number of well-established learning algorithms already exist in the scientific domain, each with its own merits and demerits, evaluated in terms of accuracy, speed of convergence, algorithmic complexity, generalization, and robustness, among other properties. Learning algorithms are also data-distribution dependent: each is suited to a particular distribution of data. Unfortunately, no classifier dominates across all data distributions, and the distribution of the data for the task at hand is usually unknown. Moreover, no single classifier can be sufficiently discriminative when the number of classes is large. The underlying problem is therefore that a single classifier is not enough to classify the whole sample space correctly. This thesis explores different techniques of combining classifiers so as to obtain optimal accuracy. Three classifiers are implemented: a plain nearest-neighbor classifier on raw pixels, a nearest-neighbor classifier on extracted structural features, and a nearest-neighbor classifier on extracted Gabor features. Five different combination strategies are devised, tested on Tibetan character images, and analyzed.
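The abstract describes nearest-neighbor classifiers built over three feature spaces and fused by some combination rule. Below is a minimal illustrative sketch in Python (NumPy, SciPy, scikit-learn) of one such combination: per-feature-space 1-NN classifiers fused by majority vote. The Gabor filter bank parameters, the projection-profile stand-in for the structural features, the voting rule, and all names below are assumptions for illustration only; the abstract does not specify the paper's feature extractors or its five combination strategies.

```python
# Sketch: three 1-NN classifiers over different feature spaces, combined
# by majority vote. All parameters below are illustrative assumptions,
# not the authors' implementation.
import numpy as np
from scipy.signal import convolve2d
from sklearn.neighbors import KNeighborsClassifier


def gabor_kernel(theta, size=9, sigma=2.0, lam=4.0):
    """Real part of a Gabor filter at orientation theta (assumed parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)


def gabor_features(images):
    """Filter each image with a small orientation bank and flatten the responses."""
    kernels = [gabor_kernel(t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
    return np.array([
        np.concatenate([convolve2d(img, k, mode="same").ravel() for k in kernels])
        for img in images
    ])


def structural_features(images):
    """Row/column projection profiles: a simple stand-in for structural features."""
    return np.array([
        np.concatenate([img.sum(axis=0), img.sum(axis=1)]) for img in images
    ])


def majority_vote(predictions):
    """Plurality vote over per-classifier label predictions (ties break low)."""
    preds = np.stack(predictions, axis=0)   # shape: (n_classifiers, n_samples)
    return np.array([np.bincount(col).argmax() for col in preds.T])


# Toy data standing in for Tibetan character images: 32x32 grayscale patches.
rng = np.random.default_rng(0)
X_img = rng.random((60, 32, 32))
y = rng.integers(0, 3, size=60)
X_train, X_test, y_train, y_test = X_img[:40], X_img[40:], y[:40], y[40:]

# One nearest-neighbor classifier per feature space.
views = {
    "raw":        (X_train.reshape(40, -1),        X_test.reshape(20, -1)),
    "structural": (structural_features(X_train),   structural_features(X_test)),
    "gabor":      (gabor_features(X_train),        gabor_features(X_test)),
}
predictions = []
for name, (F_train, F_test) in views.items():
    clf = KNeighborsClassifier(n_neighbors=1).fit(F_train, y_train)
    predictions.append(clf.predict(F_test))

combined = majority_vote(predictions)
print("combined accuracy:", (combined == y_test).mean())
```

With three voters the plurality vote is well defined except for three-way splits; other fusion rules the paper might use (e.g., distance-weighted voting or classifier cascades) would replace only the `majority_vote` step in this sketch.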