Capacity control in linear classifiers for pattern recognition

The Hague (1992)

Cited 31 | Views 13
Abstract
Achieving good performance in statistical pattern recognition requires matching the capacity of the classifier to the amount of training data. If the classifier has too many adjustable parameters (large capacity), it is likely to learn the training data without difficulty, but will probably not generalize properly to patterns that do not belong to the training set. Conversely, if the capacity of the classifier is not large enough, it might not be able to learn the task at all. In between, there is an optimal classifier capacity which ensures the best expected generalization for a given amount of training data. The method of structural risk minimization (SRM) refers to tuning the capacity of the classifier to the available amount of training data. This paper illustrates the method of SRM with several examples of algorithms. Experiments confirm theoretical predictions of performance improvement in application to handwritten digit recognition.
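The capacity-tuning idea described in the abstract can be sketched in code. The snippet below is not the paper's algorithm; it is a minimal illustration in which a nested sequence of hypothesis classes (as in SRM) is approximated by a decreasing sequence of L2 penalties on a linear classifier, and the penalty with the best validation error is selected. All names and the synthetic data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: few training points, many validation points,
# so that an over-large capacity can visibly hurt generalization.
def make_data(n):
    X = rng.normal(size=(n, 20))
    w_true = np.zeros(20)
    w_true[:3] = 1.0                      # only 3 informative dimensions
    y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
    return X, y

X_tr, y_tr = make_data(40)
X_va, y_va = make_data(200)

def fit_logreg(X, y, lam, steps=500, lr=0.1):
    """Gradient-descent logistic regression with L2 penalty lam."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + lam * w
        w -= lr * grad
    return w

def error_rate(w, X, y):
    return float(np.mean((X @ w > 0) != (y > 0.5)))

# Sweep capacity from small (heavy penalty) to large (no penalty)
# and keep the setting with the best validation error.
results = {lam: error_rate(fit_logreg(X_tr, y_tr, lam), X_va, y_va)
           for lam in (10.0, 1.0, 0.1, 0.01, 0.0)}
best_lam = min(results, key=results.get)
print(results, "best lambda:", best_lam)
```

The penalty strength plays the role of a capacity knob: a heavy penalty restricts the effective hypothesis class, a vanishing penalty allows the full parameter space, and the validation sweep approximates the optimum the abstract describes.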
Keywords
character recognition, learning (artificial intelligence), SRM, handwritten digit recognition, linear classifiers, optimal classifier capacity, pattern recognition, structural risk minimization, training data, training set, tuning