Minimum Generalization Via Reflection: A Fast Linear Threshold Learner

Machine Learning (1999)

Abstract
The number of adjustments required to learn the average LTU function of d features, each of which can take on n equally spaced values, grows as approximately n^2 d when the standard perceptron training algorithm is used on the complete input space of n^d points and perfect classification is required. We demonstrate a simple modification that reduces the observed growth rate in the number of adjustments to approximately d^2 (log(d) + log(n)) with most, but not all, input presentation orders. A similar speed-up is also produced by applying the simple but computationally expensive heuristic "don't overgeneralize" to the standard training algorithm. This performance is very close to the theoretical optimum for learning LTU functions by any method, and is evidence that perceptron-like learning algorithms can learn arbitrary LTU functions in polynomial, rather than exponential, time under normal training conditions. Similar modifications can be applied to the Winnow algorithm, achieving similar performance improvements and demonstrating the generality of the approach.
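As a point of reference for the adjustment counts discussed above, the following is a minimal sketch of the standard perceptron training loop on an LTU target, counting weight adjustments until perfect classification over the complete input grid. The target function, grid size, and loop structure here are illustrative assumptions, not details taken from the paper, and the reflection modification itself is not shown.

```python
# Hedged sketch: standard perceptron training on a linear threshold unit
# (LTU) target, counting the number of weight adjustments until the full
# input space is classified perfectly. The target LTU and grid below are
# illustrative assumptions, not the paper's experimental setup.

def train_perceptron(points, labels, max_epochs=1000):
    """Train an LTU with the standard perceptron rule.

    Returns (weights, bias, n_adjustments)."""
    d = len(points[0])
    w = [0.0] * d
    b = 0.0
    adjustments = 0
    for _ in range(max_epochs):
        errors = 0
        for x, y in zip(points, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                # Standard perceptron update: shift the hyperplane
                # toward the misclassified point.
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                adjustments += 1
                errors += 1
        if errors == 0:  # perfect classification over the whole grid
            break
    return w, b, adjustments

# Example: d = 2 features, n = 4 equally spaced values each, and an
# assumed target LTU "x0 + x1 > 3" over the complete 4 x 4 input space.
n = 4
points = [(i, j) for i in range(n) for j in range(n)]
labels = [1 if i + j > 3 else -1 for (i, j) in points]
w, b, adj = train_perceptron(points, labels)
print(adj)  # total number of adjustments until convergence
```

Because the example data is linearly separable with a margin, the loop is guaranteed to terminate; the abstract's claim is that on average such runs need on the order of n^2 d adjustments, which the modified algorithm reduces to roughly d^2 (log(d) + log(n)).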
Keywords
LTU, Winnow, Perceptron, Reflection, generalization