Entropy Regularized LPBoost

Algorithmic Learning Theory, Proceedings (2008)

Abstract
In this paper we discuss boosting algorithms that maximize the soft margin of the produced linear combination of base hypotheses. LPBoost is the most straightforward boosting algorithm for doing this. It maximizes the soft margin by solving a linear programming problem. While it performs well on natural data, there are cases where the number of iterations is linear in the number of examples instead of logarithmic. By simply adding a relative entropy regularization to the linear objective of LPBoost, we arrive at the Entropy Regularized LPBoost algorithm for which we prove a logarithmic iteration bound. A previous algorithm, called SoftBoost, has the same iteration bound, but the generalization error of this algorithm often decreases slowly in early iterations. Entropy Regularized LPBoost does not suffer from this problem and has a simpler, more natural motivation.
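To make the per-iteration step concrete: Entropy Regularized LPBoost picks its distribution over training examples by minimizing the maximum edge of the hypotheses found so far, plus a relative-entropy penalty to the uniform distribution, over the capped probability simplex. The sketch below is an illustrative rendering of that convex subproblem, not the paper's implementation; the margin matrix U (with U[m, i] = y_i * h_m(x_i)), the regularization parameter eta, and the capping parameter nu are assumed names, and cvxpy is used only as a convenient off-the-shelf solver.

```python
import numpy as np
import cvxpy as cp

def erlpboost_distribution(U, eta, nu):
    """Sketch of one Entropy Regularized LPBoost distribution update.

    U   : (t, N) array, U[m, i] = y_i * h_m(x_i), margins of the t
          hypotheses chosen so far on the N examples (assumed layout).
    eta : entropy-regularization strength (larger -> closer to LPBoost).
    nu  : soft-margin capping parameter, 1 <= nu <= N.
    """
    t, N = U.shape
    d = cp.Variable(N)
    d0 = np.full(N, 1.0 / N)  # uniform reference distribution

    # max edge over past hypotheses + (1/eta) * relative entropy to uniform
    objective = cp.Minimize(
        cp.max(U @ d) + (1.0 / eta) * cp.sum(cp.rel_entr(d, d0))
    )
    # capped probability simplex: probabilities summing to 1, each <= 1/nu
    constraints = [cp.sum(d) == 1, d >= 0, d <= 1.0 / nu]
    cp.Problem(objective, constraints).solve()
    return d.value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy margins of 3 weak hypotheses on 10 examples
    U = rng.choice([-1.0, 1.0], size=(3, 10))
    d = erlpboost_distribution(U, eta=10.0, nu=2.0)
    print(d.round(3), d.sum())
```

With a large eta the entropy term vanishes and the update approaches LPBoost's linear program; a moderate eta keeps the distribution close to uniform early on, which is what yields the logarithmic iteration bound claimed in the abstract.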
Keywords
linear combination, early iteration, natural data, entropy regularized LPBoost, linear programming problem, soft margin, logarithmic iteration, previous algorithm, entropy regularized LPBoost algorithm, linear objective