Prediction with Expert Advice by Following the Perturbed Leader for General Weights

Algorithmic Learning Theory, Proceedings (2004)

Cited by 39 | Views 265
Abstract
When applying aggregating strategies to Prediction with Expert Advice, the learning rate must be adaptively tuned. The natural choice of sqrt(complexity / current loss) renders the analysis of Weighted Majority derivatives quite complicated. In particular, for arbitrary weights no results have been proven so far. The analysis of the alternative "Follow the Perturbed Leader" (FPL) algorithm from [KV03] (based on Hannan's algorithm) is easier. We derive loss bounds for an adaptive learning rate, covering both finite expert classes with uniform weights and countable expert classes with arbitrary weights. For the former setup, our loss bounds match the best results known so far, while for the latter our results are new.
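The FPL scheme the abstract refers to can be sketched as follows: at each round, perturb every expert's cumulative loss with independent exponentially distributed noise scaled by the learning rate, and follow the expert whose perturbed loss is minimal. The sketch below is a minimal illustration under assumed conventions (losses in [0, 1], an adaptive rate proportional to 1/sqrt of the loss incurred so far); the function names `fpl_predict` and `run_fpl` are hypothetical, not from the paper.

```python
import random


def fpl_predict(cum_losses, eta, rng=random):
    """Pick the expert minimizing cumulative loss minus exponential noise / eta."""
    perturbed = [L - rng.expovariate(1.0) / eta for L in cum_losses]
    return min(range(len(perturbed)), key=lambda i: perturbed[i])


def run_fpl(loss_matrix, rng=random):
    """Run FPL over a T x n matrix of per-round expert losses; return total loss."""
    n = len(loss_matrix[0])
    cum = [0.0] * n          # cumulative loss of each expert
    total = 0.0              # loss incurred by the algorithm
    for losses in loss_matrix:
        # adaptive learning rate ~ 1/sqrt(current loss), as in the abstract
        eta = 1.0 / max(total, 1.0) ** 0.5
        i = fpl_predict(cum, eta, rng)
        total += losses[i]
        cum = [c + l for c, l in zip(cum, losses)]
    return total
```

With a large gap between experts, the perturbation rarely overturns the leader, so FPL tracks the best expert while the noise protects against adversarial sequences.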
Keywords
polynomials,computer sciences,theorem proving,set theory,expert systems,random variables,forecasting,decision theory