Exponentiated gradient algorithms for log-linear structured prediction

ICML 2007

Citations: 62
Abstract
Conditional log-linear models are a commonly used method for structured prediction. Efficient learning of parameters in these models is therefore an important problem. This paper describes an exponentiated gradient (EG) algorithm for training such models. EG is applied to the convex dual of the maximum likelihood objective; this results in both sequential and parallel update algorithms, where in the sequential algorithm parameters are updated in an online fashion. We provide a convergence proof for both algorithms. Our analysis also simplifies previous results on EG for max-margin models, and leads to a tighter bound on convergence rates. Experiments on a large-scale parsing task show that the proposed algorithm converges much faster than conjugate-gradient and L-BFGS approaches both in terms of optimization objective and test error.
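To make the update concrete, here is a minimal sketch of the exponentiated-gradient step the abstract refers to: a multiplicative update on a distribution over the simplex followed by renormalization (this is an illustrative toy instance on a quadratic objective, not the paper's dual of the log-linear likelihood; the function `eg_step`, the step size `eta`, and the target `t` are assumptions for the example).

```python
import numpy as np

def eg_step(u, grad, eta):
    # Exponentiated-gradient (multiplicative) update on the simplex:
    # u_i <- u_i * exp(-eta * grad_i), then renormalize so u stays a distribution.
    w = u * np.exp(-eta * grad)
    return w / w.sum()

# Toy example: minimize f(u) = 0.5 * ||u - t||^2 over the probability simplex,
# whose gradient is grad f(u) = u - t. Since t lies in the simplex, the
# minimizer is u = t, and the EG iterates converge to it.
t = np.array([0.7, 0.2, 0.1])
u = np.ones(3) / 3.0            # uniform initialization
for _ in range(200):
    u = eg_step(u, u - t, eta=0.5)
```

In the paper's setting the update is applied to the dual variables of the maximum-likelihood objective, either for all examples at once (parallel) or one example at a time (sequential/online); the sketch above shows only the shape of a single multiplicative step.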
Keywords
parallel update algorithm, sequential update algorithm, conditional log-linear model, exponentiated gradient algorithm, optimization objective, convergence proof, log-linear structured prediction, convergence rate, maximum likelihood objective, efficient learning, conjugate gradient, max-margin models