Learning Efficiently with Approximate Inference via Dual Losses

ICML 2010

Cited by 89 | Viewed 93
Abstract
Many structured prediction tasks involve complex models where inference is computationally intractable, but where it can be well approximated using a linear programming relaxation. Previous approaches for learning for structured prediction (e.g., cutting-plane, subgradient methods, perceptron) repeatedly make predictions for some of the data points. These approaches are computationally demanding because each prediction involves solving a linear program to optimality. We present a scalable algorithm for learning for structured prediction. The main idea is to instead solve the dual of the structured prediction loss. We formulate the learning task as a convex minimization over both the weights and the dual variables corresponding to each data point. As a result, we can begin to optimize the weights even before completely solving any of the individual prediction problems. We show how the dual variables can be efficiently optimized using coordinate descent. Our algorithm is competitive with state-of-the-art methods such as stochastic subgradient and cutting-plane.
Keywords
linear program, subgradient method, cutting plane, linear programming relaxation
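
The abstract's core idea, minimizing jointly over the weights and the per-example dual variables of the LP-relaxed loss-augmented inference, can be illustrated with a small sketch. The following NumPy toy is not the authors' code: the model (two binary output nodes joined by one edge), all names such as `w_node`, `w_edge`, and `delta`, and the hyperparameters are illustrative assumptions, and the closed-form block update on `delta` is a standard MPLP-style step used here to stand in for the paper's coordinate descent. It alternates a few coordinate-descent passes on the duals with a subgradient step on the weights:

```python
import numpy as np

rng = np.random.default_rng(0)

D = 3                        # node feature dimension (arbitrary toy choice)
w_node = np.zeros((2, D))    # one weight vector per node state
w_edge = np.zeros((2, 2))    # edge potential table

X = rng.normal(size=(2, D))  # features of the two output nodes (synthetic)
y = np.array([0, 1])         # gold labels for the single training example

delta = np.zeros((2, 2))     # dual variables delta[i, s], kept across epochs

eta, lam = 0.1, 0.01         # step size and L2 regularization (assumed values)
for epoch in range(50):
    # Loss-augmented node potentials: theta_i(s) + Hamming(s, y_i).
    theta_n = X @ w_node.T + (np.arange(2)[None, :] != y[:, None])
    theta_e = w_edge

    # (1) A few cheap block-coordinate updates on the duals; the inner LP
    #     is never solved to optimality before the weights move again.
    for _ in range(2):
        for i in (0, 1):
            if i == 0:   # edge subproblem marginalized over the other node
                b = (theta_e - delta[1][None, :]).max(axis=1)
            else:
                b = (theta_e - delta[0][:, None]).max(axis=0)
            # Closed-form block minimizer: equalizes node and edge maxima.
            delta[i] = (b - theta_n[i]) / 2.0

    # (2) Subgradient step on w at the current duals: the subproblem
    #     argmaxes play the role of the loss-augmented prediction.
    y_node = (theta_n + delta).argmax(axis=1)
    s1, s2 = np.unravel_index(
        (theta_e - delta[0][:, None] - delta[1][None, :]).argmax(), (2, 2))

    g_node = np.zeros_like(w_node)
    for i in (0, 1):
        g_node[y_node[i]] += X[i]   # push down the current augmented winner
        g_node[y[i]] -= X[i]        # push up the gold label
    g_edge = np.zeros_like(w_edge)
    g_edge[s1, s2] += 1.0
    g_edge[y[0], y[1]] -= 1.0

    w_node -= eta * (g_node + lam * w_node)
    w_edge -= eta * (g_edge + lam * w_edge)
```

Because `delta` is carried across weight updates, no prediction problem is ever solved to optimality; the duals are only improved a little before each weight step, which is the source of the scalability the abstract describes.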