Gradient Boosting for Conditional Random Fields
International Conference on Artificial Intelligence and Statistics (AISTATS), 2015
Abstract
In this paper, we present a gradient boosting algorithm for tree-shaped conditional random fields (CRFs). Conditional random fields are an important class of models for accurate structured prediction, but effective design of the feature functions is a major challenge when applying CRF models to real-world data. Gradient boosting, which can induce and select functions, is a natural candidate solution for the problem. However, it is non-trivial to derive gradient boosting algorithms for CRFs, due to the dense Hessian matrices introduced by variable dependencies. We address this challenge by deriving a Markov chain mixing rate bound to quantify the dependencies, and introduce a gradient boosting algorithm that iteratively optimizes an adaptive upper bound of the objective function. The resulting algorithm induces and selects features for CRFs via functional space optimization, with provable convergence guarantees. Experimental results on three real-world datasets demonstrate that the mixing rate based upper bound is effective for training CRFs with non-linear potentials.

Descriptors: LEARNING MACHINES, ALGORITHMS, OPTIMIZATION
Subject Categories: Cybernetics
Distribution Statement: APPROVED FOR PUBLIC RELEASE
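To ground the functional-space optimization idea the abstract refers to, the following is a minimal sketch of generic gradient boosting with decision stumps on squared loss. It illustrates the general framework (fitting each new function to the negative gradient of the loss), not the paper's CRF-specific algorithm; all function names here are illustrative.

```python
import numpy as np

def fit_stump(x, residual):
    """Fit a depth-1 regression tree: pick the threshold and leaf values
    minimizing squared error against the current residual."""
    best = None
    for t in np.unique(x)[:-1]:
        left, right = residual[x <= t], residual[x > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def boost(x, y, rounds=50, lr=0.3):
    """Functional gradient boosting: each round fits a stump to the
    negative gradient of squared loss (i.e., the residual) and takes a
    small step in function space."""
    pred = np.zeros_like(y)
    for _ in range(rounds):
        h = fit_stump(x, y - pred)   # negative gradient = y - pred
        pred += lr * h(x)
    return pred

x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x)
pred = boost(x, y)
mse = np.mean((y - pred) ** 2)
```

For CRFs, the complication the paper addresses is that the loss gradient at one output variable depends on the others through the dense Hessian, which this scalar-regression sketch does not exhibit.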