Distributed training for Conditional Random Fields

NLPKE (2010)

Abstract
This paper proposes a novel distributed training method for Conditional Random Fields (CRFs) that uses clusters built from commodity computers. The method employs the Message Passing Interface (MPI) to handle large-scale data in two steps. First, the entire training set is divided into small pieces, each of which can be processed by a single node. Second, instead of having a root node collect all features, a new criterion splits the whole feature set into non-overlapping subsets so that each node maintains the global information for one subset. Experiments on large-scale Chinese word segmentation (WS) show a significant reduction in both training time and memory usage while preserving segmentation accuracy.
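The second step maps naturally onto MPI's reduce-scatter collective: every node computes gradient contributions for all features on its own data shard, and the collective sums those contributions while leaving each node with the global gradient for only its own feature subset. The sketch below illustrates this pattern with mpi4py; the feature count, the even block partitioning, and the placeholder gradient computation are assumptions for illustration, not the paper's actual partitioning criterion.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

NUM_FEATURES = 1_000_000          # hypothetical global feature count
assert NUM_FEATURES % size == 0   # assume the subsets divide evenly
block = NUM_FEATURES // size      # size of the feature subset each node owns

# Step 1 (data split): each node runs forward-backward over its own shard of
# the training data and accumulates gradient contributions for ALL features.
# A random vector stands in for that computation here.
local_grad = np.random.rand(NUM_FEATURES)

# Step 2 (feature split): rather than gathering every gradient at a root
# node, Reduce_scatter_block sums contributions across nodes and leaves node
# r holding only the global gradient for feature block r -- the
# non-overlapping subsets described in the abstract.
owned_grad = np.empty(block)
comm.Reduce_scatter_block(local_grad, owned_grad, op=MPI.SUM)

# Each node can now update the weights it owns (e.g. one L-BFGS or SGD step),
# then the nodes exchange updated blocks before the next pass.
owned_weights = -0.01 * owned_grad            # toy update for illustration
all_weights = np.empty(NUM_FEATURES)
comm.Allgather(owned_weights, all_weights)
```

Launched with, e.g., `mpiexec -n 4 python demo.py`, the pattern avoids any single node acting as a collection bottleneck, and the synchronized per-feature state each node must maintain shrinks roughly in proportion to the number of nodes.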
Keywords
distributed training, Conditional Random Fields, Message Passing Interface, Chinese word segmentation, large-scale data, natural language processing, accuracy