Efficient Parallel Learning of Hidden Markov Chain Models on SMPs

IEICE Transactions on Information and Systems (2010)

Abstract
Quad-core CPUs have become a common desktop configuration in today's offices. The growing number of processor cores on a single chip opens new opportunities for parallel computing. Our goal is to exploit multi-core as well as multi-processor architectures to speed up large-scale data mining algorithms. In this paper, we present a general parallel learning framework, Cut-And-Stitch, for training hidden Markov chain models. In particular, we propose two model-specific variants: CAS-LDS for learning linear dynamical systems (LDS) and CAS-HMM for learning hidden Markov models (HMM). Our main contribution is a novel method for handling the data dependencies induced by the chain structure of the hidden variables, which allows the EM-based parameter learning algorithm to be parallelized. We implement CAS-LDS and CAS-HMM using OpenMP on two supercomputers and a quad-core commercial desktop. The experimental results show that the parallel algorithms obtained with Cut-And-Stitch achieve accuracy comparable to, and almost linear speedup over, the traditional serial versions.
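To make the cut-and-stitch idea from the abstract concrete, the following is a minimal sketch, not the authors' published CAS-HMM algorithm: the observation sequence is cut into blocks, each block runs a local HMM forward recursion in parallel (via OpenMP) starting from a guessed incoming boundary distribution, and a cheap sequential stitch pass then propagates corrected boundaries between blocks. The 2-state HMM parameters, the toy observation sequence, and the fixed number of sweeps are hypothetical placeholders; the paper's full method also handles backward messages and the M-step, which are omitted here.

// Minimal cut-and-stitch sketch for the HMM forward pass (illustrative only;
// parameters and data are hypothetical, not from the paper).
#include <omp.h>
#include <algorithm>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// One forward step: alpha_t(j) = B[j][obs] * sum_i alpha_{t-1}(i) * A[i][j],
// normalized to avoid numerical underflow (scaling factors omitted).
static Vec forward_step(const Vec& prev, const Mat& A, const Mat& B, int obs) {
    int N = (int)prev.size();
    Vec cur(N, 0.0);
    for (int j = 0; j < N; ++j) {
        double s = 0.0;
        for (int i = 0; i < N; ++i) s += prev[i] * A[i][j];
        cur[j] = s * B[j][obs];
    }
    double z = 0.0;
    for (double v : cur) z += v;
    for (double& v : cur) v /= (z > 0.0 ? z : 1.0);
    return cur;
}

int main() {
    // Toy 2-state, 2-symbol HMM (hypothetical parameters).
    Mat A = {{0.9, 0.1}, {0.2, 0.8}};
    Mat B = {{0.7, 0.3}, {0.4, 0.6}};
    Vec pi = {0.5, 0.5};
    std::vector<int> obs = {0,0,1,0,1,1,0,1,1,1,0,0,1,0,1,1};

    int T = (int)obs.size();
    int P = omp_get_max_threads();
    int block = (T + P - 1) / P;

    // Boundary guesses: one incoming alpha per block (uniform to start).
    std::vector<Vec> boundary(P, Vec(pi.size(), 1.0 / pi.size()));
    boundary[0] = pi;
    std::vector<Vec> block_exit(P);

    // A few cut-and-stitch sweeps; in practice one iterates until the
    // block boundaries converge.
    for (int sweep = 0; sweep < 3; ++sweep) {
        // CUT: each block runs its local forward recursion in parallel.
        #pragma omp parallel for schedule(static)
        for (int b = 0; b < P; ++b) {
            int lo = b * block, hi = std::min(T, lo + block);
            Vec alpha = boundary[b];
            for (int t = lo; t < hi; ++t)
                alpha = forward_step(alpha, A, B, obs[t]);
            block_exit[b] = alpha;
        }
        // STITCH: cheap sequential pass corrects each block's incoming boundary.
        for (int b = 1; b < P; ++b) boundary[b] = block_exit[b - 1];
    }

    printf("final alpha: %.4f %.4f\n", block_exit[P - 1][0], block_exit[P - 1][1]);
    return 0;
}

The design point this sketch illustrates is the one named in the abstract: the expensive per-timestep work is done independently within blocks, while the sequential stitch touches only the few boundary vectors, so the wall-clock cost is dominated by the parallel phase and the speedup approaches the number of cores.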
Keywords
linear dynamical systems, hidden Markov models, OpenMP, expectation maximization (EM), optimization, multi-core