Parallel sequential minimal optimization for the training of support vector machines

IEEE Transactions on Neural Networks (2006)

Abstract
Sequential minimal optimization (SMO) is a popular algorithm for training support vector machines (SVMs), but it still requires a large amount of computation time when solving large-scale problems. This paper proposes a parallel implementation of SMO for SVM training, developed using the message passing interface (MPI). Specifically, the parallel SMO first partitions the entire training data set into smaller subsets and then runs multiple CPU processors simultaneously, each handling one of the partitioned subsets. Experiments show great speedup on the adult data set and the Modified National Institute of Standards and Technology (MNIST) data set when many processors are used, and satisfactory results on the Web data set.
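The partition-and-reduce pattern described above can be sketched in a few lines. This is a hypothetical illustration only, not the authors' code: MPI is replaced with Python's `concurrent.futures` for self-containment, and the per-worker task is reduced to the step where each processor scans its own data subset for the locally extreme optimality-condition values, after which a global reduction picks the overall working pair. All function and variable names (`local_extremes`, `parallel_select`, `f`) are invented for this sketch.

```python
# Sketch of the partition-and-reduce step in a parallel SMO-style solver.
# Assumption: `f` holds per-example values (e.g., optimality-condition terms)
# and the working pair is chosen from the global max and min of `f`.
import numpy as np
from concurrent.futures import ThreadPoolExecutor  # stand-in for MPI ranks

def local_extremes(f_part, idx_part):
    """Each worker scans only its own partition of the data."""
    i = int(np.argmax(f_part))
    j = int(np.argmin(f_part))
    return (f_part[i], idx_part[i]), (f_part[j], idx_part[j])

def parallel_select(f, n_workers=4):
    """Partition the index set, scan partitions in parallel, then reduce."""
    parts = np.array_split(np.arange(len(f)), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        results = list(ex.map(lambda idx: local_extremes(f[idx], idx), parts))
    # Global reduction: combine the workers' local extremes.
    up = max(r[0] for r in results)   # global maximum of f
    low = min(r[1] for r in results)  # global minimum of f
    return up[1], low[1]              # indices of the selected pair
```

In an actual MPI implementation, the reduction step would be an `MPI_Allreduce`-style collective over the workers' local results rather than an in-process loop; the partitioning of the training set across processors is the same idea.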
Keywords
parallel algorithms, support vector machines, learning (artificial intelligence), quadratic programming, sequential minimal optimization, message passing, NIST, message passing interface, support vector machine, kernel, machine learning, training data, parallel algorithm