A Distributed Computing Framework Based on Variance Reduction Method to Accelerate Training Machine Learning Models

2020 IEEE International Conference on Joint Cloud Computing

Abstract
To support large-scale intelligent applications, distributed machine learning based on JointCloud is an intuitive solution. However, distributed machine learning models are difficult to train because the corresponding optimization solvers converge slowly and place high demands on computing and memory resources. To overcome these challenges, we propose a computing framework for the L-BFGS optimization algorithm based on a variance reduction method, which can use a fixed large learning rate to achieve linearly accelerated convergence. To validate our claims, we conducted several experiments on multiple classical datasets. Experimental results show that the proposed framework accelerates the training process of the solver and obtains accurate results for machine learning algorithms.
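The abstract does not give the exact update rule, but the combination it describes, a variance-reduced gradient estimator driving L-BFGS updates with a fixed large step size, is commonly realized in the style of SVRG combined with the L-BFGS two-loop recursion. The sketch below is an illustrative, hypothetical implementation under that assumption (synthetic logistic-regression data, NumPy only); the function names and hyperparameters are ours, not the paper's.

```python
# Hypothetical sketch: SVRG-style variance-reduced gradients combined with an
# L-BFGS two-loop recursion, on ridge-regularized logistic regression with
# synthetic data. Illustrative only; not the paper's actual implementation.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))
lam = 1e-3  # ridge regularization strength (assumed)


def grad(w, idx):
    """Gradient of the regularized logistic loss over the samples in idx."""
    Xi, yi = X[idx], y[idx]
    p = 1.0 / (1.0 + np.exp(yi * (Xi @ w)))
    return -(Xi.T @ (yi * p)) / len(idx) + lam * w


def two_loop(g, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns H_k @ g."""
    q, alphas = g.copy(), []
    for s, yv in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (yv @ s)
        alphas.append(a)
        q -= a * yv
    if s_list:  # scale by gamma_k = s^T y / y^T y using the most recent pair
        s, yv = s_list[-1], y_list[-1]
        q *= (s @ yv) / (yv @ yv)
    for (s, yv), a in zip(zip(s_list, y_list), reversed(alphas)):
        q += (a - (yv @ q) / (yv @ s)) * s
    return q


w = np.zeros(d)
eta, m, batch, mem = 0.5, 50, 32, 10  # fixed large step size enabled by variance reduction
s_hist, y_hist = [], []
all_idx = np.arange(n)

for epoch in range(15):
    w_snap = w.copy()
    full_g = grad(w_snap, all_idx)        # full gradient at the snapshot point
    for _ in range(m):
        idx = rng.choice(n, batch, replace=False)
        # SVRG estimator: mini-batch gradient corrected by the snapshot gradient
        v = grad(w, idx) - grad(w_snap, idx) + full_g
        w_new = w - eta * two_loop(v, s_hist, y_hist)
        # maintain limited-memory curvature pairs (s_k, y_k)
        s_k, y_k = w_new - w, grad(w_new, idx) - grad(w, idx)
        if s_k @ y_k > 1e-10:
            s_hist.append(s_k); y_hist.append(y_k)
            if len(s_hist) > mem:
                s_hist.pop(0); y_hist.pop(0)
        w = w_new
    print(f"epoch {epoch:2d}  ||grad|| = {np.linalg.norm(grad(w, all_idx)):.3e}")
```

In a distributed setting such as the JointCloud scenario the abstract targets, the full snapshot gradient and the mini-batch gradients would be computed across workers and aggregated; the single-process version above only shows how variance reduction permits a fixed step size for the L-BFGS-style update.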
Keywords
machine learning, optimization algorithm, jointcloud, distributed computing, variance reduction