Efficient Distributed Machine Learning with Trigger Driven Parallel Training

IEEE Global Communications Conference (2016)

Abstract
Distributed machine learning is becoming increasingly popular for large-scale data mining on large-scale clusters. To mitigate the interference of straggler machines, recent distributed machine learning systems support flexible model consistency, which allows a worker to compute model updates from a locally cached stale model without waiting for the newest one, while limiting the asynchrony within a certain bound to guarantee algorithmic correctness. However, bounded asynchronous computing cannot tolerate persistent stragglers. We find that the root cause of this problem is the worker-driven parallel training mechanism in existing systems. To address the straggler problem fundamentally and fully exploit the efficiency of asynchrony, we propose a novel trigger-driven parallel training mechanism, in which the model server proactively triggers the collection of updates from workers instead of passively receiving them, inherently avoiding the coordination problem among workers. In addition, we devise a dynamic load balancing strategy to equalize the sampling frequency of each data sample. Furthermore, bounded asynchronous computing is introduced to achieve algorithmic efficiency as well as a convergence guarantee. Finally, we integrate the above techniques into a distributed machine learning system called Squirrel. Squirrel provides a simple programming interface and makes it easy to deploy machine learning algorithms on a distributed cluster. Compared with the traditional worker-driven parallel training mechanism, the trigger-driven mechanism improves the convergence speed of machine learning algorithms by up to 4x.
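To make the mechanism concrete, below is a minimal single-process sketch of the two ideas the abstract describes: the server initiates every update (trigger-driven) and enforces a staleness bound on each worker's cached model (bounded asynchronous computing). All names here (TriggerServer, Worker, STALENESS_BOUND) are illustrative assumptions for a toy simulation, not Squirrel's actual interface.

```python
# Toy sketch of trigger-driven training with bounded asynchrony.
# Hypothetical names; not the Squirrel API.

STALENESS_BOUND = 3   # max allowed gap between server clock and a worker's model
LEARNING_RATE = 0.05


class Worker:
    """A worker that trains on its own data shard with a cached model copy."""

    def __init__(self, shard):
        self.shard = shard          # list of (x, y) pairs
        self.local_w = 0.0          # cached (possibly stale) model
        self.local_version = 0

    def refresh(self, w, version):
        """Receive a fresh model copy pushed by the server."""
        self.local_w = w
        self.local_version = version

    def compute_update(self):
        """Least-squares gradient of a 1-D linear model on this shard."""
        n = len(self.shard)
        return sum((self.local_w * x - y) * x for x, y in self.shard) / n


class TriggerServer:
    """Model server that initiates each update instead of waiting for pushes."""

    def __init__(self, workers):
        self.w = 0.0                # global model parameter
        self.version = 0            # server clock, advanced once per update
        self.workers = workers

    def run(self, rounds):
        for _ in range(rounds):
            for worker in self.workers:
                # Bounded asynchrony: before triggering, refresh any worker
                # whose cached model has fallen more than the bound behind,
                # so a slow worker never blocks the others.
                if self.version - worker.local_version > STALENESS_BOUND:
                    worker.refresh(self.w, self.version)
                grad = worker.compute_update()   # server-triggered collection
                self.w -= LEARNING_RATE * grad
                self.version += 1


if __name__ == "__main__":
    # Three shards of the line y = 2x; the learned weight approaches 2.0.
    shards = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
    server = TriggerServer([Worker(s) for s in shards])
    server.run(rounds=50)
    print(f"learned weight: {server.w:.3f}")
```

The design point of the sketch is that the server decides when each worker contributes, so a consistently slow machine is simply triggered less often; in a worker-driven design the same machine would repeatedly hit the staleness bound and stall its peers.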
Keywords
Distributed machine learning, Straggler problem