When Edge Meets Learning: Adaptive Control For Resource-Constrained Distributed Machine Learning

IEEE INFOCOM 2018 - IEEE Conference on Computer Communications (2018)

Abstract
Emerging technologies and applications, including the Internet of Things (IoT), social networking, and crowd-sourcing, generate large amounts of data at the network edge. Machine learning models are often built from the collected data to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence rate of distributed gradient descent from a theoretical point of view, and based on this analysis we propose a control algorithm that determines the best trade-off between local updates and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimental results show that the proposed approach performs close to the optimum with various machine learning models and different data distributions.
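To make the local-update/global-aggregation trade-off concrete, the sketch below implements one plausible reading of the fixed-frequency baseline the abstract describes: each edge node runs tau local gradient steps on its own data, then the nodes average their parameters in a global aggregation step. The quadratic loss, the synthetic data split, and the names tau, eta, and rounds are illustrative assumptions, not the authors' implementation; in particular, the paper's control algorithm adapts tau at run time under a resource budget, which this fixed-tau sketch omits.

```python
# A minimal sketch of the local-update / global-aggregation loop, assuming a
# simple quadratic loss and a fixed number of local steps per round (tau).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data partitioned across 4 hypothetical edge nodes.
nodes = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]

def local_gradient(w, X, y):
    # Gradient of the squared loss 0.5 * ||Xw - y||^2 / n at one node.
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(3)                  # global model parameters
eta, tau, rounds = 0.05, 5, 20   # step size, local steps per round, rounds

for _ in range(rounds):
    # Local updates: each node takes tau gradient steps from the current
    # global model, using only its own data (no raw data leaves the node).
    local_models = []
    for X, y in nodes:
        w_i = w.copy()
        for _ in range(tau):
            w_i -= eta * local_gradient(w_i, X, y)
        local_models.append(w_i)
    # Global aggregation: average the local models (one communication round).
    # Larger tau saves communication; smaller tau tracks the global loss more
    # closely. This is the trade-off the paper's control algorithm tunes.
    w = np.mean(local_models, axis=0)
```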
Keywords
machine learning models,gradient-descent-based approaches,distributed gradient descent,resource-constrained distributed machine learning,network edge,edge nodes,data distributions,model parameter learning,loss function minimization,adaptive control,control algorithm