Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity

2022 IEEE International Conference on Communications Workshops (ICC Workshops)

Cited 1 | Viewed 3
Abstract
Federated learning (FL) has emerged as a popular technique for distributing machine learning training across wireless edge devices. We examine FL under two salient properties of contemporary networks: device-server communication delays and device computation heterogeneity. Our proposed StoFedDelAv algorithm incorporates a local-global model combiner into the FL synchronization step. We theoretically characterize the convergence behavior of StoFedDelAv and obtain the optimal combiner weights, which account for the global model delay and the expected local gradient error at each device. We then formulate a network-aware optimization problem that tunes the devices' minibatch sizes to jointly minimize energy consumption and machine learning training loss, and solve the non-convex problem through a series of convex approximations. Our simulations reveal that StoFedDelAv outperforms the current state of the art in FL, as evidenced by the improvements it obtains in the optimization objective.
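To make the synchronization step concrete, below is a minimal sketch of a delay-aware local-global combiner in a FedAvg-style training loop. All names here (stale_global_combine, fedavg_aggregate, gamma, delay), the least-squares objective, and the fixed gamma=0.8 are illustrative assumptions, not the paper's implementation; the paper derives the optimal combiner weight from the global model delay and the expected local gradient error, and that formula is not reproduced here.

```python
import numpy as np

def stale_global_combine(w_local, w_global_stale, gamma):
    """Blend a device's local model with a delayed copy of the global model."""
    return gamma * w_global_stale + (1.0 - gamma) * w_local

def local_minibatch_sgd(w, data, batch_size, lr, steps, rng):
    """Run a few SGD steps on random minibatches of a least-squares objective."""
    X, y = data
    for _ in range(steps):
        idx = rng.choice(len(y), size=batch_size, replace=False)
        grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        w = w - lr * grad
    return w

def fedavg_aggregate(models, sample_counts):
    """Standard FedAvg aggregation: average weighted by device dataset sizes."""
    total = float(sum(sample_counts))
    return sum((n / total) * w for w, n in zip(models, sample_counts))

rng = np.random.default_rng(0)
d, n_devices = 5, 3
datasets = []
for _ in range(n_devices):
    X = rng.normal(size=(200, d))
    y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=200)
    datasets.append((X, y))

w_locals = [np.zeros(d) for _ in range(n_devices)]
global_history = [np.zeros(d)]
delay = 2  # devices see the global model from `delay` rounds ago (assumed)

for rnd in range(10):
    w_stale = global_history[max(0, len(global_history) - 1 - delay)]
    new_locals = []
    for i, (X, y) in enumerate(datasets):
        # Synchronization: combine the device's own model with the stale
        # global model before local training resumes. gamma=0.8 is arbitrary
        # here; the paper would set it per-device from the delay and the
        # expected local gradient error.
        w_start = stale_global_combine(w_locals[i], w_stale, gamma=0.8)
        new_locals.append(local_minibatch_sgd(w_start, (X, y), 16, 0.01, 20, rng))
    w_locals = new_locals
    global_history.append(fedavg_aggregate(w_locals,
                                           [len(y) for _, y in datasets]))
```

Under vanilla FedAvg the combiner reduces to gamma=1 with zero delay; the sketch only illustrates how staleness enters the update, while the minibatch sizes (fixed at 16 above) are exactly the variables the paper's network-aware optimization would tune per device.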
Keywords
convex approximations, machine learning training loss, energy consumption minimization, nonconvex problem, network-aware optimization problem, local gradient error, global model delay, optimal combiner weights, convergence behavior, FL synchronization step, local-global model combiner, StoFedDelAv algorithm, device computation heterogeneity, device-server communication delays, contemporary networks, salient properties, wireless edge devices, distributed machine learning, edge heterogeneity, delay-aware federated learning design