Probabilistic Device Scheduling for Over-the-Air Federated Learning.

International Conference on Communication Technology (2023)

Abstract
Federated learning (FL) is an emerging distributed training scheme in which edge devices collaboratively train a model by uploading model updates instead of private data. To address the communication bottleneck, over-the-air (OTA) computation has been introduced to FL, allowing multiple edge devices to upload their gradient updates concurrently for aggregation. However, OTA computation suffers from communication errors, which are critically affected by the device selection policy and degrade the performance of the output model. In this paper, we propose a probabilistic device selection scheme, PO-FL, which effectively enhances the convergence performance of over-the-air FL. Specifically, each device is selected for OTA computation according to a predetermined probability, and its local update is scaled by this probability. By analyzing the convergence of PO-FL, we show that its convergence is determined by the device selection through the communication error and the variance of the global update. We then propose a device selection algorithm that jointly considers the channel conditions and gradient update importance of edge devices to optimize their selection probabilities. Experimental results on the MNIST dataset demonstrate that the proposed algorithm converges faster and learns better models than the baselines.
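The abstract does not spell out the exact aggregation rule or the probability-optimization objective, so the following is only a minimal Python/NumPy sketch of the general idea: each device is kept with its own selection probability, surviving updates are reweighted by the inverse of that probability so the expected aggregate stays unbiased (an importance-sampling style choice assumed here), and the probabilities themselves come from a hypothetical heuristic that blends channel gain with gradient-norm importance. Function names such as `selection_probabilities` and the blending weight `alpha` are illustrative and not taken from the paper.

```python
import numpy as np


def selection_probabilities(channel_gains, grad_norms, alpha=0.5, p_min=0.05):
    """Hypothetical heuristic: blend channel quality and gradient importance.

    The paper optimizes selection probabilities jointly over channel
    conditions and gradient importance; the exact objective is not given
    in the abstract, so this weighted blend is purely illustrative.
    """
    channel_score = channel_gains / channel_gains.max()
    grad_score = grad_norms / grad_norms.max()
    # Clip into (0, 1] so each entry is a valid per-device probability.
    return np.clip(alpha * channel_score + (1 - alpha) * grad_score, p_min, 1.0)


def probabilistic_ota_round(local_updates, probs, rng):
    """One aggregation round with probabilistic device selection.

    Device k participates with probability probs[k]; its update is scaled
    by 1 / probs[k] so the aggregate equals the plain average in expectation.
    """
    num_devices = len(local_updates)
    selected = rng.random(num_devices) < probs
    aggregate = np.zeros_like(local_updates[0])
    for k in range(num_devices):
        if selected[k]:
            aggregate += local_updates[k] / probs[k]
    return aggregate / num_devices, selected


# Toy usage with random data standing in for real gradients and channels.
rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(8)]
gains = rng.rayleigh(size=8)                       # stand-in for channel gains
norms = np.array([np.linalg.norm(u) for u in updates])
p = selection_probabilities(gains, norms)
global_update, chosen = probabilistic_ota_round(updates, p, rng)
```

In this sketch, devices with strong channels or large gradient norms are more likely to transmit, while the inverse-probability weighting compensates for the fact that weak or unimportant devices are sampled less often; how the actual PO-FL algorithm balances these two factors is determined by the optimization described in the paper.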
Keywords
Federated learning (FL), over-the-air computation (AirComp), device scheduling, channel awareness, gradient importance