Efficient Federated Learning With Channel Status Awareness and Devices' Personal Touch

IEEE Transactions on Mobile Computing (2024)

Abstract
Federated learning (FL) is a widely used distributed learning framework. However, constrained wireless environments and intrinsically heterogeneous data across devices can hinder its practicality. In this paper, we propose a communication-efficient FL framework that accelerates training by accounting for each device's transmission power and by personalizing the training of local models. In each round of training, we select the participating devices that minimize the upper bound on the convergence rate plus the corresponding communication overhead, subject to the transmit-power constraint. In addition, each device updates a personalized and sparse model that consumes only limited computation resources. We validate the proposed FL framework on various datasets, and the experimental results show that our framework speeds up training, taking approximately 40% less time than existing frameworks. The communication time can also be significantly reduced by employing our framework: compared with FedAvg, we achieve up to a 42.7% increase in test accuracy and save up to 74.3% in communication cost.
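The abstract suggests a per-round scheduler that trades off a convergence-rate bound against communication overhead under per-device transmit-power limits, combined with sparse personalized local updates. Below is a minimal sketch of that idea; the scoring function, power model, and magnitude-based sparsity mechanism are all illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def schedule_devices(grad_norms, comm_costs, tx_power, power_cap,
                     budget, lam=1.0):
    """Select up to `budget` devices for this round.

    grad_norms : per-device gradient-norm estimates (assumed proxy for how
                 much a device's update tightens the convergence bound)
    comm_costs : per-device upload overhead (e.g., seconds per round)
    tx_power   : per-device required transmit power
    power_cap  : per-device transmit-power limit
    """
    # Keep only devices that satisfy their transmit-power constraint.
    feasible = [i for i in range(len(grad_norms)) if tx_power[i] <= power_cap[i]]
    # Surrogate objective (an assumption): bound reduction minus a
    # lambda-weighted communication overhead; pick the top scorers.
    scores = {i: grad_norms[i] - lam * comm_costs[i] for i in feasible}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:budget]

def sparse_personalized_update(local_model, global_model, grad,
                               lr=0.1, alpha=0.5, keep_ratio=0.1):
    """One personalized step: interpolate toward the global model, take a
    gradient step, then keep only the largest-magnitude weights so local
    computation and uploads stay cheap (illustrative sparsity rule)."""
    mixed = alpha * local_model + (1.0 - alpha) * global_model
    updated = mixed - lr * grad
    k = max(1, int(keep_ratio * updated.size))
    thresh = np.partition(np.abs(updated).ravel(), -k)[-k]
    return np.where(np.abs(updated) >= thresh, updated, 0.0)

# Example round with 8 devices and a participation budget of 3.
rng = np.random.default_rng(0)
n = 8
selected = schedule_devices(grad_norms=rng.uniform(0.5, 2.0, n),
                            comm_costs=rng.uniform(0.1, 1.0, n),
                            tx_power=rng.uniform(0.5, 1.5, n),
                            power_cap=np.ones(n),
                            budget=3)
print("selected devices:", selected)
```

The greedy ranking here stands in for whatever optimization the paper actually solves; a real implementation would derive the device scores from the paper's convergence-rate bound rather than raw gradient norms.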
Keywords
Device scheduling, efficient federated learning, personalized training