Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients
CoRR (2024)
Abstract
Federated learning (FL) is an emerging distributed training paradigm that
aims to learn a common global model without exchanging or transferring the data
stored locally at different clients. Federated Averaging (FedAvg)-based
algorithms have gained substantial popularity in FL for reducing communication
overhead: each client performs multiple local iterations before communicating
with a central server. In this paper, we focus
on FL where the clients have diverse computation and/or communication
capabilities. In this setting, FedAvg can become inefficient: it requires
every client participating in a round's global aggregation to start its local
iterations from the latest global model, so the synchronization between fast
clients and stragglers can severely slow down the overall training process. To
address this issue, we propose an efficient asynchronous
federated learning (AFL) framework called Delayed Federated Averaging
(DeFedAvg). In DeFedAvg, clients are allowed to perform local training with
different stale global models at their own pace. Theoretical analyses
demonstrate that DeFedAvg achieves asymptotic convergence rates on par with
those of FedAvg for solving nonconvex problems. More importantly,
DeFedAvg is the first AFL algorithm that provably achieves the desirable linear
speedup property, which indicates its high scalability. Additionally, we carry
out extensive numerical experiments using real datasets to validate the
efficiency and scalability of our approach when training deep neural networks.
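To make the asynchronous scheme concrete, below is a minimal Python sketch of the delayed-update idea the abstract describes: each client runs its local steps from a possibly stale snapshot of the global model, and the server folds each update in as it arrives rather than waiting for every client. The quadratic local losses, the number of local steps, and the 1/num_clients averaging rule are illustrative assumptions; the paper's actual update rule and step sizes are not given in the abstract.

```python
import numpy as np

# Toy simulation of the delayed-update idea: clients train from stale
# snapshots of the global model; the server applies updates on arrival
# with no synchronization barrier. All specifics here (losses, step
# counts, averaging weight) are assumptions, not the paper's algorithm.

rng = np.random.default_rng(0)
dim, num_clients, local_steps, lr = 5, 8, 4, 0.1
targets = [rng.normal(size=dim) for _ in range(num_clients)]  # heterogeneous local data

global_model = np.zeros(dim)

def local_training(model, target):
    """A few local SGD steps on the client's quadratic loss ||w - target||^2."""
    w = model.copy()
    for _ in range(local_steps):
        w -= lr * 2.0 * (w - target)   # gradient step on the local loss
    return w - model                   # accumulated local update (delta)

# Each client holds the snapshot of the global model it last received;
# by the time its training finishes, that snapshot is stale.
snapshots = {c: global_model.copy() for c in range(num_clients)}

for _ in range(200):
    c = int(rng.integers(num_clients))                 # whichever client finishes now
    delta = local_training(snapshots[c], targets[c])
    global_model = global_model + delta / num_clients  # apply the delayed update
    snapshots[c] = global_model.copy()                 # client picks up a fresh copy

print("distance to consensus optimum:",
      np.linalg.norm(global_model - np.mean(targets, axis=0)))
```

In this sketch, a fast client would simply appear more often in the arrival sequence, so the server never idles waiting for stragglers, which is the source of the efficiency gain the abstract highlights.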