
A Cooperative Analysis to Incentivize Communication-Efficient Federated Learning

IEEE Transactions on Mobile Computing (2024)

Federated Learning (FL) has achieved state-of-the-art performance in training a global model in a decentralized and privacy-preserving manner. Many recent works have demonstrated that incentive mechanisms are of paramount importance to the success of FL. Existing incentive mechanisms for FL either neglect communication efficiency, consider communication efficiency but design the mechanism as a non-cooperative game under a complete-information assumption, or handle incomplete information but apply only to sequential interaction settings. We shed light on this problem from a cooperative perspective and propose an incentive mechanism for communication-efficient FL based on Nash bargaining theory. Specifically, we formulate the incentive mechanism as a one-to-many concurrent bargaining game between the aggregator and the clients, and systematically analyze the Nash bargaining solution (NBS, the game equilibrium) to design the mechanism. Notably, existing sequential bargaining is ill-suited to incentivizing FL because of its high (exponential) time complexity, which exacerbates the straggler problem in FL. Our formulated bargaining game is challenging due to its NP-hardness. We propose a probabilistic greedy-based client selection algorithm and derive an analytical payment solution as an approximate NBS. We prove a convergence guarantee for our incentive mechanism for communication-efficient FL. Finally, we conduct experiments on real-world datasets to evaluate the performance of our incentive mechanism.
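To give a flavor of what probabilistic greedy client selection can look like, here is a minimal sketch: clients are sampled one at a time with probability proportional to a utility score, and a sampled client is kept only if it fits the remaining payment budget. All names (`utilities`, `costs`, `budget`) and the specific sampling rule are illustrative assumptions, not the paper's actual algorithm or utility definition.

```python
import random


def probabilistic_greedy_select(utilities, costs, budget, seed=0):
    """Illustrative sketch of probabilistic greedy client selection.

    utilities: dict mapping client id -> nonnegative utility score
    costs:     dict mapping client id -> payment cost
    budget:    total payment budget of the aggregator

    NOTE: hypothetical stand-in for the paper's algorithm; the real
    mechanism derives payments from an approximate Nash bargaining
    solution, which is not modeled here.
    """
    rng = random.Random(seed)
    remaining = set(utilities)
    selected, spent = [], 0.0
    while remaining:
        ids = sorted(remaining)
        weights = [utilities[i] for i in ids]
        # Sample one candidate with probability proportional to utility.
        pick = rng.choices(ids, weights=weights, k=1)[0]
        remaining.discard(pick)
        # Greedily keep the candidate only if the budget still allows it.
        if spent + costs[pick] <= budget:
            selected.append(pick)
            spent += costs[pick]
    return selected, spent
```

The randomness lets high-utility clients be favored without deterministically excluding low-utility ones, which is one common way a "probabilistic greedy" rule trades off exploitation and fairness.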
Federated Learning, Incentive Mechanism, Bargaining, Communication Efficiency