AFL-GAN: adaptive federated learning for generative adversarial network with resource constraints

CCF Transactions on Pervasive Computing and Interaction (2024)

Abstract
Federated Learning and Generative Adversarial Networks (FL-GANs) are increasingly popular for solving practical applications, and their combination is even more effective. However, non-independent and identically distributed (non-IID) training data can make model convergence difficult and training unstable under FL, and the resulting client-drift problem can also adversely affect GAN training. To address these challenges, we propose an adaptive FL framework, AFL-GAN, which jointly optimizes client selection and the number of local training epochs so as to obtain high-performance, stable GANs in practical wireless environments. Specifically, we first give a toy example to illustrate why client selection and local training epochs need to be optimized in FL-GANs. For the two components of the GAN, we design the training process so that the discriminator is never exposed and only the generator is shared, which reduces communication overhead. We then formulate the problem of minimizing the AFL-GAN model loss under a given resource budget and analyze how client selection and the local training epoch affect the training performance of FL-GANs. Next, guided by the toy example and the theoretical analysis, and to address the client-drift challenge caused by non-IID data, we employ the maximum mean discrepancy (MMD) score to evaluate the contribution weight of each local model and leverage deep reinforcement learning (DRL) to adaptively optimize client selection and local training epochs. Finally, experimental results show that the proposed framework improves the learning performance of FL-GAN training while saving computation and communication resources, and performs well in resource-constrained situations.
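To make the MMD-based contribution weighting concrete, the following is a minimal sketch (not the authors' code): it estimates an RBF-kernel MMD score between each client's samples and a reference sample set, then maps lower scores (distributions closer to the reference) to larger aggregation weights. The kernel bandwidth `sigma`, the sample shapes, and the exponential score-to-weight mapping are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of x and rows of y.
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd_score(x, y, sigma=1.0):
    # Biased empirical estimate of squared MMD between sample sets x and y.
    k_xx = rbf_kernel(x, x, sigma).mean()
    k_yy = rbf_kernel(y, y, sigma).mean()
    k_xy = rbf_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2 * k_xy

def contribution_weights(client_samples, reference, sigma=1.0):
    # Smaller MMD (closer to the reference distribution) -> larger weight.
    scores = np.array([mmd_score(s, reference, sigma) for s in client_samples])
    inv = np.exp(-scores)  # assumed monotone mapping from score to weight
    return inv / inv.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, size=(256, 8))
    clients = [rng.normal(mu, 1.0, size=(256, 8)) for mu in (0.0, 0.5, 2.0)]
    print(contribution_weights(clients, reference))
```

In this toy setting the client whose samples match the reference distribution receives the highest weight, which is the qualitative behavior the paper attributes to MMD-based contribution scoring under non-IID data.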
Keywords
Federated learning, Generative adversarial network, User selection, Deep reinforcement learning