Time-Sensitive Federated Learning With Heterogeneous Training Intensity: A Deep Reinforcement Learning Approach

IEEE Transactions on Emerging Topics in Computational Intelligence (2024)

Abstract
Federated learning (FL) has recently received considerable attention because of its capability of collaboratively training machine learning models without exposing private data. Most existing FL schemes assume fixed or predetermined local training intensities (i.e., numbers of local iterations) at clients in each communication round, which neglects the effect of training-intensity determination on FL performance. Moreover, in traditional FL, all clients are assigned the same number of training iterations; as a result, a client with low computation or communication capability may slow down global model aggregation and cause high waiting latency at the other clients. To address these issues, this paper proposes a novel Time-sensitive FL mechanism with Heterogeneous Training Intensity at clients, named TFL_HTI. Specifically, we first analyze the convergence bound of FL with heterogeneous training iterations. Then, we design a Deep Reinforcement Learning (DRL) approach to determine the overall training intensity of the clients in each communication round. Building on this, we further design an optimal deterministic algorithm that assigns appropriate local iterations to clients according to their training capabilities. Finally, we conduct simulations to demonstrate the effectiveness of the proposed scheme.
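As a rough illustration of the heterogeneous-training-intensity idea (not the paper's TFL_HTI algorithm), the sketch below splits a round's overall iteration budget across clients in proportion to assumed compute speeds so that per-round completion times are roughly equalized and fast clients are not left waiting on stragglers. In the paper, the overall budget is chosen by a DRL agent and the per-client split by an optimal deterministic algorithm; the function names, the proportional rule, and the known client speeds here are illustrative assumptions only.

```python
# Hypothetical sketch (not the paper's TFL_HTI algorithm): given an overall
# training-intensity budget for a round, split local iterations across clients
# in proportion to their compute speed so that per-round wall-clock times are
# roughly equalized and no client waits long for stragglers.

def assign_local_iterations(total_iterations, client_speeds):
    """Split a round's iteration budget proportionally to client speed.

    total_iterations : int  -- overall intensity chosen for this round
                               (in the paper, a DRL agent picks this value)
    client_speeds    : list -- iterations per second each client can run
                               (assumed known or estimated; illustrative only)
    """
    total_speed = sum(client_speeds)
    # Proportional share, rounded down; every client runs at least 1 iteration.
    return [max(1, int(total_iterations * s / total_speed)) for s in client_speeds]


def round_completion_time(shares, client_speeds):
    """Per-round latency is set by the slowest client (compute time only)."""
    return max(k / s for k, s in zip(shares, client_speeds))


if __name__ == "__main__":
    speeds = [10.0, 4.0, 2.0]   # heterogeneous client capabilities (iters/sec)
    budget = 160                # overall training intensity for this round

    uniform = [budget // len(speeds)] * len(speeds)
    hetero = assign_local_iterations(budget, speeds)

    print("uniform split:", uniform, "-> round time",
          round(round_completion_time(uniform, speeds), 2), "s")
    print("hetero  split:", hetero, "-> round time",
          round(round_completion_time(hetero, speeds), 2), "s")
```

With the uniform split, the slowest client dominates the round (about 26.5 s in this toy setting), whereas the proportional split lets all three clients finish in roughly 10 s, which is the waiting-latency reduction the abstract describes.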
Keywords
Federated learning, time-sensitive, heterogeneous training, deep reinforcement learning