A Novel Federated Reinforcement Learning Algorithm with Historical Model Update Momentum

2023 2nd International Conference on Machine Learning, Cloud Computing and Intelligent Mining (MLCCIM), 2023

Abstract
Reinforcement learning is a crucial approach for addressing intricate problems in online learning, real-time prediction, and control decision-making. However, the limited efficacy of conventional reinforcement learning algorithms remains a significant impediment to their success in practical applications. To reconcile the disparity between large sample demands and limited sampling efficiency, federated reinforcement learning frameworks have been introduced to enable information and model sharing among agents while ensuring data privacy and security. Traditional federated reinforcement learning algorithms still face issues such as unstable model training, limited exploration, large gradient variance in model updates, and susceptibility to local optima. The present study proposes a Momentum-based Federated Reinforcement Learning (MFRL) algorithm. By incorporating historical model update momentum into model aggregation, MFRL effectively mitigates the issue of excessive gradient variance during model updates, thereby accelerating training, enhancing training stability in complex environments, and reducing the risk of convergence to local optima. Experimental results demonstrate that MFRL outperforms the Federated Averaging (FedAvg) algorithm and the baseline Soft Actor-Critic (SAC) algorithm in terms of average score and convergence speed on several classical continuous control tasks.
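The abstract does not spell out the aggregation rule, but the core idea of folding historical update momentum into server-side model aggregation can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact MFRL rule: the function name momentum_aggregate and the parameters beta and lr are hypothetical, and model parameters are treated as flat NumPy arrays for simplicity.

```python
import numpy as np

def momentum_aggregate(global_params, client_params_list, velocity,
                       lr=1.0, beta=0.9):
    """One round of server-side aggregation with historical update momentum.

    Hypothetical sketch: averages client updates (FedAvg-style), then
    blends in the historical update direction `velocity` to damp the
    round-to-round variance of the aggregated update.
    """
    # Average client deltas relative to the current global model.
    avg_delta = np.mean(
        [p - global_params for p in client_params_list], axis=0
    )
    # Exponential moving average of past aggregated updates (the momentum).
    velocity = beta * velocity + (1.0 - beta) * avg_delta
    # Apply the smoothed update to the global model.
    new_global = global_params + lr * velocity
    return new_global, velocity

# Toy usage: three simulated clients with flat 4-dimensional parameter vectors.
global_params = np.zeros(4)
velocity = np.zeros_like(global_params)
for _ in range(3):
    client_params = [global_params + 0.1 * np.random.randn(4) for _ in range(3)]
    global_params, velocity = momentum_aggregate(
        global_params, client_params, velocity
    )
```

Because the velocity term accumulates past aggregated updates, a single noisy round moves the global model less than under plain averaging, which matches the abstract's claim of reduced gradient variance and more stable training.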
Keywords
Federated reinforcement learning, model aggregation, momentum