An Offline-Transfer-Online Framework for Cloud-Edge Collaborative Distributed Reinforcement Learning

IEEE Trans. Parallel Distrib. Syst. (2024)

Abstract
Recent advances in deep reinforcement learning (DRL) have made it possible to train powerful agents to perform complex tasks in real-time environments. With next-generation communication technologies, cloud-edge collaborative artificial intelligence services built on such DRL agents are becoming a significant application scenario. However, agents with different algorithms and architectures in the same DRL scenario may not be compatible with one another, and training them is either time-consuming or resource-demanding. In this paper, we design a novel cloud-edge collaborative DRL training framework, named Offline-Transfer-Online, which accelerates the convergence of online DRL agents at the edge by letting them interact with offline agents in the cloud, while exchanging minimal data and without relying on high-quality offline datasets. Within this framework, we propose a novel algorithm-independent knowledge distillation method for online RL agents that leverages pre-trained models and the interface between agents and the environment to transfer distilled knowledge efficiently among multiple heterogeneous agents. Extensive experiments show that our method accelerates the convergence of various online agents by a factor of two to ten, while achieving comparable rewards across different environments.
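To make the knowledge-distillation idea concrete, the sketch below shows one generic form of policy distillation between a pre-trained offline (teacher) agent and an online (student) agent: the student matches the teacher's softened action distribution on a shared batch of observations, so only observations and logits need to cross the cloud-edge link. This is a minimal illustration under assumed names (PolicyNet, distill_step) and an assumed KL-divergence loss; the abstract does not specify the paper's actual Offline-Transfer-Online algorithm, which may differ.

```python
# Hypothetical sketch of policy distillation between heterogeneous agents.
# Not the paper's algorithm; an illustration of the general technique only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """A small policy network; teacher and student may differ in size."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # action logits

def distill_step(teacher: PolicyNet, student: PolicyNet,
                 optimizer: torch.optim.Optimizer,
                 obs: torch.Tensor, temperature: float = 2.0) -> float:
    """One distillation update: minimize KL(teacher || student) on the
    temperature-softened action distributions for a batch of observations."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(obs) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(obs) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    obs_dim, act_dim = 8, 4
    teacher = PolicyNet(obs_dim, act_dim, hidden=256)  # offline agent (cloud)
    student = PolicyNet(obs_dim, act_dim, hidden=64)   # online agent (edge)
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    batch = torch.randn(32, obs_dim)  # stand-in for environment observations
    print(distill_step(teacher, student, opt, batch))
```

Because the loss depends only on the agents' action distributions at the environment interface, the teacher and student architectures can be entirely different, which is what makes such a scheme algorithm-independent in spirit.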
Keywords
Distributed training, Offline-Transfer-Online, deep reinforcement learning, cloud-edge collaborative networks