Collective Deep Reinforcement Learning for Intelligence Sharing in the Internet of Intelligence-Empowered Edge Computing

IEEE Transactions on Mobile Computing (2023)

Abstract
Edge intelligence is emerging as a new interdisciplinary field that pushes learning intelligence from remote centers to the edge of the network. However, its widespread deployment raises new challenges in training efficiency and quality of service (QoS). Massive repetitive model training is ubiquitous because users inevitably require the same types of data and training results. In addition, a small volume of local data samples can cause model over-fitting. To address these issues, driven by the Internet of intelligence, this article proposes a distributed edge intelligence sharing scheme that allows distributed edge nodes to quickly and economically improve learning performance by sharing their learned intelligence. Considering the time-varying edge network states, including data collection states, computing and communication states, and node reputation states, the distributed intelligence sharing problem is formulated as a multi-agent Markov decision process (MDP). A novel collective deep reinforcement learning (CDRL) algorithm is then designed to obtain the optimal intelligence sharing policy, consisting of local soft actor-critic (SAC) learning at each edge node and collective learning between different edge nodes. Simulation results indicate that the proposed scheme outperforms the benchmark schemes in terms of learning efficiency and intelligence sharing efficiency.
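The abstract describes CDRL as local SAC learning at each edge node plus a collective learning step across nodes conditioned on states such as node reputation. As an illustration only, the sketch below assumes a reputation-weighted parameter-mixing rule for the collective step; the functions `local_update` and `collective_mix`, the toy objective, and the blending weights are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: each edge node keeps a policy parameter vector,
# performs a local (SAC-like) update, then periodically mixes parameters
# with peers weighted by reputation -- an assumed form of collective learning.

rng = np.random.default_rng(0)
N_NODES, DIM = 4, 8                           # number of edge nodes, parameter size
params = rng.normal(size=(N_NODES, DIM))      # one policy parameter vector per node
reputation = np.array([0.9, 0.6, 0.8, 0.7])   # assumed node reputation states

def local_update(theta, lr=0.05):
    """Stand-in for a local SAC gradient step on a toy objective."""
    grad = -theta                             # gradient of a placeholder objective
    return theta + lr * grad

def collective_mix(params, reputation):
    """Reputation-weighted averaging across nodes (assumed sharing rule)."""
    w = reputation / reputation.sum()
    shared = w @ params                       # weighted average of all node policies
    return 0.5 * params + 0.5 * shared        # blend local and shared intelligence

for step in range(100):
    params = np.array([local_update(p) for p in params])
    if step % 10 == 0:                        # periodic intelligence sharing round
        params = collective_mix(params, reputation)

print("parameter spread across nodes:", np.ptp(params, axis=0).max())
```

In this toy setup the mixing step pulls node policies toward a reputation-weighted consensus while each node keeps learning locally; the paper's actual CDRL update rule and state definitions may differ.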
Keywords
Distributed intelligence sharing, Internet of intelligence, edge computing, collective deep reinforcement learning