Towards Massive Distribution of Intelligence for 6G Network Management using Double Deep Q-Networks

IEEE Transactions on Network and Service Management (2023)

Abstract
In future 6G networks, the deployment of network elements is expected to be highly distributed, going beyond the level of distribution of existing 5G deployments. To fully exploit the benefits of such a distributed architecture, a paradigm shift from centralized to distributed management is needed. Reinforcement Learning (RL) is a promising choice for enabling distributed management, owing to its ability to learn dynamic changes in environments and to deal with complex problems. However, the deployment of highly distributed RL, termed massive distribution of intelligence, still faces several unsolved challenges. Existing RL solutions, based on Q-Learning (QL) and Deep Q-Network (DQN), do not scale with the number of agents. Therefore, current limitations, i.e., convergence, system performance, and training stability, need to be addressed to facilitate a practical deployment of massive distribution. To this end, we propose an improved Double Deep Q-Network (IDDQN) that addresses the long-term stability of the agents' training behavior. We evaluate the effectiveness of IDDQN for a beyond-5G/6G use case: auto-scaling virtual resources in a network slice. Simulation results show that IDDQN improves training stability over DQN and converges at least twice as fast as QL. In terms of the number of users served by a slice, IDDQN performs well, deviating on average by only 8% from the optimal solution. Furthermore, IDDQN is robust and resource-efficient after convergence. We argue that IDDQN is a better alternative than QL and DQN, and holds immense potential for efficiently managing 6G networks.
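As background to the DQN-vs-Double-DQN comparison in the abstract: the standard Double DQN target decouples action selection (online network) from action evaluation (target network), which mitigates the Q-value overestimation that destabilizes plain DQN training. Below is a minimal sketch of that target computation, not the paper's IDDQN; the function name and network callables are illustrative.

```python
import numpy as np

def double_dqn_targets(rewards, next_states, dones, gamma, online_q, target_q):
    """Compute Double DQN bootstrap targets.

    online_q / target_q: callables mapping a batch of states to an
    array of Q-values with shape (batch, n_actions).
    """
    # Online network selects the greedy next action...
    greedy_actions = np.argmax(online_q(next_states), axis=1)
    # ...while the target network evaluates that action.
    q_target_next = target_q(next_states)
    evaluated = q_target_next[np.arange(len(greedy_actions)), greedy_actions]
    # Terminal transitions (dones == 1) contribute no bootstrap term.
    return rewards + gamma * (1.0 - dones) * evaluated
```

In plain DQN, both the argmax and the evaluation use the same (target) network, which systematically overestimates Q-values; the decoupling above is the core stabilizing change that IDDQN builds on.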
Keywords
6G, network management, network automation, Reinforcement Learning, Machine Learning, distributed intelligence, model training stability, scalability