DQN with prioritized experience replay algorithm for reducing network blocking rate in elastic optical networks

Wan-Zhuo Yan, Xiao-Hui Li, Yue-Min Ding, Jie He, Bin Cai

Optical Fiber Technology (2024)

Abstract
The continuous growth of network communication demand places higher requirements on network infrastructures. The elastic optical network (EON) has great potential to support the continuing demand for communication bandwidth. Efficient use of EON bandwidth resources is particularly important for alleviating network blocking, and it depends on the routing, modulation, and spectrum allocation (RMSA) process. However, the time-varying state of an EON, caused by the uncertainty of future demands, makes online real-time RMSA considerably harder. To solve this problem, this paper proposes a Deep Q-Network (DQN) algorithm with a prioritized experience replay mechanism that performs the RMSA process in real time. The proposed algorithm has two parts. One is a Markov decision process (MDP) based state transfer for online RMSA using a trained Q-network. The other is an offline DQN-based algorithm that trains the Q-network to guide RMSA state-transfer decisions, in which prioritized experience replay and a SumTree are introduced to speed up DQN training. Simulation results show that, compared with the traditional DQN algorithm, the proposed algorithm nearly doubles the Q-network training speed, and that, compared with the traditional SP+FF (shortest path + first fit) algorithm, the trained Q-network reduces the blocking rate by nearly 35%.
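The SumTree mentioned in the abstract is the data structure that makes prioritized experience replay fast: each parent node stores the sum of its children's priorities, so sampling a transition in proportion to its priority (typically the magnitude of its TD error) takes O(log n) instead of a linear scan. The following is a minimal sketch of such a structure, not the authors' implementation; the class and method names are illustrative assumptions.

```python
class SumTree:
    """Binary tree over replay priorities. Leaves hold per-transition
    priorities; each internal node holds the sum of its children, so the
    root is the total priority mass. Note: this is an illustrative
    sketch, not the paper's code."""

    def __init__(self, capacity):
        self.capacity = capacity                 # max stored transitions
        self.tree = [0.0] * (2 * capacity - 1)   # internal nodes + leaves
        self.data = [None] * capacity            # stored transitions
        self.write = 0                           # next leaf slot to overwrite

    def total(self):
        return self.tree[0]                      # sum of all priorities

    def add(self, priority, transition):
        idx = self.write + self.capacity - 1     # leaf index in the tree array
        self.data[self.write] = transition
        self.update(idx, priority)
        self.write = (self.write + 1) % self.capacity  # ring-buffer overwrite

    def update(self, idx, priority):
        change = priority - self.tree[idx]
        self.tree[idx] = priority
        while idx != 0:                          # propagate change up to root
            idx = (idx - 1) // 2
            self.tree[idx] += change

    def sample(self, s):
        """Return (leaf index, priority, transition) for the leaf whose
        cumulative-priority segment contains s, 0 <= s <= total()."""
        idx = 0
        while idx < self.capacity - 1:           # descend until a leaf
            left = 2 * idx + 1
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = left + 1
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]
```

To draw a minibatch, one would split `[0, total())` into equal segments and call `sample` with a uniform draw from each segment, so high-priority (high-TD-error) transitions are replayed more often.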
Keywords
Elastic optical networks, Routing modulation and spectrum allocation process, Markov decision process, Deep Q-network, Prioritized experience replay mechanism, SumTree