DQ Scheduler: Deep Reinforcement Learning Based Controller Synchronization in Distributed SDN

ICC 2019 - 2019 IEEE International Conference on Communications (ICC)

Citations: 32 | Views: 502
Abstract
In distributed software-defined networks (SDN), multiple physical SDN controllers, each managing a network domain, are deployed to balance centralized control, scalability, and reliability requirements. In such a networking paradigm, controllers synchronize with each other to maintain a logically centralized network view. Despite various proposals for distributed SDN controller architectures, most existing works simply assume that such a logically centralized network view can be achieved with some synchronization design; the question of how exactly controllers should synchronize with each other to maximize the benefits of synchronization under eventual consistency assumptions is largely overlooked. To this end, we formulate the controller synchronization problem as a Markov Decision Process (MDP) and apply reinforcement learning combined with a deep neural network to train a smart controller synchronization policy, which we call the Deep-Q (DQ) Scheduler. Evaluation results show that the DQ Scheduler outperforms the anti-entropy algorithm implemented in the ONOS controller by up to 95.2% for inter-domain routing tasks.
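The abstract frames controller synchronization as an MDP solved with deep Q-learning. As an illustration only, the sketch below uses plain tabular Q-learning (standing in for the paper's deep network) on a toy single-neighbor MDP: the state is the staleness of a neighbor domain's view, and the action is whether to synchronize now. The state space, reward shape, `SYNC_COST`, and all other constants are hypothetical choices for this sketch, not values from the paper.

```python
import random

# Toy MDP: the state is the staleness (age) of one neighbor domain's view,
# capped at MAX_AGE. Actions: 0 = skip this round, 1 = synchronize.
# All constants below are hypothetical, chosen only for illustration.
MAX_AGE = 5
SYNC_COST = 0.3   # cost of one synchronization exchange
ACTIONS = (0, 1)

def step(age, action):
    """Return (next_age, reward). Syncing resets staleness but pays a cost;
    skipping lets staleness grow, with a penalty proportional to the new age."""
    if action == 1:
        return 0, -SYNC_COST
    nxt = min(age + 1, MAX_AGE)
    return nxt, -0.2 * nxt

def train(episodes=3000, horizon=20, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration and random restarts."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(MAX_AGE + 1) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(MAX_AGE + 1)  # random start so all states are visited
        for _ in range(horizon):
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            target = r + gamma * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(MAX_AGE + 1)}
    print(policy)  # learned sync/skip decision per staleness level
```

With these constants the learned policy synchronizes once the view is sufficiently stale, mirroring the trade-off the paper optimizes; the actual DQ Scheduler replaces the Q-table with a deep network over a much richer network-state representation.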
Keywords
DQ Scheduler, distributed software-defined networks, multiple physical SDN controllers, centralized control, logically centralized network view, distributed SDN controller architectures, controller synchronization problem, deep neural network, smart controller synchronization policy, Deep-Q Scheduler, ONOS controller, deep reinforcement learning based controller synchronization, anti-entropy algorithm, inter-domain routing tasks, Markov Decision Process