Deep Reinforcement Learning for Offloading and Resource Allocation in Vehicle Edge Computing and Networks

IEEE Transactions on Vehicular Technology (2019)

Citations: 321 | Views: 84
Abstract
Mobile Edge Computing (MEC) is a promising technology for extending diverse services to the edge of the Internet of Things (IoT). However, static edge server deployment may cause "service holes" in IoT networks, where the locations and service requests of User Equipments (UEs) change dynamically. In this paper, we first explore a vehicle edge computing network architecture in which vehicles act as mobile edge servers that provide computation services to nearby UEs. We then propose a vehicle-assisted offloading scheme for UEs that accounts for the delay of each computation task. Accordingly, an optimization problem is formulated to maximize the long-term utility of the vehicle edge computing network. Considering the stochastic vehicle traffic, dynamic computation requests, and time-varying communication conditions, the problem is further formulated as a semi-Markov process, and two reinforcement learning methods, a $Q$-learning based method and a deep reinforcement learning (DRL) method, are proposed to obtain the optimal policies for computation offloading and resource allocation. Finally, we analyze the effectiveness of the proposed scheme in the vehicular edge computing network through numerical results.
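The abstract describes a $Q$-learning based method for deciding when a UE should offload a task to a nearby vehicle edge server. As a minimal sketch of that idea, the toy example below runs tabular $Q$-learning on a hypothetical two-state offloading problem; the state space (whether a vehicle server is nearby), the action set, and the utility values are illustrative assumptions, not the paper's actual formulation.

```python
import random

# Hypothetical hyperparameters and action set (assumptions for illustration).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["local", "offload_to_vehicle"]

def reward(vehicle_nearby, action):
    # Assumed utility model: offloading pays off only when a vehicle
    # edge server is within range; local execution gives a small,
    # reliable utility either way.
    if action == "offload_to_vehicle":
        return 1.0 if vehicle_nearby else -1.0
    return 0.2

def train(episodes=5000, seed=0):
    random.seed(seed)
    # Q-table over (state, action); state is True iff a vehicle is nearby.
    q = {(s, a): 0.0 for s in (True, False) for a in ACTIONS}
    for _ in range(episodes):
        s = random.random() < 0.5
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        r = reward(s, a)
        s_next = random.random() < 0.5
        best_next = max(q[(s_next, x)] for x in ACTIONS)
        # Standard Q-learning temporal-difference update.
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in (True, False)}
print(policy)
```

Under this assumed reward model, the learned greedy policy offloads when a vehicle server is nearby and runs locally otherwise. The paper's DRL method replaces the table with a neural network to handle the much larger joint state of traffic, requests, and channel conditions.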
Keywords
Task analysis, Edge computing, Servers, Internet of Things, Computational modeling, Iron, Reinforcement learning