Deep Reinforcement Learning for Task Offloading in Edge Computing Assisted Power IoT

IEEE Access (2021)

Cited by 9 | Views 1
Abstract
Power Internet of Things (PIoT) is a promising solution to meet the increasing electricity demand of modern cities, but real-time processing and analysis of the huge volume of data collected by the devices is challenging due to the limited computing capability of the devices and their long distance from the cloud center. In this paper, we consider edge computing assisted PIoT, where the computing tasks of the devices can be either processed locally by the devices or offloaded to edge servers. Aiming to maximize the long-term system utility, defined as a weighted sum of the reduction in latency and energy consumption, we propose a novel task offloading algorithm based on deep reinforcement learning, which jointly optimizes task scheduling, the transmit power of the PIoT devices, and the computing resource allocation of the edge servers. Specifically, task execution on each edge server is modeled as a queuing system, in which the current queue state may affect the waiting time of subsequent tasks. The transmit power and the computing resource allocation are first optimized separately, and then a deep Q-learning algorithm is adopted to make the task scheduling decisions. Numerical results show that the proposed method improves the system utility.
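The following is a minimal sketch of the deep Q-learning offloading decision described in the abstract, not the authors' implementation. The state layout (task size, local queue length, edge-server queue length, channel gain), the network sizes, and the utility weights are all illustrative assumptions; only the overall structure (a Q-network choosing between local execution and offloading, with a reward equal to the weighted sum of latency and energy reduction) follows the abstract.

```python
# Hypothetical sketch of DQN-based task offloading; all dimensions and weights are assumptions.
import random
import torch
import torch.nn as nn

STATE_DIM = 4                   # assumed: [task size, local queue, edge queue, channel gain]
N_ACTIONS = 2                   # 0 = execute locally, 1 = offload to an edge server
W_LATENCY, W_ENERGY = 0.5, 0.5  # assumed weights in the system utility (reward)

class QNet(nn.Module):
    """Small MLP approximating Q(s, a) for the offloading decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)

def reward(latency_saved, energy_saved):
    # System utility from the abstract: weighted sum of the reduction in
    # latency and energy consumption relative to local-only execution.
    return W_LATENCY * latency_saved + W_ENERGY * energy_saved

def select_action(qnet, state, epsilon=0.1):
    # Epsilon-greedy scheduling decision for each arriving task.
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(qnet(torch.tensor(state, dtype=torch.float32)).argmax())

def td_update(qnet, target_net, optimizer, batch, gamma=0.95):
    # One DQN temporal-difference step over a replay batch
    # (states, actions, rewards, next_states).
    s, a, r, s2 = batch
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_target = r + gamma * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, the transmit power and computing resource allocation mentioned in the abstract would be solved outside the Q-network and folded into the latency and energy terms of the reward; the Q-network only handles the discrete scheduling decision.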
Keywords
Task analysis, Servers, Resource management, Processor scheduling, Edge computing, Scheduling, Energy consumption, Power Internet of Things, smart grid, edge offloading, resource allocation, deep reinforcement learning