Reinforcement-Learning-Based Task Offloading in Edge Computing Systems with Firm Deadlines

Global Communications Conference (2023)

Abstract
Task offloading in mobile edge computing systems is subject to various random factors, including the connection to external servers, new task requests from users, and the availability of local processing resources. In practice, however, statistical information about these factors is often unavailable. To tackle this issue, we adopt a Q-learning-based approach that learns the optimal task-offloading policy from observations of the random events. Traditional Q-learning methods may suffer from long training times and high memory usage due to the large state and action spaces. To overcome this problem, we propose a novel method that leverages the concept of an adjacent state sequence: within such a sequence, the optimal offloading decision of one system state can be inferred from other states. This reduces the number of parameters that need to be learned and stored, improving both the convergence speed and the memory efficiency of the learning model; the eliminated parameters can instead be computed via a derived linear expression. We conduct experiments that demonstrate the improvement of our proposed method over traditional Q-learning on the studied problem.
Keywords
Mobile edge computing, task offloading, reinforcement learning, dynamic programming