Hierarchical Deep Reinforcement Learning for Joint Service Caching and Computation Offloading in Mobile Edge-Cloud Computing

IEEE Transactions on Services Computing (2024)

Abstract
Mobile edge-cloud computing networks provide distributed, hierarchical, and fine-grained resources, and have become a major direction for future high-performance computing networks. The key challenge is how to jointly optimize service caching and computation offloading. This joint problem faces three significant difficulties: dynamic tasks, heterogeneous resources, and coupled decisions. In this paper, we investigate joint service caching and computation offloading in mobile edge-cloud computing networks. Specifically, we formulate the optimization problem as minimizing the long-term average service latency, which is NP-hard. To solve it, we conduct in-depth theoretical analyses and decompose it into two sub-problems: service caching and computation offloading. We are the first to propose a hierarchical deep reinforcement learning algorithm for the formulated problem, in which multiple edge agents and a cloud agent collaboratively determine the caching and offloading actions, respectively. Trace-driven simulations show that the proposed framework outperforms several prevailing algorithms in terms of average service latency across diverse scenarios. In a complex real-world scenario, our framework achieves approximately 33% faster convergence and a 39% reduction in average service latency compared with reinforcement learning-based algorithms.
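The abstract gives no implementation details, but the described split (edge agents selecting caching actions and a cloud agent selecting offloading actions, all trained against a shared latency objective) can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's algorithm: the tabular Q-learning update, the toy latency model, and all constants such as NUM_EDGES and NUM_SERVICES are hypothetical.

```python
# Minimal sketch (NOT the authors' implementation) of the hierarchical
# agent split described in the abstract: per-edge caching agents plus a
# single cloud offloading agent, trained on a shared latency reward.
# All constants and the latency model below are illustrative assumptions.
import numpy as np

NUM_EDGES = 3               # edge servers, one caching agent each (assumed)
NUM_SERVICES = 5            # candidate services to cache (assumed)
NUM_TARGETS = NUM_EDGES + 1 # offload to one of the edges or to the cloud

rng = np.random.default_rng(0)

class TabularAgent:
    """Epsilon-greedy Q-learning over a small discrete state/action space."""
    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.lr, self.gamma, self.eps = lr, gamma, eps

    def act(self, s):
        if rng.random() < self.eps:
            return int(rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        target = r + self.gamma * np.max(self.q[s_next])
        self.q[s, a] += self.lr * (target - self.q[s, a])

# One caching agent per edge (action = which service to cache) and one
# cloud agent (action = where the current task is executed).
edge_agents = [TabularAgent(NUM_SERVICES, NUM_SERVICES) for _ in range(NUM_EDGES)]
cloud_agent = TabularAgent(NUM_SERVICES, NUM_TARGETS)

def service_latency(task_service, cached, target):
    """Toy latency model (assumed): the cloud is slow but always has the
    service; an edge is fast only if it cached the required service."""
    if target == NUM_EDGES:  # offloaded to the cloud
        return 10.0
    return 2.0 if cached[target] == task_service else 15.0  # cache miss penalty

for step in range(5000):
    task_service = int(rng.integers(NUM_SERVICES))  # state = requested service
    # Edge agents decide caching; the cloud agent decides offloading.
    cached = [agent.act(task_service) for agent in edge_agents]
    target = cloud_agent.act(task_service)
    reward = -service_latency(task_service, cached, target)  # shared reward
    next_service = int(rng.integers(NUM_SERVICES))
    for e, agent in enumerate(edge_agents):
        agent.update(task_service, cached[e], reward, next_service)
    cloud_agent.update(task_service, target, reward, next_service)

print("Greedy offloading target per requested service:",
      np.argmax(cloud_agent.q, axis=1))
```

The sketch only demonstrates the decision decomposition: caching actions are taken locally at each edge while offloading is decided by a separate cloud agent, and both are driven by the same negative-latency reward; the paper's deep, multi-agent variant replaces the tabular policies with neural networks.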
Keywords
Mobile edge-cloud computing, service caching, computation offloading, hierarchical deep reinforcement learning