A Multi-Agent Reinforcement Learning Approach for Dynamic Offloading with Partial Information-Sharing in IoT Networks

2023 IEEE 98th Vehicular Technology Conference (VTC2023-Fall)

Abstract
With the widespread adoption of resource-intensive mobile applications, mobile edge computing (MEC) has emerged as a solution that enhances the computational power of mobile user equipment (UEs) and minimizes computational delay by offloading tasks to edge servers (ESs). This paper studies the computing offloading problem for multiple UEs in dynamic Internet of Things (IoT) networks with partial information-sharing. In such settings, the transmission bandwidth of each UE varies over time, and each UE can access only the historical data of its peers. Since self-interested UEs compete to offload computational tasks to ESs with limited computational resources, we model the UEs' offloading decision-making in this dynamic, privacy-bound scenario as a game, which we then formulate as a multi-agent Partially Observable Markov Decision Process (POMDP). To solve the POMDP and attain a near-optimal Nash equilibrium (NE) of the formulated game, we introduce a multi-agent reinforcement learning algorithm that integrates a Differentiable Neural Computer with the Advantage Actor-Critic framework (abbreviated as DNA). With this method, each UE autonomously decides its optimal computing offloading strategy based on its own game history, without obtaining the detailed offloading policies of other UEs. Experimental results show that DNA surpasses state-of-the-art benchmark methods by at least 8.3% in computing offloading utility and 3.98% in convergence rate, highlighting its effectiveness in a dynamic IoT environment with partial information-sharing between UEs.
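The abstract does not detail the DNA architecture, but the general idea it describes (a per-UE agent that conditions an actor-critic offloading policy on its own observation-action history rather than on other UEs' live policies) can be sketched. Below is a minimal, hypothetical PyTorch sketch; for brevity an LSTM stands in for the Differentiable Neural Computer memory, and all names (`UEAgent`, `obs_dim`, `num_servers`, the toy advantage target) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a per-UE recurrent actor-critic offloading agent.
# NOTE: the paper's DNA couples a Differentiable Neural Computer (DNC) with
# A2C; here an LSTM stands in for the DNC memory to keep the sketch short.
import torch
import torch.nn as nn
from torch.distributions import Categorical

class UEAgent(nn.Module):
    """One agent per UE. Actions: 0 = compute locally, 1..K = offload to ES k."""
    def __init__(self, obs_dim: int, num_servers: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        # Memory over the UE's own game history (stand-in for the DNC).
        self.memory = nn.LSTM(hidden, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, num_servers + 1)   # policy head
        self.critic = nn.Linear(hidden, 1)                # value head

    def forward(self, obs_seq, hidden_state=None):
        # obs_seq: (batch, time, obs_dim) -- local observations only, e.g.
        # own bandwidth, queue length, peers' *past* offloading choices.
        x = torch.relu(self.encoder(obs_seq))
        x, hidden_state = self.memory(x, hidden_state)
        last = x[:, -1]                                   # latest memory state
        return self.actor(last), self.critic(last), hidden_state

def a2c_loss(logits, value, action, advantage, beta=0.01):
    """Standard advantage actor-critic loss for one decision step."""
    dist = Categorical(logits=logits)
    policy_loss = -(dist.log_prob(action) * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()                  # critic regression
    entropy = dist.entropy().mean()                       # exploration bonus
    return policy_loss + 0.5 * value_loss - beta * entropy

# Toy usage: each UE trains independently on its own trajectory; the reward
# would come from its offloading utility (e.g. delay/energy cost), so the
# agents approach an equilibrium without sharing live policies.
agent = UEAgent(obs_dim=6, num_servers=3)
obs = torch.randn(1, 10, 6)                               # 10-step history
logits, value, _ = agent(obs)
action = Categorical(logits=logits).sample()
advantage = torch.tensor([0.5]) - value.squeeze(-1)       # toy return target
loss = a2c_loss(logits, value, action, advantage)
loss.backward()
```

In this reading, partial information-sharing is reflected in the observation design: each agent sees only its own state and peers' historical decisions, never their current policies, which matches the privacy constraint the abstract describes.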
Keywords
Mobile edge computing, distributed computing offloading, Nash equilibrium, partially observable Markov decision process (POMDP), multi-agent reinforcement learning