Reinforcement Learning-Based Resource Allocation for Coverage Continuity in High Dynamic UAV Communication Networks

IEEE Trans. Wirel. Commun. (2024)

Abstract
Unmanned aerial vehicle (UAV)-mounted aerial base stations (ABSs) are capable of providing on-demand coverage in next-generation mobile communication systems. However, resource allocation for ABSs to provide continuous coverage is challenging, since the high mobility of ABSs and the time-varying air-to-ground channel result in channel state information (CSI) mismatch between the resource allocation decision and its implementation. As a consequence, the coverage of ABSs is discontinuous in the spatial-temporal dimensions, i.e., the variance of user rate between adjacent time slots is large. To ensure coverage continuity, we design a resource allocation method based on deep reinforcement learning (RDRL). By adaptively tuning its neural network structure, RDRL can satisfy coverage requirements by jointly allocating subchannels and power for ground users. Meanwhile, the temporal channel correlation is taken into account in the design of the reward function in RDRL, which aims to alleviate the influence of CSI mismatch between the method's decision and its implementation. Moreover, RDRL can reuse a model pre-trained on a previous coverage requirement for the current requirement, reducing computational complexity. Experimental results show that, compared with benchmark algorithms, RDRL reduces the rate variance by 66.7% and increases spectral efficiency by 34.7%, which ensures coverage continuity.
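The abstract describes a reward that trades off spectral efficiency against slot-to-slot rate variation. A minimal sketch of such a reward is shown below; the function name, the weight `var_penalty`, and the exact form of the penalty are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of an RDRL-style reward: reward total user rate
# while penalizing large rate changes between adjacent time slots,
# which the paper identifies as the symptom of coverage discontinuity.

def continuity_reward(rates_t, rates_prev, var_penalty=0.5):
    """Reward = total rate in the current slot minus a penalty on the
    mean squared per-user rate change from the previous slot.

    rates_t:    per-user rates achieved in the current time slot
    rates_prev: per-user rates in the previous time slot
    var_penalty: assumed weight balancing efficiency vs. continuity
    """
    total_rate = sum(rates_t)
    # Temporal term: large slot-to-slot rate jumps break coverage
    # continuity, so they reduce the reward.
    variation = sum((r - p) ** 2
                    for r, p in zip(rates_t, rates_prev)) / len(rates_t)
    return total_rate - var_penalty * variation
```

With stable rates the penalty vanishes; a redistribution that keeps the same total rate but swings individual user rates is scored lower, steering the agent toward temporally smooth allocations.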