Saving Energy and Spectrum in Enabling URLLC Services: A Scalable RL Solution

IEEE Transactions on Industrial Informatics (2023)

Abstract
Communication systems supporting cyber-physical production applications must satisfy stringent delay and reliability requirements. Diversity techniques and power control are the main approaches to reduce latency and enhance the reliability of wireless communications, at the expense of redundant transmissions and excessive resource usage. Focusing on application-layer reliability key performance indicators (KPIs), we design a deep reinforcement learning orchestrator for power control and hybrid automatic repeat request retransmissions to optimize these KPIs. Furthermore, to address the scalability issue that emerges in the per-device orchestration problem, we develop a new branching soft actor-critic framework, in which a separate branch represents the action space of each industrial device. Our orchestrator enables near-real-time control and can be implemented in the edge cloud. We test our solution with a Third Generation Partnership Project-compliant and realistic simulator for factory automation scenarios. Compared with the state of the art, our solution offers significant scalability gains in terms of computational time and memory requirements. Our extensive experiments show significant improvements in our target KPIs over the state of the art, especially for fifth-percentile user availability. To achieve these targets, our framework requires substantially less total energy or spectrum, thanks to our scalable reinforcement learning solution.
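As a rough illustration of the per-device branching idea described in the abstract (not the authors' implementation), the sketch below shows a branching actor in PyTorch: a shared trunk encodes the network state once, and each industrial device has its own lightweight action branch, so parameters grow linearly with the number of devices rather than with the joint action space. All class names, dimensions, and parameters here are hypothetical.

```python
import torch
import torch.nn as nn


class BranchingActor(nn.Module):
    """Hypothetical sketch of a branching actor: one action branch per device."""

    def __init__(self, state_dim: int, num_devices: int, actions_per_device: int, hidden: int = 128):
        super().__init__()
        # Shared trunk: encodes the global network/channel state once.
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One branch per industrial device, each producing logits over that
        # device's discrete power-control / retransmission settings.
        self.branches = nn.ModuleList(
            nn.Linear(hidden, actions_per_device) for _ in range(num_devices)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        z = self.trunk(state)
        # Shape: (batch, num_devices, actions_per_device).
        return torch.stack([branch(z) for branch in self.branches], dim=1)


if __name__ == "__main__":
    actor = BranchingActor(state_dim=64, num_devices=10, actions_per_device=8)
    logits = actor(torch.randn(4, 64))
    print(logits.shape)  # torch.Size([4, 10, 8])
```

In a soft actor-critic setup, each branch's logits would parameterize a per-device policy, and the critic would be trained against the joint per-device actions; the branching structure is what keeps memory and computation tractable as the number of devices grows.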
Keywords
Availability, energy saving, factory automation, reinforcement learning (RL), reliability, soft actor-critic (SAC), ultrareliable low-latency communications (URLLC), 5G