Deep Reinforcement Learning for QoS-Aware Package Caching in Serverless Edge Computing

2021 IEEE Global Communications Conference (GLOBECOM), 2021

Abstract
In serverless-enabled edge computing, container startup delay is one of the most critical issues because it violates quality-of-service (QoS) requirements such as ultra-low-latency response times. Caching the critical packages a container depends on can mitigate the startup delay associated with container instantiation. However, caches consume memory, a resource that is highly limited at edge nodes, so the package cache must be carefully managed in serverless-enabled edge computing. This paper proposes a deep reinforcement learning (DRL)-based caching algorithm that efficiently caches critical and popular packages under per-function response-time QoS constraints in hierarchical edge clouds. By applying multi-agent reinforcement learning (MARL) to the caching agents of on-premise edge nodes, in conjunction with a global reward that accounts for both the number of cache hits and the number of QoS violations, the caching agents are driven to cooperate with one another. Simulation results demonstrate that the proposed DRL-based caching policy improves QoS awareness more effectively than the baselines: compared with LRU and LFU, the QoS violation rate fell by 18 and 27 percent, respectively.
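The abstract does not give the exact form of the shared global reward; a minimal sketch, assuming it is a weighted difference of cache hits and QoS violations per decision epoch, is shown below. The weights ALPHA and BETA and all names are hypothetical illustrations, not the paper's actual formulation.

    # Hypothetical sketch of the shared global reward described in the
    # abstract: reward rises with cache hits and falls with per-function
    # response-time QoS violations. ALPHA, BETA, and the function name
    # are illustrative assumptions, not taken from the paper.
    ALPHA = 1.0  # assumed weight on cache hits
    BETA = 2.0   # assumed weight on QoS violations

    def global_reward(cache_hits: int, qos_violations: int) -> float:
        """Shared reward broadcast to all edge caching agents per epoch."""
        return ALPHA * cache_hits - BETA * qos_violations

    # Example: 40 hits and 5 QoS violations in one epoch give
    # 40 * 1.0 - 5 * 2.0 = 30.0.
    print(global_reward(cache_hits=40, qos_violations=5))  # 30.0

Because every agent receives the same scalar reward, an agent that evicts a package needed elsewhere sees the resulting QoS violations reflected in its own return, which is what drives the cooperative behavior the paper targets.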
Keywords
Serverless, caching, MARL, DRL