A Load Balancing Algorithm for Equalising Latency Across Fog or Edge Computing Nodes

IEEE Transactions on Services Computing (2023)

Abstract
When dealing with distributed applications in Edge or Fog computing environments, the service latency that a user experiences at a given node can be considered an indicator of how loaded that node is with respect to the others. Considering only the average CPU time or RAM utilisation, for example, does not give a clear picture of the load situation, because these parameters are application- and hardware-agnostic: they give no information about how the application performs from the user's perspective and cannot be used for QoS-oriented load balancing. In this article, we propose a load balancing algorithm focused on the service latency, with the objective of levelling it across all the nodes in a fully decentralised manner, so that no user experiences a worse QoS than the others. By providing a differential model of the system and an adaptive heuristic for solving the problem in real settings, we show, both in simulation and in a real-world deployment based on a cluster of Raspberry Pi boards, that our approach is able to level the service latency among a set of heterogeneous nodes organised in different topologies.
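The abstract stops short of describing the heuristic itself. As a purely illustrative sketch of the general idea of decentralised latency levelling, the Python snippet below implements a simple diffusion-style rule in which each node compares its measured latency only with its direct neighbours and shifts queued work towards the less-loaded ones in proportion to the latency gap. The class and parameter names, the toy latency model (queued work divided by capacity), and the constants ALPHA and gossip_steps are assumptions made here for illustration; they are not taken from the paper's differential model or adaptive heuristic.

# Illustrative sketch only (not the paper's algorithm): a diffusion-style rule in
# which each node compares its latency with its direct neighbours and shifts work
# towards less-loaded ones in proportion to the latency gap. All names, the toy
# latency model, and the constants below are assumptions made for this example.

import random

ALPHA = 0.5           # hypothetical gain: fraction of the latency gap corrected per exchange
SERVICE_DEMAND = 1.0  # abstract work units carried by one request


class Node:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # work units the node can serve per round
        self.load = 0.0           # work units currently queued at the node
        self.neighbours = []      # directly connected nodes (the only ones it can see)

    def latency(self):
        # Toy latency model: latency grows with utilisation (queued work over capacity).
        return self.load / self.capacity

    def rebalance(self):
        # Fully decentralised step: inspect only the neighbours and push a fraction
        # of the latency gap towards each neighbour whose latency is lower.
        for other in self.neighbours:
            gap = self.latency() - other.latency()
            if gap > 0:
                transfer = min(self.load,
                               ALPHA * gap * min(self.capacity, other.capacity))
                self.load -= transfer
                other.load += transfer


def simulate(nodes, requests_per_round=30, rounds=50, gossip_steps=5):
    snapshot = {}
    for _ in range(rounds):
        # Requests arrive at random nodes, i.e. unevenly with respect to capacity.
        for _ in range(requests_per_round):
            random.choice(nodes).load += SERVICE_DEMAND
        # A few purely local rebalancing exchanges per round.
        for _ in range(gossip_steps):
            for node in nodes:
                node.rebalance()
        # Record the latency a request arriving now would experience at each node.
        snapshot = {node.name: round(node.latency(), 2) for node in nodes}
        # Each node then serves work according to its capacity.
        for node in nodes:
            node.load = max(0.0, node.load - node.capacity)
    return snapshot


if __name__ == "__main__":
    # Heterogeneous nodes on a line topology: a -- b -- c.
    a, b, c = Node("a", capacity=5), Node("b", capacity=10), Node("c", capacity=20)
    a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
    print(simulate([a, b, c]))  # per-node latencies should come out roughly level

Running this on the small heterogeneous line topology (a -- b -- c) shows the per-node latencies settling around roughly the same value despite the uneven capacities, which is the qualitative behaviour the abstract claims for the actual algorithm; the real system additionally has to cope with measurement noise and changing topologies, which this toy model ignores.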
Keywords
Edge computing, fog computing, load balancing, service latency