Distributed Assignment With Load Balancing for DNN Inference at the Edge

IEEE Internet of Things Journal (2023)

Abstract
Inference carried out on pretrained deep neural networks (DNNs) is particularly effective as it does not require retraining and entails no loss in accuracy. Unfortunately, resource-constrained devices such as those in the Internet of Things may need to offload the related computation to more powerful servers, particularly at the network edge. However, edge servers have limited resources compared to those in the cloud; therefore, inference offloading generally requires dividing the original DNN into different pieces that are then assigned to multiple edge servers. Related approaches in the state of the art either make strong assumptions about the system model or fail to provide strict performance guarantees. This article specifically addresses these limitations by applying distributed assignment to DNN inference at the edge. In particular, it devises a detailed model of DNN-based inference suitable for realistic edge-computing scenarios. Optimal inference offloading with load balancing is then defined as a multiple assignment problem that maximizes proportional fairness. Moreover, a distributed algorithm for DNN inference offloading is introduced to solve this problem in polynomial time with strong optimality guarantees. Finally, extensive simulations employing different data sets and DNN architectures establish that the proposed solution significantly improves upon the state of the art in terms of inference time (1.14 to 2.62 times faster), load balance (with a Jain's fairness index of 0.9), and convergence (an order of magnitude fewer iterations).
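The abstract evaluates load balance with Jain's fairness index and formulates offloading as an assignment problem that maximizes proportional fairness. The sketch below is a minimal illustration of both quantities on a hypothetical assignment of DNN pieces to edge servers; the load values, capacities, and the use of per-server service rates as the utilities are assumptions for illustration, not the paper's exact formulation.

```python
import math

def jains_index(loads):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2); 1.0 means perfectly balanced."""
    n = len(loads)
    s = sum(loads)
    return (s * s) / (n * sum(x * x for x in loads)) if s > 0 else 1.0

def proportional_fairness(rates):
    """Proportional-fairness objective: sum of log(rate_i).
    Maximizing it favors allocations where no server (or DNN piece) is starved."""
    return sum(math.log(r) for r in rates)

# Hypothetical example: three edge servers, each assigned a share of DNN layers.
server_loads = [4.0, 3.5, 4.5]        # assumed per-server load (e.g., GFLOPs assigned)
server_capacity = [10.0, 8.0, 12.0]   # assumed per-server capacity (e.g., GFLOPs/s)
service_rates = [c / l for c, l in zip(server_capacity, server_loads)]

print(f"Jain's fairness index: {jains_index(server_loads):.3f}")
print(f"Proportional-fairness objective: {proportional_fairness(service_rates):.3f}")
```

In this toy setting, a more even split of layers raises Jain's index toward 1.0, while the log-sum objective penalizes assignments that overload any single server.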
Keywords
Assignment problems, distributed inference, deep neural network (DNN) offloading, edge computing