Joint Request Offloading and Resource Allocation for Energy Efficient D2D Enabled Multi-type Inference Services.

Parallel and Distributed Processing with Applications (2023)

Abstract
Tremendous research effort has been devoted to providing multi-type inference services for users by deploying heterogeneous models on edge servers (ESs). However, resource contention on ESs can degrade the quality of service (QoS) delivered to users. To obtain services with QoS guarantees, users can leverage device-to-device (D2D) collaboration to share the models and resources of their mobile devices (MDs) with each other. However, this incurs non-trivial energy consumption on the MDs and thus dramatically shortens their battery life. To address this issue, this paper formulates an energy-efficiency optimization problem for D2D-enabled multi-type inference services. The goal is to minimize the total energy consumption of the MDs by jointly optimizing the request-offloading and resource-allocation decisions under resource and QoS constraints. The formulated problem is a mixed-integer nonlinear program and is therefore NP-hard. To solve it, this paper transforms the problem into an equivalent master request-offloading (RO) problem with a low-complexity resource-allocation (RA) subproblem, and proposes a two-level optimization algorithm for the transformed problem. At the outer level, a low-complexity heuristic algorithm iteratively seeks the optimal request-offloading decision for the RO problem. At the inner level, a subgradient-based algorithm derives the optimal resource-allocation decision for the RA subproblem. Experimental results show that the proposed algorithms achieve the best energy-efficiency performance in all cases compared with the baseline algorithms.
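The abstract only sketches the two-level decomposition, so the following is a minimal, hypothetical illustration of how an outer request-offloading (RO) heuristic can wrap an inner subgradient-based resource-allocation (RA) routine. The energy model, QoS deadlines, device budgets, and the greedy local search below are illustrative stand-ins (names such as `ra_subgradient`, `ro_local_search`, `eff`, and `band` are invented for this sketch), not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: R requests, D candidate devices (local MD or D2D peers).
R, D = 8, 3
data = rng.uniform(2.0, 6.0, size=R)        # input size per request (Mbit)
deadline = rng.uniform(1.0, 2.0, size=R)    # QoS latency bound per request (s)
eff = rng.uniform(1.0, 3.0, size=(R, D))    # spectral efficiency of link r->d (Mbit/s/MHz)
power = rng.uniform(0.1, 0.5, size=D)       # transmit-power proxy per device (W)
band = np.full(D, 10.0)                     # resource budget per device (MHz)

def b_min(r, d):
    """Smallest allocation that meets request r's deadline on device d."""
    return data[r] / (eff[r, d] * deadline[r])

def ra_subgradient(assign, iters=300, step=0.05):
    """Inner RA level: for a fixed offloading decision, split each device's
    resource budget among its requests via a projected subgradient update on
    the capacity duals. Energy proxy: power_d * data_r / (eff_{r,d} * b_r)."""
    lam = np.ones(D)                         # dual variables for the capacity constraints
    best = None
    for _ in range(iters):
        b = np.empty(R)
        for r in range(R):
            d = assign[r]
            c = power[d] * data[r] / eff[r, d]
            # Minimizer of c/b + lam_d*b, clipped to the QoS-feasible minimum.
            b[r] = max(b_min(r, d), np.sqrt(c / max(lam[d], 1e-9)))
        load = np.bincount(assign, weights=b, minlength=D)
        lam = np.maximum(lam + step * (load - band), 0.0)    # dual update
        if np.all(load <= band + 1e-6):                      # feasible primal point
            energy = sum(power[assign[r]] * data[r] / (eff[r, assign[r]] * b[r])
                         for r in range(R))
            best = energy if best is None else min(best, energy)
    return best                              # None if no feasible allocation was found

def ro_local_search(max_rounds=20):
    """Outer RO level: greedy local search that reassigns one request at a time
    whenever the inner RA level reports a lower total energy."""
    assign = np.argmax(eff, axis=1)          # start each request on its best link
    best = ra_subgradient(assign)
    for _ in range(max_rounds):
        improved = False
        for r in range(R):
            for d in range(D):
                if d == assign[r]:
                    continue
                trial = assign.copy()
                trial[r] = d
                e = ra_subgradient(trial)
                if e is not None and (best is None or e < best - 1e-9):
                    assign, best, improved = trial, e, True
        if not improved:
            break
    return assign, best

if __name__ == "__main__":
    assign, energy = ro_local_search()
    print("offloading decision:", assign, "| total energy proxy:", energy)
```

In this sketch the inner level solves a convex per-assignment problem by dual (subgradient) updates on the per-device capacity constraints, while the outer level only ever evaluates complete offloading decisions through that inner routine, mirroring the master-problem/subproblem split described in the abstract.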
Keywords
Multi-type inference, energy efficiency, device-to-device, collaboration, request offloading, resource allocation