Resource-aware Deployment of Dynamic DNNs over Multi-tiered Interconnected Systems
arXiv (2024)
Abstract
The increasing pervasiveness of intelligent mobile applications requires
exploiting the full range of resources offered by the mobile-edge-cloud network
for the execution of inference tasks. However, due to the heterogeneity of such
multi-tiered networks, it is essential to make the applications' demand
amenable to the available resources while minimizing energy consumption. Modern
dynamic deep neural networks (DNNs) achieve this goal through multi-branched
architectures in which early exits enable sample-based adaptation
of the model depth. In this paper, we tackle the problem of allocating sections
of DNNs with early exits to the nodes of the mobile-edge-cloud system. By
envisioning a 3-stage graph-modeling approach, we represent the possible
options for splitting the DNN and deploying the DNN blocks on the multi-tiered
network, embedding both the system constraints and the application requirements
in a convenient and efficient way. Our framework – named Feasible Inference
Graph (FIN) – can identify the solution that minimizes the overall inference
energy consumption while enabling distributed inference over the multi-tiered
network with the target quality and latency. Our results, obtained for DNNs
with different levels of complexity, show that FIN matches the optimum and
yields over 65% cost minimization.
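To make the graph-based idea concrete, the following sketch enumerates assignments of consecutive DNN blocks to the tiers of a mobile-edge-cloud system, modeled as paths through a layered graph, and picks the assignment that minimizes total energy under a latency budget. All tier names, cost numbers, and the function itself are illustrative assumptions, not the paper's actual FIN formulation.

```python
import itertools

# Hypothetical sketch: assign consecutive DNN blocks to tiers (data flows
# mobile -> edge -> cloud, never back), minimizing energy under a latency
# budget. Every number below is made up for illustration.

TIERS = ["mobile", "edge", "cloud"]

# Per-block (energy, latency) cost of executing on each tier (assumed values).
EXEC = [
    {"mobile": (5.0, 30.0), "edge": (3.0, 12.0), "cloud": (2.0, 8.0)},
    {"mobile": (8.0, 50.0), "edge": (4.0, 20.0), "cloud": (2.5, 10.0)},
    {"mobile": (12.0, 80.0), "edge": (6.0, 30.0), "cloud": (3.0, 15.0)},
]

# (energy, latency) of shipping intermediate data across tiers (assumed).
LINK = {("mobile", "edge"): (1.0, 15.0), ("edge", "cloud"): (1.5, 25.0),
        ("mobile", "cloud"): (6.0, 40.0)}

def best_split(latency_budget):
    """Return (energy, assignment) of the cheapest feasible tier assignment,
    or None if no assignment meets the latency budget."""
    order = {t: i for i, t in enumerate(TIERS)}
    best = None
    for assign in itertools.product(TIERS, repeat=len(EXEC)):
        # Keep only non-decreasing paths through the layered graph.
        if any(order[a] > order[b] for a, b in zip(assign, assign[1:])):
            continue
        energy = latency = 0.0
        # The input sample originates on the mobile device.
        if assign[0] != "mobile":
            e, l = LINK[("mobile", assign[0])]
            energy, latency = energy + e, latency + l
        for blk, tier in enumerate(assign):
            e, l = EXEC[blk][tier]
            energy, latency = energy + e, latency + l
            if blk + 1 < len(assign) and assign[blk + 1] != tier:
                e, l = LINK[(tier, assign[blk + 1])]
                energy, latency = energy + e, latency + l
        if latency <= latency_budget and (best is None or energy < best[0]):
            best = (energy, assign)
    return best

print(best_split(120.0))  # → (11.0, ('edge', 'cloud', 'cloud'))
```

With these assumed costs, the cheapest feasible split runs the first block at the edge and offloads the rest to the cloud; tightening the budget (or raising link costs) shifts the optimum, which is the trade-off the paper's framework navigates.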