Evaluating Edge-Cloud Computing Trade-Offs for Mobile Object Detection and Classification with Deep Learning.

J. Inf. Data Manag. (2020)

Abstract
Internet-of-Things (IoT) applications based on Artificial Intelligence, such as mobile object detection and recognition from images and videos, may greatly benefit from inferences made by state-of-the-art Deep Neural Network (DNN) models. However, adopting such models in IoT applications poses an important challenge, since DNNs usually require substantial computational resources (i.e., memory, disk, CPU/GPU, and power), which may prevent them from running on resource-limited edge devices. On the other hand, moving the heavy computation to the cloud may significantly increase the running costs and latency of IoT applications. Possible strategies to tackle this challenge include: (i) partitioning the DNN model between edge and cloud; and (ii) running simpler models on the edge and more complex ones in the cloud, with information exchange between models when needed. Variations of strategy (i) also include running the entire DNN on the edge device (sometimes not feasible) and running the entire DNN in the cloud. All these strategies involve trade-offs in terms of latency, communication, and financial costs. In this article, we investigate such trade-offs in real-world scenarios. We conduct several experiments using deep learning models for image-based object detection and classification. Our setup includes a Raspberry Pi 3 B+ and a cloud server equipped with a GPU. Different network bandwidths are also evaluated. Our results provide useful insights into the aforementioned trade-offs. The partitioning experiment showed that, overall, running inference entirely on the edge or entirely on the cloud server are the best options. The collaborative approach yielded a significant increase in accuracy without penalizing running costs too much.
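To make strategy (ii) concrete, the following is a minimal sketch of a confidence-gated collaborative pipeline: a lightweight model answers on the edge device, and the image is offloaded to the GPU-backed cloud model only when the edge prediction is not confident enough. The helper functions and the 0.8 threshold are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Sketch of the collaborative edge-cloud strategy (ii) described in the abstract.
# edge_infer() and cloud_infer() are hypothetical stand-ins for a small on-device
# model and a remote GPU-backed model, respectively.

def edge_infer(image):
    """Run a lightweight on-device model; returns (label, confidence)."""
    # ... stubbed here for illustration
    return "dog", 0.62

def cloud_infer(image):
    """Offload the image to a cloud inference service; returns (label, confidence)."""
    # ... e.g. an HTTP request to a GPU server; stubbed here for illustration
    return "golden_retriever", 0.97

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for offloading to the cloud

def classify(image):
    label, conf = edge_infer(image)
    if conf >= CONFIDENCE_THRESHOLD:
        return label              # cheap path: no network traffic, low latency
    return cloud_infer(image)[0]  # fall back to the heavier, more accurate cloud model
```

The threshold controls the trade-off the article studies: a higher value sends more images to the cloud, raising accuracy at the price of extra latency, bandwidth, and running cost.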
Keywords
mobile object detection, deep learning, edge-cloud, trade-offs