MoDNN: Local distributed mobile computing system for Deep Neural Network.

DATE (2017)

Cited by 248
Abstract
Although Deep Neural Networks (DNNs) are ubiquitously utilized in many applications, it is generally difficult to deploy DNNs on resource-constrained devices, e.g., mobile platforms. Existing attempts mainly focus on the client-server computing paradigm or DNN model compression, which require infrastructure support or special training phases, respectively. In this work, we propose MoDNN, a local distributed mobile computing system for DNN applications. MoDNN partitions already-trained DNN models onto several mobile devices to accelerate DNN computations by alleviating device-level computing cost and memory usage. Two model partition schemes are also designed to minimize non-parallel data delivery time, including both wakeup time and transmission time. Experimental results show that when the number of worker nodes increases from 2 to 4, MoDNN accelerates DNN computation by 2.17–4.28×. Besides the parallel execution, the performance speedup also partially comes from the reduction of the data delivery time, e.g., 30.02% w.r.t. the conventional 2D-grid partition.
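The 2D-grid partition that the abstract uses as a baseline can be pictured as tiling a layer's input feature map among the worker devices, so that each worker computes only its own region. The following is a minimal sketch of that tiling step; the function name, tile sizes, and use of NumPy are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def grid_partition(feature_map, rows, cols):
    """Split a 2D feature map into rows x cols tiles, one per worker node.

    This only illustrates the partitioning of data across workers; the
    paper's actual schemes additionally account for wakeup and
    transmission time when choosing the partition.
    """
    tiles = []
    # First split along the height, then split each band along the width.
    for band in np.array_split(feature_map, rows, axis=0):
        tiles.extend(np.array_split(band, cols, axis=1))
    return tiles

# Example: a 6x6 feature map distributed to 4 worker nodes (2x2 grid).
fm = np.arange(36, dtype=np.float32).reshape(6, 6)
tiles = grid_partition(fm, 2, 2)
assert len(tiles) == 4
assert all(t.shape == (3, 3) for t in tiles)
```

For convolutional layers, neighboring tiles would also need halo rows/columns exchanged at their borders, which is part of the non-parallel data delivery cost the paper's schemes try to minimize.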
Keywords
MoDNN, local distributed mobile computing system for deep neural network, resource-constrained devices, mobile platforms, client-server computing paradigm, DNN model compression, mobile devices, DNN computations, device-level computing cost, memory usage, non-parallel data delivery time minimization, 2D-grid partition, parallel execution, transmission time, wakeup time