A light-weight edge-enabled knowledge distillation technique for next location prediction of multitude transportation means

Future Generation Computer Systems: The International Journal of eScience (2024)

Abstract
In this article, we study how knowledge can be transferred between mobility models that represent different locations and means of transport. Specifically, we propose the use of knowledge distillation and fine-tuning techniques to build accurate next-location prediction models on a light-weight architecture that significantly reduces inference time. Our goal is not to add yet another model to the mobility literature; rather, we consider it of paramount importance to show how well-trained mobility predictors can be managed, specialized, and enhanced. In addition, we take into account the continuously generated mobility data and the limited resources of the devices that run the models, and we focus on reducing their computational requirements. We evaluate three variations of knowledge distillation, namely the distilled agent, the double-distilled agent, and the pre-distilled agent. The pre-distilled agent achieves an overall improvement of 6.57% in distance error compared with a state-of-the-art next-location predictor that does not use knowledge distillation, and a 99.8% reduction in inference time on edge devices when deployed with light-weight machine learning frameworks such as TensorFlow Lite.
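The core idea, distilling a large, well-trained next-location predictor into a light-weight student and then exporting the student for edge inference, can be illustrated with a short sketch. The snippet below is not the authors' implementation: the student architecture, the location-vocabulary size, the temperature, the loss weighting, and the export helper are illustrative assumptions, and only response-based (soft-target) distillation is shown.

```python
# Minimal sketch of response-based knowledge distillation for a
# next-location predictor, assuming locations are discretized into
# NUM_LOCATIONS classes and "teacher" is an already-trained, larger model.
import tensorflow as tf

NUM_LOCATIONS = 1024   # assumed size of the location vocabulary
TEMPERATURE = 4.0      # softens teacher logits (assumed hyper-parameter)
ALPHA = 0.5            # assumed weight between soft-target and hard-target loss


def build_student():
    """Light-weight student: small embedding followed by a single GRU layer."""
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(NUM_LOCATIONS, 32),
        tf.keras.layers.GRU(64),
        tf.keras.layers.Dense(NUM_LOCATIONS),   # logits over the next location
    ])


@tf.function
def distill_step(student, teacher, optimizer, x, y_true):
    """One training step: match softened teacher probabilities and true labels."""
    teacher_logits = teacher(x, training=False)
    with tf.GradientTape() as tape:
        student_logits = student(x, training=True)
        # Soft targets: KL divergence between temperature-softened distributions,
        # scaled by T^2 as in standard knowledge distillation.
        soft_loss = tf.keras.losses.kl_divergence(
            tf.nn.softmax(teacher_logits / TEMPERATURE),
            tf.nn.softmax(student_logits / TEMPERATURE),
        ) * (TEMPERATURE ** 2)
        # Hard targets: cross-entropy against the observed next locations.
        hard_loss = tf.keras.losses.sparse_categorical_crossentropy(
            y_true, student_logits, from_logits=True)
        loss = (ALPHA * tf.reduce_mean(soft_loss)
                + (1.0 - ALPHA) * tf.reduce_mean(hard_loss))
    grads = tape.gradient(loss, student.trainable_variables)
    optimizer.apply_gradients(zip(grads, student.trainable_variables))
    return loss


def export_tflite(student, path="student.tflite"):
    """Convert the distilled student to TensorFlow Lite for edge inference."""
    converter = tf.lite.TFLiteConverter.from_keras_model(student)
    with open(path, "wb") as f:
        f.write(converter.convert())
```

The temperature softens the teacher's output distribution so the student also learns from plausible but non-top locations, and the TensorFlow Lite export is what enables the low-latency inference on edge devices reported in the abstract; the exact architectures, data pipeline, and fine-tuning schedule used in the paper are not reproduced here.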
Keywords
Mobility, Next-location prediction, Deep learning, Transfer knowledge, Knowledge distillation, Fine-tuning