A Practical Fast Model Inference System Over Tiny Wireless Device

2023 IEEE 34th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), 2023

Abstract
Machine learning models are now widely used in wireless devices to infer network status from tracing data, e.g., link capacity and channel fading. To cope with the increasing complexity of the network environment, deep neural models are leveraged to mine high-dimensional network tracing data for a variety of intelligent applications. However, given the limited resources allocated to the network stack process, training deep neural models on-device is infeasible: computing power is constrained and large-scale labeled data are absent. Moreover, such devices can barely support fast inference with large models, and thus cannot respond promptly to changing network conditions. In this paper, we propose a practical fast model inference system that runs high-accuracy models on tiny wireless devices constrained in both memory and CPU power. Specifically, we design a knowledge-distillation-based training method for a lightweight model deployed on the device side, which transfers knowledge from a well-trained deep model. We show that our system supports fast model inference on tiny devices; by inferring transmission collisions from channel errors, it greatly improves network throughput in a multi-user access system and improves the accuracy of link adaptation. We have conducted practical experiments to verify our system and discuss possible extensions.
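The abstract does not include the authors' code, but the teacher-student distillation it describes follows the standard recipe of matching a small student to a frozen, well-trained teacher. Below is a minimal PyTorch sketch of that recipe; the architectures, feature dimension, temperature, and loss weighting are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions for channel-tracing features and collision labels;
# the paper does not specify the model architectures.
FEATURE_DIM, HIDDEN_DIM, NUM_CLASSES = 32, 16, 2

# Large teacher model, assumed pre-trained offline on labeled traces.
teacher = nn.Sequential(
    nn.Linear(FEATURE_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, NUM_CLASSES),
)
teacher.eval()  # frozen during distillation

# Lightweight student model intended for the tiny device.
student = nn.Sequential(
    nn.Linear(FEATURE_DIM, HIDDEN_DIM), nn.ReLU(),
    nn.Linear(HIDDEN_DIM, NUM_CLASSES),
)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T, alpha = 4.0, 0.7  # assumed temperature and soft/hard loss weight

def distill_step(x, y):
    """One training step: match softened teacher outputs plus hard labels."""
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between temperature-softened distributions,
    # scaled by T^2 as in the standard distillation formulation.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, y)
    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data standing in for channel-error traces.
x = torch.randn(64, FEATURE_DIM)
y = torch.randint(0, NUM_CLASSES, (64,))
print(f"distillation loss: {distill_step(x, y):.4f}")
```

After training, only the small student is deployed on the device, which is what allows inference to fit within the memory and CPU budget of the network stack process.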
Keywords
fast inference, machine learning, tiny device, multi-access system