Digital Twin-Assisted Knowledge Distillation Framework for Heterogeneous Federated Learning

China Communications (2023)

Abstract
In this paper, to deal with the heterogeneity in federated learning (FL) systems, a knowledge distillation (KD) driven training framework for FL is proposed, where each user can select its neural network model on demand and distill knowledge from a big teacher model using its own private dataset. To overcome the challenge of training the big teacher model on resource-limited user devices, the digital twin (DT) is exploited so that the teacher model can be trained at the DT located in the server, which has sufficient computing resources. Then, during model distillation, each user can update the parameters of its model at either the physical entity or the digital agent. The joint problem of model selection, training offloading, and resource allocation for users is formulated as a mixed integer programming (MIP) problem. To solve this problem, Q-learning and optimization are used jointly: Q-learning selects models for users and determines whether to train locally or on the server, and optimization allocates resources for users based on the output of Q-learning. Simulation results show that the proposed DT-assisted KD framework and the joint optimization method can significantly improve the average accuracy of users while reducing the total delay.
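The abstract does not give implementation details, but the per-user distillation step it describes follows the standard KD pattern: the student fits a mix of softened teacher logits and hard labels on the user's private batch. The sketch below is a minimal PyTorch illustration of such a step; the function name, the temperature T, and the mixing weight alpha are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x, y, optimizer, T=2.0, alpha=0.5):
    """One assumed KD update on a user's private batch (x, y).

    The teacher is treated as fixed (trained beforehand at the
    server-side DT); only the student's parameters are updated.
    """
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x)  # no gradients flow into the teacher
    s_logits = student(x)

    # Soft-target loss: KL divergence between temperature-softened
    # student and teacher distributions, scaled by T^2 (Hinton-style KD).
    kd_loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label loss on the user's own private data.
    ce_loss = F.cross_entropy(s_logits, y)

    loss = alpha * kd_loss + (1 - alpha) * ce_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the framework described above, this same update would run either on the user device or at its digital agent in the server, depending on the offloading decision produced by the Q-learning stage.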
Keywords
federated learning, digital twin, knowledge distillation, heterogeneity, Q-learning, convex optimization