Resource Optimized Hierarchical Split Federated Learning for Wireless Networks

CPS-IoT Week '23: Proceedings of Cyber-Physical Systems and Internet of Things Week 2023 (2023)

Abstract
Federated learning (FL) trains models in a distributed fashion: devices compute local models (e.g., convolutional neural networks), followed by central aggregation at the edge or cloud. Such distributed training demands a significant amount of computational resources (i.e., CPU cycles/sec) that Internet of Things (IoT) sensors can hardly provide. To address this challenge, split FL (SFL) was recently proposed, in which part of the model is computed at the devices and the remainder at edge/cloud servers. Although SFL resolves the device-side computing resource constraints, it still suffers from fairness issues and slow convergence. To enable FL with these features, we propose a novel hierarchical SFL (HSFL) architecture that combines SFL with a hierarchical fashion of learning. To avoid a single point of failure and fairness issues, HSFL is truly distributed in nature (i.e., it uses distributed aggregations). We also define a cost function that is minimized with respect to relative local accuracy, transmit power, resource allocation, and association. Due to the non-convex nature of the problem, we propose a solution based on block successive upper bound minimization (BSUM). Finally, numerical results are presented.
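The split-computation idea at the core of SFL can be illustrated with a minimal sketch, assuming a toy fully connected model: the device runs the layers up to a cut layer and transmits the intermediate activation ("smashed data") over the wireless link, and the server finishes the forward pass. The layer sizes and weights below are illustrative only, not from the paper.

```python
# Hedged sketch of split computation between a device and an edge/cloud
# server. Weights and the cut-layer position are hypothetical.

def relu(x):
    return [max(0.0, v) for v in x]

def dense(x, w):
    # w is a list of rows, one per output unit
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def device_forward(x, device_layers):
    """Run the device-side layers up to the cut layer."""
    for w in device_layers:
        x = relu(dense(x, w))
    return x  # intermediate activation sent to the server

def server_forward(activation, server_layers):
    """Finish the forward pass on the edge/cloud server."""
    x = activation
    for w in server_layers[:-1]:
        x = relu(dense(x, w))
    return dense(x, server_layers[-1])  # final output

# toy model: 2 inputs -> 2 hidden units (device) -> 1 output (server)
device_layers = [[[1.0, 0.0], [0.0, 1.0]]]
server_layers = [[[0.5, 0.5]]]
smashed = device_forward([2.0, -3.0], device_layers)
out = server_forward(smashed, server_layers)
```

Only the activation at the cut layer crosses the network, which is what lets resource-constrained IoT sensors offload the bulk of the computation.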
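The BSUM approach mentioned in the abstract cycles over blocks of variables and, at each step, minimizes a convex surrogate that upper-bounds the objective in that block. As a hedged sketch under a toy objective (the paper's actual cost couples local accuracy, transmit power, resource allocation, and association, which is not reproduced here), the exact per-block minimizer serves as the tightest surrogate:

```python
# Hedged sketch of block successive upper bound minimization (BSUM) on
# the illustrative objective f(x, y) = (x-1)^2 + (y-2)^2 + x*y.
# Each block update solves argmin over one variable with the other fixed.

def bsum(x, y, iters=50):
    for _ in range(iters):
        # block 1: d/dx f = 2(x-1) + y = 0  ->  x = 1 - y/2
        x = 1.0 - y / 2.0
        # block 2: d/dy f = 2(y-2) + x = 0  ->  y = 2 - x/2
        y = 2.0 - x / 2.0
    return x, y

x, y = bsum(0.0, 0.0)  # converges toward the stationary point (0, 2)
```

Because each block subproblem here is strongly convex, the iterates contract geometrically to the stationary point; in the paper's non-convex setting, BSUM guarantees convergence to a stationary point rather than a global minimum.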
Keywords
Federated learning, Internet of Things, split learning, hierarchical federated learning