Resource-Aware Federated Neural Architecture Search over Heterogeneous Mobile Devices

IEEE Transactions on Big Data (2022)

Abstract
Federated learning has recently been proposed as a way for many clients to collaboratively train a machine learning model in a privacy-preserving manner. However, it also amplifies the difficulty of designing a good neural network architecture, especially when heterogeneous mobile devices are involved. To this end, we propose a novel neural architecture search algorithm, namely FedNAS, which can automatically generate a set of optimal models under federated settings. The main idea is to decouple the two primary steps of the NAS process, i.e., model search and model training, and distribute them separately to the cloud and the devices. FedNAS tackles the primary challenge of limited on-device computational and communication resources by fully exploiting a key opportunity: candidate models need not be fully retrained during the architecture search process. It incorporates three key optimizations: parallel candidate training on partial clients, early dropping of candidates with inferior performance, and dynamic round numbers. Evaluated on typical CNN architectures and large-scale datasets, FedNAS achieves model accuracy comparable to a state-of-the-art NAS algorithm that trains models with centralized data, while reducing the client cost by up to 200× or more compared to a straightforward design of federated NAS.
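To make the decoupled search loop concrete, below is a minimal Python sketch of how a cloud-side controller might combine the three optimizations named in the abstract: parallel candidate training on partial clients, early dropping of inferior candidates, and dynamic round numbers. Everything here, including the Candidate class, the train_on_clients helper, and all parameter values, is a hypothetical illustration under assumed semantics, not the paper's actual implementation.

import random

class Candidate:
    """A candidate architecture tracked during the search (hypothetical)."""
    def __init__(self, arch_id):
        self.arch_id = arch_id
        self.accuracy = 0.0

def train_on_clients(candidate, clients):
    """Stand-in for one round of federated training of `candidate`
    on the given client subset. A real system would ship the model,
    collect updates, and aggregate; here progress is simulated."""
    candidate.accuracy += random.uniform(0.0, 0.1)
    return candidate

def search(candidates, clients, max_rounds=50, drop_fraction=0.5,
           clients_per_candidate=4, patience=5):
    best_seen, stall = 0.0, 0
    for rnd in range(max_rounds):
        # Optimization 1: train each surviving candidate in parallel
        # on a small partition of clients rather than the full fleet.
        random.shuffle(clients)
        for i, cand in enumerate(candidates):
            subset = clients[i * clients_per_candidate:
                             (i + 1) * clients_per_candidate]
            train_on_clients(cand, subset)

        # Optimization 2: early-drop the bottom fraction of candidates
        # based on their (only partially retrained) accuracy ranking.
        candidates.sort(key=lambda c: c.accuracy, reverse=True)
        keep = max(1, int(len(candidates) * (1 - drop_fraction)))
        candidates = candidates[:keep]

        # Optimization 3: dynamic round numbers -- stop once the best
        # candidate plateaus instead of exhausting a fixed round budget.
        if candidates[0].accuracy > best_seen + 1e-3:
            best_seen, stall = candidates[0].accuracy, 0
        else:
            stall += 1
        if stall >= patience or len(candidates) == 1:
            break
    return candidates

clients = [f"client_{i}" for i in range(32)]
pool = [Candidate(i) for i in range(8)]
survivors = search(pool, clients)
print("surviving architectures:", [c.arch_id for c in survivors])

In this sketch the client cost savings come from the same three levers the abstract names: each candidate touches only clients_per_candidate devices per round, losing candidates stop consuming any client resources once dropped, and the loop terminates as soon as accuracy stops improving.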
Keywords
Federated learning, neural architecture search, heterogeneous devices, resource-constrained