Energy-Efficient Wireless Federated Learning via Doubly Adaptive Quantization
CoRR (2024)
Abstract
Federated learning (FL) has been recognized as a viable distributed learning
paradigm for training a machine learning model across distributed clients
without uploading raw data. However, FL in wireless networks still faces two
major challenges, i.e., large communication overhead and high energy
consumption, which are exacerbated by client heterogeneity in dataset sizes and
wireless channels. While model quantization is effective for energy reduction,
existing works ignore adapting quantization to heterogeneous clients and FL
convergence. To address these challenges, this paper develops an energy
optimization problem of jointly designing quantization levels, scheduling
clients, allocating channels, and controlling computation frequencies (QCCF) in
wireless FL. Specifically, we derive an upper bound identifying the influence
of client scheduling and quantization errors on FL convergence. Under the
longterm convergence constraints and wireless constraints, the problem is
established and transformed into an instantaneous problem with Lyapunov
optimization. Solving Karush-Kuhn-Tucker conditions, our closed-form solution
indicates that the doubly adaptive quantization level rises with the training
process and correlates negatively with dataset sizes. Experiment results
validate our theoretical results, showing that QCCF consumes less energy with
faster convergence compared with state-of-the-art baselines.
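To make the quantization idea concrete, here is a minimal sketch of QSGD-style stochastic quantization with an adjustable level `q`, the kind of quantizer such schemes adapt per client and per round. This is an illustrative assumption, not the paper's exact quantizer: the function name `stochastic_quantize` and the normalization choice are hypothetical.

```python
import numpy as np

def stochastic_quantize(w, q):
    """Stochastically quantize vector w onto q uniform levels per sign.

    Each entry is mapped to a grid point {0, 1/q, ..., 1} of its
    norm-scaled magnitude, rounding up or down at random so the
    quantizer is unbiased in expectation (illustrative sketch only).
    """
    norm = np.linalg.norm(w)
    if norm == 0:
        return np.zeros_like(w)
    scaled = np.abs(w) / norm * q          # magnitudes in [0, q]
    lower = np.floor(scaled)
    prob_up = scaled - lower               # chance of rounding up
    levels = lower + (np.random.rand(*w.shape) < prob_up)
    return np.sign(w) * levels / q * norm  # back to original scale
```

A higher `q` lowers quantization error but costs more uplink bits per entry, which is the energy/accuracy trade-off the abstract's joint optimization balances; the closed-form result suggests increasing `q` as training proceeds.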