Communication-Efficient Federated Learning: A Second Order Newton-Type Method With Analog Over-the-Air Aggregation

IEEE Transactions on Green Communications and Networking (2022)

Abstract
Owing to their fast convergence, second-order Newton-type learning methods have recently received attention in the federated learning (FL) setting. However, current solutions rely on communicating the Hessian matrices from the devices to the parameter server at every iteration, incurring a large communication overhead and calling for novel communication-efficient Newton-type learning methods. In this article, we propose a novel second-order Newton-type method that, like its first-order counterpart, requires every device to share only a model-sized vector at each iteration while hiding the gradient and Hessian information. As a result, the proposed approach is significantly more communication-efficient and privacy-preserving. Furthermore, by leveraging the over-the-air aggregation principle, our method inherits additional privacy guarantees and achieves even higher communication efficiency gains. In particular, we formulate the problem of learning the inverse Hessian-gradient product as a quadratic problem that is solved in a distributed way. The framework alternates between updating the inverse Hessian-gradient product using a few alternating direction method of multipliers (ADMM) steps, and updating the global model using Newton's method. Numerical results show that our proposed approach is more communication-efficient and scalable under noisy channels across different scenarios and multiple datasets.
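The abstract describes the alternation only at a high level. The NumPy sketch below is a hypothetical illustration, not the paper's implementation: each device k holds a local gradient g_k and Hessian H_k, a few consensus-ADMM steps estimate the Newton direction (the inverse Hessian-gradient product), and the global model is then updated Newton-style. All names (local_admm_step, newton_admm_round, rho, admm_steps, lr) are illustrative assumptions, and the averaging step is an exact noiseless mean standing in for the paper's analog over-the-air aggregation.

```python
import numpy as np

def local_admm_step(H_k, g_k, z, u_k, rho):
    """One local ADMM update: minimize the augmented local quadratic in d_k.

    Solves (H_k + rho * I) d_k = g_k + rho * (z - u_k).
    """
    n = z.size
    return np.linalg.solve(H_k + rho * np.eye(n), g_k + rho * (z - u_k))

def newton_admm_round(H_list, g_list, w, admm_steps=3, rho=1.0, lr=1.0):
    """One global round: a few ADMM steps estimate the Newton direction, then update w.

    The target direction solves min_d 0.5 * d^T (sum_k H_k) d - (sum_k g_k)^T d,
    i.e. d = H^{-1} g for the aggregated Hessian and gradient.
    """
    K, n = len(H_list), w.size
    z = np.zeros(n)                       # consensus estimate of the Newton direction
    u = [np.zeros(n) for _ in range(K)]   # per-device dual variables
    for _ in range(admm_steps):
        d = [local_admm_step(H_list[k], g_list[k], z, u[k], rho) for k in range(K)]
        # In the paper this aggregation would happen over the air; here it is
        # an exact mean of the device messages for illustration only.
        z = np.mean([d[k] + u[k] for k in range(K)], axis=0)
        for k in range(K):
            u[k] += d[k] - z
    return w - lr * z                      # Newton-type update with the estimated direction
```

Note that each device only ever transmits a model-sized vector (d_k + u_k) per ADMM step, which is the communication pattern the abstract emphasizes.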
Keywords
Distributed optimization, communication-efficient federated learning, second-order methods, analog over-the-air aggregation, ADMM