Improving Federated Learning With Quality-Aware User Incentive and Auto-Weighted Model Aggregation

IEEE Transactions on Parallel and Distributed Systems (2022)

Abstract
Federated learning enables distributed model training over various computing nodes, e.g., mobile devices, where, instead of sharing raw user data, computing nodes commit only model updates, without compromising data privacy. The quality of federated learning relies on the model updates contributed by computing nodes that train on their local data. However, owing to various factors (e.g., training data size, mislabeled data samples, skewed data distributions), the model-update quality of computing nodes can vary dramatically, and indiscriminately aggregating low-quality model updates can deteriorate the global model quality. To achieve efficient federated learning, in this paper we propose a novel framework named FAIR, i.e., Federated leArning with qualIty awaReness. In particular, FAIR integrates three major components: 1) learning quality estimation: we adopt the model aggregation weights (learned in the third component) to reversely quantify the individual learning quality of nodes in a privacy-preserving manner, and leverage historical learning records to infer the next-round learning quality; 2) quality-aware incentive mechanism: within the recruiting budget, we model a reverse auction problem to stimulate the participation of high-quality and low-cost computing nodes, and the mechanism is proven to be truthful, individually rational, and computationally efficient; and 3) auto-weighted model aggregation: based on gradient descent, we devise an auto-weighted model aggregation algorithm that automatically learns the optimal aggregation weights to further enhance the global model quality. Extensive experiments on real-world datasets and learning tasks demonstrate the efficacy of FAIR.
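To make the auto-weighted aggregation idea concrete, below is a minimal, hypothetical Python sketch of learning per-client aggregation weights by gradient descent. It is not the authors' FAIR algorithm: the softmax parameterization of the weights, the alignment objective against a reference direction `val_grad` (e.g., a gradient from a small trusted validation set), and the function names `learn_weights` and `aggregate` are illustrative assumptions only.

```python
# Hypothetical sketch of quality-aware (auto-weighted) model aggregation.
# NOT the FAIR algorithm from the paper: the objective and parameterization
# below are illustrative assumptions.
import numpy as np


def aggregate(updates, weights):
    """Weighted average of client model updates (one flat vector per client)."""
    return np.average(updates, axis=0, weights=weights)


def learn_weights(updates, val_grad, rounds=100, lr=0.5):
    """Learn aggregation weights by gradient descent.

    Weights are parameterized through a softmax so they stay positive and sum
    to one. The (assumed) objective rewards alignment between the aggregated
    update and a reference direction `val_grad`.
    """
    n = len(updates)
    theta = np.zeros(n)                      # softmax logits
    U = np.stack(updates)                    # shape: (n_clients, dim)
    for _ in range(rounds):
        w = np.exp(theta) / np.exp(theta).sum()
        # Loss = -<w @ U, val_grad>, so d(loss)/d(aggregated update) = -val_grad.
        grad_w = U @ (-val_grad)             # d(loss)/d(w_i)
        # Backpropagate through the softmax.
        grad_theta = w * (grad_w - np.dot(w, grad_w))
        theta -= lr * grad_theta
    return np.exp(theta) / np.exp(theta).sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_dir = rng.normal(size=50)
    # Three "good" clients near the reference direction, one noisy client.
    clients = [true_dir + 0.1 * rng.normal(size=50) for _ in range(3)]
    clients.append(5.0 * rng.normal(size=50))
    w = learn_weights(clients, val_grad=true_dir)
    print("learned aggregation weights:", np.round(w, 3))
    print("aggregated update shape:", aggregate(clients, w).shape)
```

In this sketch, clients whose updates align poorly with the reference direction receive smaller aggregation weights, which mirrors the paper's goal of down-weighting low-quality model updates; the paper's actual weight-learning objective and its privacy-preserving quality estimation differ and are described in the full text.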
Keywords
Edge computing, incentive mechanism, learning quality, mobile computing, model aggregation, federated learning