
Game-Theoretic Incentive Mechanism for Blockchain-Based Federated Learning

Wenzheng Tang, Erwu Liu, Wei Ni, Xinyu Qu, Butian Huang, Kezhi Li, Dusit Niyato, Abbas Jamalipour

IEEE Transactions on Mobile Computing (TMC), 2025. CCF A, SCI Zone 2

College of Electronics and Information Engineering

Abstract
Blockchain-based federated learning (BFL) has gained attention for its potential to establish decentralized trust. While existing research primarily focuses on personalized frameworks for various applications, essential aspects, including incentive mechanisms critical for ensuring stable system operation, remain under-explored. To bridge this gap, we propose a game-theoretic incentive mechanism designed to foster active participation in BFL tasks. Specifically, we model a BFL system comprising a model owner (MO), i.e., task publisher, multiple miners, and training terminals, framing their interactions through two-tier Stackelberg games. In the first-tier game, the MO designs reward strategies to incentivize training terminals to contribute more data, enhancing model accuracy. The second-tier game introduces a multi-leader multi-follower Stackelberg game, enabling miners to set model packaging prices based on competitors' strategies and anticipated user behavior. By deriving the Stackelberg equilibrium, we identify optimal strategies for all participants, leading to an incentive mechanism that balances individual interests with overall performance. Compared with benchmark schemes, our incentive mechanism achieves 5.8% and 53.4% higher utilities in the two games, respectively, accelerating convergence and improving accuracy.
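To make the first-tier game concrete, below is a minimal numerical sketch under stand-in assumptions, not the paper's actual utility functions: terminals share the MO's reward R in proportion to their data contributions and pay a linear training cost, while the MO earns a logarithmic accuracy gain from the total contributed data. Backward induction then reduces to computing the terminals' Nash equilibrium for each candidate reward and letting the MO pick the reward maximizing its own utility. All cost and gain parameters here are hypothetical.

```python
import numpy as np

def follower_equilibrium(R, costs, iters=200, tol=1e-9):
    """Best-response iteration for the terminals' contribution game.

    Assumed (hypothetical) terminal utility with proportional reward
    sharing and linear training cost:
        u_i(x) = R * x_i / sum(x) - c_i * x_i.
    Setting du_i/dx_i = 0 gives the closed-form best response
        x_i = max(0, sqrt(R * X_others / c_i) - X_others),
    where X_others is the total contribution of the other terminals.
    """
    n = len(costs)
    x = np.full(n, 1.0)  # positive initial guess
    for _ in range(iters):
        x_old = x.copy()
        for i in range(n):
            others = x.sum() - x[i]
            x[i] = max(0.0, np.sqrt(R * others / costs[i]) - others)
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

def leader_utility(R, costs, gain=50.0):
    """MO utility: concave accuracy gain from total data minus reward paid."""
    x = follower_equilibrium(R, costs)
    return gain * np.log1p(x.sum()) - R, x

# Leader's problem: grid search over the reward R (a numerical stand-in
# for the closed-form Stackelberg solution derived in the paper).
costs = np.array([0.8, 1.0, 1.2, 1.5])   # hypothetical unit training costs
grid = np.linspace(0.1, 60.0, 600)
best_R = max(grid, key=lambda R: leader_utility(R, costs)[0])
u, x = leader_utility(best_R, costs)
print(f"optimal reward R* ~ {best_R:.2f}, MO utility ~ {u:.2f}")
print("equilibrium contributions:", np.round(x, 3))
```

As a sanity check on the sketch, in the symmetric case with all costs equal to c the iteration recovers the known closed form x_i = R(n-1)/(n^2 c) of this proportional-sharing game.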
Keywords
Federated learning, blockchain, Stackelberg game, incentive mechanism, multi-leader multi-follower game

Key points: This paper proposes a game-theoretic incentive mechanism for blockchain-based federated learning that balances individual interests with overall system performance, effectively increasing participants' data contributions and model accuracy.

Method: The study adopts a two-tier Stackelberg game model. In the first-tier game, the model owner designs reward strategies to incentivize training terminals to contribute more data; in the second-tier game, multiple miners set model packaging prices based on competitors' strategies and anticipated user behavior, as sketched below.
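The second-tier, multi-leader multi-follower game can be sketched numerically as well. This is a hedged illustration: the paper derives miner demand from anticipated user behavior, whereas here a simple linear demand curve with hypothetical parameters is assumed instead. Under that assumption each miner's best-response price has a closed form, and iterating best responses converges to the pricing equilibrium.

```python
import numpy as np

def miner_price_equilibrium(a=10.0, b=1.0, g=0.5,
                            costs=(1.0, 1.5, 2.0), iters=100, tol=1e-9):
    """Best-response dynamics for the miners' (leaders') pricing game.

    Assumed (hypothetical) linear follower demand for miner j:
        d_j(p) = a - b * p_j + g * mean(p_others),
    i.e., demand falls with j's own packaging price and rises when
    competitors are expensive. Miner j's utility is
        u_j = (p_j - c_j) * d_j(p),
    which yields the closed-form best response
        p_j = (a + g * mean(p_others) + b * c_j) / (2 * b).
    The iteration is a contraction (and thus converges) when g < 2b.
    """
    costs = np.asarray(costs, dtype=float)
    n = len(costs)
    p = costs + 1.0  # initial prices above cost
    for _ in range(iters):
        p_old = p.copy()
        for j in range(n):
            mean_others = (p.sum() - p[j]) / (n - 1)
            p[j] = (a + g * mean_others + b * costs[j]) / (2 * b)
        if np.max(np.abs(p - p_old)) < tol:
            break
    demand = a - b * p + g * (p.sum() - p) / (n - 1)
    return p, (p - costs) * demand

prices, utils = miner_price_equilibrium()
print("equilibrium packaging prices:", np.round(prices, 3))
print("miner utilities:             ", np.round(utils, 3))
```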

Experiments: The experiments simulate a blockchain-based federated learning system comprising a model owner, multiple miners, and training terminals, validated on a custom dataset. Results show that the proposed incentive mechanism yields 5.8% and 53.4% utility improvements in the two games, respectively, accelerating convergence and improving model accuracy.