FLOD: Oblivious Defender for Private Byzantine-Robust Federated Learning with Dishonest-Majority

Computer Security - ESORICS 2021, Part I (2021)

Abstract
Privacy and Byzantine-robustness are two major concerns of federated learning (FL), but mitigating both threats simultaneously is highly challenging: privacy-preserving strategies prohibit access to individual model updates to avoid leakage, while Byzantine-robust methods require such access for comprehensive mathematical analysis. Besides, most Byzantine-robust methods only work in the honest-majority setting. We present FLOD, a novel oblivious defender for private Byzantine-robust FL in the dishonest-majority setting. At its core, we propose a novel Hamming distance-based aggregation method that resists >1/2 Byzantine attacks using a small root dataset and server model for bootstrapping trust. Furthermore, we employ two non-colluding servers and use additive homomorphic encryption (AHE) and secure two-party computation (2PC) primitives to construct efficient privacy-preserving building blocks for secure aggregation, in which we propose two novel in-depth variants of Beaver multiplication triples (MTs) to significantly reduce the overhead of Bit-to-Arithmetic (Bit2A) conversion and vector weighted sum aggregation (VSWA). Experiments on real-world and synthetic datasets demonstrate our effectiveness and efficiency: (i) FLOD defeats known Byzantine attacks with a negligible effect on accuracy and convergence, (ii) it achieves a reduction of ~2x in the offline (resp. online) overhead of Bit2A and VSWA compared to ABY-AHE (resp. ABY-MT) based methods (NDSS'15), and (iii) it reduces total online communication and run-time by 167-1416x and 3.1-7.4x compared to FLGUARD (Cryptology ePrint 2021/025).
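To make the aggregation idea in the abstract concrete, below is a minimal plaintext sketch of one plausible Hamming distance-based robust aggregation rule. The sign quantization, the threshold tau, and the max(0, tau - distance) weighting are illustrative assumptions, not the paper's exact protocol; in FLOD the same computation would be carried out obliviously between two non-colluding servers using AHE and 2PC.

```python
# Plaintext sketch (assumed details) of Hamming distance-based aggregation:
# client updates are sign-quantized and compared against a server-model update
# trained on a small root dataset; clients that disagree too much get weight 0.
import numpy as np

def hamming_aggregate(client_updates, server_update, tau=None):
    """Weight each client update by its sign-agreement with the server update."""
    server_signs = np.sign(server_update) >= 0        # boolean sign vector
    d = server_update.size
    if tau is None:
        tau = d // 2                                  # assumed acceptance threshold

    weights = []
    for u in client_updates:
        client_signs = np.sign(u) >= 0
        hd = np.count_nonzero(client_signs != server_signs)  # Hamming distance
        weights.append(max(0, tau - hd))              # assumed weighting rule

    total = sum(weights)
    if total == 0:                                    # all clients rejected:
        return server_update.copy()                   # fall back to the root update
    # Vector weighted-sum aggregation (VSWA) over the accepted updates.
    return sum(w * u for w, u in zip(weights, client_updates)) / total

# Toy usage: two honest clients plus one sign-flipping attacker.
rng = np.random.default_rng(0)
server_up = rng.normal(size=10)
clients = [server_up + 0.1 * rng.normal(size=10),
           server_up + 0.1 * rng.normal(size=10),
           -server_up]                                # attacker
print(hamming_aggregate(clients, server_up))
```

In this sketch the attacker's sign-flipped update has a large Hamming distance to the server update, so its weight is clipped to zero and it is excluded from the weighted sum.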
Keywords
Privacy-preserving, Byzantine-robust, Federated Learning, Dishonest-majority