OpenFedLLM: Training Large Language Models on Decentralized Private Data via Federated Learning
CoRR (2024)
Abstract
Trained on massive publicly available data, large language models (LLMs) have
demonstrated tremendous success across various fields. While more data
contributes to better performance, a disconcerting reality is that high-quality
public data will be exhausted in a few years. In this paper, we offer a
potential next step for contemporary LLMs: collaborative and privacy-preserving
LLM training on underutilized distributed private data via federated
learning (FL), where multiple data owners collaboratively train a shared model
without transmitting raw data. To achieve this, we build a concise, integrated,
and research-friendly framework/codebase, named OpenFedLLM. It covers federated
instruction tuning for enhancing instruction-following capability, federated
value alignment for aligning with human values, and 7 representative FL
algorithms. In addition, OpenFedLLM supports training on diverse domains,
covering 8 training datasets, and provides comprehensive evaluations covering
30+ metrics. Through extensive experiments, we observe that all FL algorithms
outperform local training when training LLMs, demonstrating a clear
performance improvement across a variety of settings. Notably, on a financial
benchmark, Llama2-7B fine-tuned with any FL algorithm outperforms GPT-4 by a
significant margin, while the model obtained through individual training
cannot, providing strong motivation for clients to participate in FL. The code
is available at
https://github.com/rui-ye/OpenFedLLM.
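
The core FL mechanism the abstract describes, in which each data owner fine-tunes locally and only model parameters (never raw data) are shared for aggregation, can be illustrated with a minimal FedAvg-style sketch. This is an illustrative sketch only, not code from the OpenFedLLM repository: the function name `fedavg_aggregate` and all variable names are assumptions.

```python
# Minimal sketch of FedAvg-style aggregation: the server combines the
# clients' locally fine-tuned model states into one shared global model.
# Illustrative only; names are hypothetical, not from OpenFedLLM.
from typing import Dict, List

import torch


def fedavg_aggregate(
    client_states: List[Dict[str, torch.Tensor]],
    client_weights: List[float],
) -> Dict[str, torch.Tensor]:
    """Weighted average of client model states (weights are typically
    proportional to each client's local dataset size)."""
    total = sum(client_weights)
    global_state: Dict[str, torch.Tensor] = {}
    for name in client_states[0]:
        # Each parameter tensor is averaged across clients; raw training
        # data never leaves the clients, only these parameters do.
        global_state[name] = sum(
            (w / total) * state[name]
            for w, state in zip(client_weights, client_states)
        )
    return global_state
```

In one communication round, each client would fine-tune the current global model on its private data, send back the resulting state dict, and receive the aggregated result of `fedavg_aggregate` as the next global model.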