Differentially Private Training of Mixture of Experts Models
CoRR (2024)
Abstract
This position paper investigates the integration of Differential Privacy (DP)
in the training of Mixture of Experts (MoE) models within the field of natural
language processing. As Large Language Models (LLMs) scale to billions of
parameters, leveraging expansive datasets, they exhibit enhanced linguistic
capabilities and emergent abilities. However, this growth raises significant
computational and privacy concerns. Our study addresses these issues by
exploring the potential of MoE models, known for their computational
efficiency, and the application of DP, a standard for privacy preservation. We
present the first known attempt to train MoE models under the constraints of
DP, addressing the unique challenges posed by their architecture and the
complexities of DP integration. Our initial experimental studies demonstrate
that MoE models can be effectively trained with DP, achieving performance that
is competitive with their non-private counterparts. This initial study aims to
provide valuable insights and spur further research on privacy-preserving MoE
models, laying the groundwork for prospective developments in this evolving
field.
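As one illustration of the kind of training procedure the abstract refers to, the sketch below applies a manual DP-SGD update (per-example gradient clipping plus Gaussian noise) to a toy, soft-routed MoE layer in PyTorch. The model, routing scheme, and hyperparameters here are hypothetical assumptions for exposition and are not taken from the paper, which trains larger sparsely routed models.

```python
# Illustrative sketch only: a tiny soft-routed MoE layer trained with one
# manual DP-SGD step (per-example gradient clipping + Gaussian noise).
# All names and hyperparameters are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    def __init__(self, d_in=16, d_hidden=32, d_out=4, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_in, n_experts)  # router producing expert weights
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_out))
            for _ in range(n_experts)
        )

    def forward(self, x):
        # Dense (soft) routing keeps the sketch fully differentiable;
        # production MoE LLMs typically use sparse top-k routing instead.
        weights = F.softmax(self.gate(x), dim=-1)                # (B, E)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, D)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)         # (B, D)

def dp_sgd_step(model, batch_x, batch_y, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD step: clip each per-example gradient to `clip_norm`,
    sum, add Gaussian noise scaled by `noise_mult * clip_norm`, then average."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch_x, batch_y):  # per-example gradients
        model.zero_grad()
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s += g * scale  # accumulate the clipped per-example gradient

    batch_size = batch_x.shape[0]
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(p) * noise_mult * clip_norm
            p -= lr * (s + noise) / batch_size

# Toy usage on random data (illustration only).
model = TinyMoE()
x = torch.randn(8, 16)
y = torch.randint(0, 4, (8,))
dp_sgd_step(model, x, y)
```

Soft routing is used here only so that per-example gradients exist for every expert; handling sparse top-k routing and per-expert batch imbalance under DP is exactly the kind of architectural challenge the abstract alludes to.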