Federated Topic Model and Model Pruning Based on Variational Autoencoder

Lecture Notes in Electrical Engineering (2023)

Abstract
Topic modeling can uncover themes and patterns in large document collections. However, when the analysis spans data held by multiple parties, data privacy becomes a key concern. Federated learning allows multiple parties to train a model jointly while protecting privacy, but this comes at a cost: the federated setting introduces communication and performance challenges. To address these problems, this paper proposes a method that builds a federated topic model while preserving the privacy of each node, and uses neural network pruning to accelerate the model. In addition, to handle the trade-off between training time and inference accuracy, two methods for determining the pruning rate are proposed. The first prunes slowly throughout the entire training process; it accelerates training only modestly but ensures that the pruned model achieves higher accuracy. The second reaches the target pruning rate early in training and then continues training the smaller model; it may discard more useful information but completes training faster. Experimental results show that the proposed variational-autoencoder-based federated topic model with pruning substantially accelerates both training and inference while maintaining the model's performance.
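The two pruning-rate strategies can be illustrated with a minimal sketch. The schedule functions, the target rate, and the round counts below are illustrative assumptions, not values or code from the paper.

```python
# Illustrative sketch of the two pruning-rate schedules described in the
# abstract. All names and numbers (target_rate, total_rounds, ramp_rounds)
# are assumptions for illustration only.

def slow_schedule(round_idx: int, total_rounds: int, target_rate: float) -> float:
    """Strategy 1: prune gradually over the whole training run."""
    return target_rate * min(round_idx / total_rounds, 1.0)

def fast_schedule(round_idx: int, ramp_rounds: int, target_rate: float) -> float:
    """Strategy 2: reach the target pruning rate early, then keep training
    the smaller model for the remaining rounds."""
    return target_rate * min(round_idx / ramp_rounds, 1.0)

if __name__ == "__main__":
    total_rounds, ramp_rounds, target_rate = 100, 20, 0.5
    for r in (10, 20, 50, 100):
        print(f"round {r:3d}: slow={slow_schedule(r, total_rounds, target_rate):.2f} "
              f"fast={fast_schedule(r, ramp_rounds, target_rate):.2f}")
```

Under this sketch, the slow schedule only reaches the target rate at the end of training (favoring accuracy), while the fast schedule reaches it after the ramp rounds (favoring training speed), matching the trade-off described above.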
Keywords
model pruning, topic