Weight Distillation: Transferring the Knowledge in Neural Network Parameters

Yandan Lin, Yanyang Li, Ziyang Wang, Bei Li, Qingyun Du, Tong Xiao, Jun Zhu

arXiv (Cornell University), 2020

Abstract
Knowledge distillation has been proven to be effective in model acceleration and compression. It allows a small network to learn to generalize in the same way as a large network. Recent successes in pre-training suggest the effectiveness of transferring model parameters. Inspired by this, we investigate methods of model acceleration and compression in another line of research. We propose Weight Distillation to transfer the knowledge in the large network's parameters through a parameter generator. Our experiments on the WMT16 En-Ro, NIST12 Zh-En, and WMT14 En-De machine translation tasks show that weight distillation can train a small network that is 1.88–2.94× faster than the large network but with competitive performance. With the same-sized small network, weight distillation outperforms knowledge distillation by 0.51–1.82 BLEU points.
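
The central idea, producing a student's weights from the teacher's weights through a learned parameter generator, can be illustrated with a short PyTorch sketch. This is a minimal, hypothetical example, not the paper's implementation: the class names (ParameterGenerator, GeneratedLinear), the row/column projection parameterization, and all shapes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ParameterGenerator(nn.Module):
    """Learned mapping from a teacher weight matrix to a (smaller) student weight matrix.

    The row/column projections used here are an illustrative choice, not the
    paper's exact formulation.
    """
    def __init__(self, teacher_out, teacher_in, student_out, student_in):
        super().__init__()
        # Projections mapping a (teacher_out x teacher_in) teacher weight
        # to a (student_out x student_in) student weight.
        self.row_proj = nn.Parameter(torch.randn(student_out, teacher_out) * 0.02)
        self.col_proj = nn.Parameter(torch.randn(teacher_in, student_in) * 0.02)
        self.bias = nn.Parameter(torch.zeros(student_out, student_in))

    def forward(self, teacher_weight):
        # W_student = P_row @ W_teacher @ P_col + B
        return self.row_proj @ teacher_weight @ self.col_proj + self.bias


class GeneratedLinear(nn.Module):
    """A student linear layer whose weight is generated from a frozen teacher weight."""
    def __init__(self, teacher_linear, student_out, student_in):
        super().__init__()
        self.teacher_weight = teacher_linear.weight.detach()  # frozen teacher parameters
        t_out, t_in = self.teacher_weight.shape
        self.generator = ParameterGenerator(t_out, t_in, student_out, student_in)

    def forward(self, x):
        w = self.generator(self.teacher_weight.to(x.device))
        return nn.functional.linear(x, w)


# Usage: generate a 256x256 student layer from a 512x512 teacher layer.
teacher = nn.Linear(512, 512)
student_layer = GeneratedLinear(teacher, student_out=256, student_in=256)
out = student_layer(torch.randn(8, 256))
print(out.shape)  # torch.Size([8, 256])
```

In a full weight-distillation setup the generator would be trained with a distillation objective so that the generated student weights inherit the teacher's knowledge; the sketch only shows how a student weight matrix can be produced from a frozen teacher weight matrix.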
Keywords

distillation, neural network