Soft Weight-Sharing for Neural Network Compression

ICLR (2017)

Abstract
The success of deep learning in numerous application domains has created the desire to run and train deep networks on mobile devices. This, however, conflicts with their computationally, memory- and energy-intensive nature, leading to a growing interest in compression. Recent work by Han et al. (2016) proposes a pipeline that involves retraining, pruning, and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved by using a version of soft weight-sharing (Nowlan & Hinton, 1992). Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle.
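The core idea of soft weight-sharing is to retrain the network under a mixture-of-Gaussians prior over its weights, with learnable means, variances, and mixing proportions: weights collapse onto the mixture means (quantization), while a component pinned at zero absorbs redundant weights (pruning). Below is a minimal illustrative sketch in PyTorch of such a mixture prior added as a penalty to the task loss; the class name, component count, initialization, and trade-off coefficient are assumptions for the example, not values from the paper.

```python
import math
import torch
import torch.nn as nn

class GaussianMixturePrior(nn.Module):
    """Negative log-likelihood of weights under a learnable Gaussian mixture.

    Illustrative sketch: one component is pinned at zero so that retraining
    both clusters weights (quantization) and drives many to zero (pruning).
    """

    def __init__(self, n_components=16):  # component count is an assumption
        super().__init__()
        self.means = nn.Parameter(torch.linspace(-0.6, 0.6, n_components))
        self.log_stds = nn.Parameter(torch.full((n_components,), -2.0))
        self.logits = nn.Parameter(torch.zeros(n_components))  # mixing proportions

    def forward(self, weights):
        w = weights.view(-1, 1)              # (n_weights, 1)
        means = self.means.clone()
        means[0] = 0.0                       # zero-pinned component for pruning
        stds = self.log_stds.exp()
        log_pi = torch.log_softmax(self.logits, dim=0)
        # log N(w | mu_j, sigma_j^2), broadcast to (n_weights, n_components)
        log_prob = (-0.5 * ((w - means) / stds) ** 2
                    - self.log_stds - 0.5 * math.log(2 * math.pi))
        # Mixture log-likelihood per weight, summed over weights and negated.
        return -torch.logsumexp(log_pi + log_prob, dim=1).sum()

# Usage sketch: penalize the task loss with the prior's NLL during retraining.
model = nn.Linear(784, 10)
prior = GaussianMixturePrior()
opt = torch.optim.Adam(list(model.parameters()) + list(prior.parameters()), lr=1e-3)

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
all_w = torch.cat([p.view(-1) for p in model.parameters()])
loss = nn.functional.cross_entropy(model(x), y) + 1e-5 * prior(all_w)
loss.backward()
opt.step()
```

After such retraining, each weight would be replaced by the mean of the mixture component most responsible for it, yielding a quantized network in which weights assigned to the zero component can be pruned.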