One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget

ICIP (2022)

Abstract
Introducing sparsity in a convnet has proven an efficient way to reduce its complexity while keeping its performance almost intact. Most of the time, sparsity is introduced using a three-stage pipeline: 1) training the model to convergence, 2) pruning the model, 3) fine-tuning the pruned model to recover performance. The last two steps are often performed iteratively, leading to reasonable results but also to a time-consuming process. In our work, we propose to remove the first step of the pipeline and to combine the other two into a single training-pruning cycle, allowing the model to jointly learn the optimal weights while being pruned. We do this by introducing a novel pruning schedule, named One-Cycle Pruning (OCP), which starts pruning at the very beginning of training and continues until its very end. Experiments conducted on a variety of combinations of architectures (VGG-16, ResNet-18), datasets (CIFAR-10, CIFAR-100, Caltech-101), and sparsity levels (80%, 90%, 95%) show that OCP not only consistently outperforms common pruning schedules such as One-Shot, Iterative, and Automated Gradual Pruning, but also drastically reduces the required training budget. Moreover, experiments following the Lottery Ticket Hypothesis show that OCP finds higher-quality and more stable pruned networks.
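To make the single training-pruning cycle concrete, here is a minimal PyTorch sketch of the idea: the sparsity target is raised at every training step, from the first step to the last, while the network keeps learning. The ramp shape (a cubic polynomial here), the unstructured layer-wise magnitude-pruning criterion, and the helper names (`target_sparsity`, `prune_by_magnitude`, `train_one_cycle`) are illustrative assumptions; the abstract does not specify the exact OCP schedule function.

```python
import torch
import torch.nn as nn

def target_sparsity(step: int, total_steps: int, final_sparsity: float) -> float:
    """Monotonically increasing sparsity target: 0 at the first step,
    `final_sparsity` at the last. The cubic ramp is a placeholder,
    not the exact OCP schedule."""
    progress = step / max(1, total_steps - 1)
    return final_sparsity * (1.0 - (1.0 - progress) ** 3)

def prune_by_magnitude(model: nn.Module, sparsity: float) -> None:
    """Zero out the smallest-magnitude weights of each conv/linear layer
    (assumed unstructured, layer-wise magnitude pruning)."""
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            weight = module.weight.data
            k = int(sparsity * weight.numel())
            if k == 0:
                continue
            threshold = weight.abs().flatten().kthvalue(k).values
            weight.mul_((weight.abs() > threshold).float())

def train_one_cycle(model, loader, total_steps, final_sparsity=0.9, lr=0.1):
    """Single training-pruning cycle: the model is pruned from the very
    first step to the very last while its weights are being learned."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    step = 0
    while step < total_steps:
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
            # Re-apply the (growing) sparsity target after each weight update.
            prune_by_magnitude(model, target_sparsity(step, total_steps, final_sparsity))
            step += 1
            if step >= total_steps:
                break
    return model
```

Sweeping `final_sparsity` over 0.8, 0.9, and 0.95 would mirror the sparsity levels reported in the abstract; everything else in the sketch is a stand-in for the schedule described in the paper.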
Keywords
pruning, training, one-cycle