DyPrune: Dynamic Pruning Rates for Neural Networks

Richard Adolph Aires Jonker, Roshan Poudel, Olga Fajarda, Jose Luis Oliveira, Rui Pedro Lopes, Sergio Matos

Progress in Artificial Intelligence, EPIA 2023, Part I (2023)

Abstract
Neural networks have achieved remarkable success in various applications such as image classification, speech recognition, and natural language processing. However, the growing size of neural networks poses significant challenges in terms of memory usage, computational cost, and deployment on resource-constrained devices. Pruning is a popular technique to reduce the complexity of neural networks by removing unnecessary connections, neurons, or filters. In this paper, we present novel pruning algorithms that can reduce the number of parameters in neural networks by up to 98% without sacrificing accuracy. This is done by scaling the pruning rate of the models to the size of the model and scheduling the pruning to execute throughout the training of the model. Code related to this work is openly available.
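The abstract describes two ingredients: a pruning rate scaled to the model's size, and a schedule that spreads pruning across training rather than applying it once. The paper's exact algorithm is not reproduced here; the sketch below illustrates the general idea with schedule-based magnitude pruning, where the function names, the cubic ramp, and the size-scaled target are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def target_sparsity(num_params, base=0.5, cap=0.98, scale=1e6):
    # Hypothetical size-scaled target: larger models tolerate more pruning,
    # capped at 98% (the maximum reduction reported in the abstract).
    return min(cap, base + 0.1 * np.log10(1.0 + num_params / scale))

def sparsity_schedule(step, total_steps, final_sparsity):
    # Ramp sparsity from 0 to final_sparsity over training (cubic ramp
    # assumed here), so pruning executes throughout training.
    t = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - t) ** 3)

def magnitude_prune(weights, sparsity):
    # Zero out the smallest-magnitude fraction of weights.
    k = int(weights.size * sparsity)
    if k <= 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return weights * (np.abs(weights) > threshold)
```

In a training loop, one would call `sparsity_schedule` at each pruning step and apply `magnitude_prune` to each layer's weights, so the network is pruned gradually while it continues to learn.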
Keywords
Machine learning, Neural networks, Pruning