Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood
CoRR (2024)
Abstract
Neural network sparsification is a promising avenue to save computational
time and memory costs, especially in an age where many successful AI models are
becoming too large to naïvely deploy on consumer hardware. While much work
has focused on different weight pruning criteria, the overall sparsifiability
of the network, i.e., its capacity to be pruned without quality loss, has often
been overlooked. We present Sparsifiability via the Marginal likelihood (SpaM),
a pruning framework that highlights the effectiveness of using the Bayesian
marginal likelihood in conjunction with sparsity-inducing priors for making
neural networks more sparsifiable. Our approach implements an automatic Occam's
razor that selects the most sparsifiable model that still explains the data
well, both for structured and unstructured sparsification. In addition, we
demonstrate that the pre-computed posterior Hessian approximation used in the
Laplace approximation can be re-used to define a cheap pruning criterion, which
outperforms many existing (more expensive) approaches. We demonstrate the
effectiveness of our framework, especially at high sparsity levels, across a
range of different neural network architectures and datasets.
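
The abstract's "cheap pruning criterion" reuses the diagonal posterior precision that a Laplace approximation already computes for the marginal likelihood: each weight is scored by precision times squared magnitude, an OBD-style saliency. Below is a minimal PyTorch sketch of that idea, not the authors' code: it uses a coarse empirical-Fisher surrogate for the Hessian, and the names `diagonal_fisher`, `laplace_prune_masks`, `prior_precision`, and `prune_fraction` are illustrative, not from the paper.

```python
# Hedged sketch of a Laplace-style pruning criterion: saliency
# s_i = (F_ii + prior_precision) * w_i^2, where F is a diagonal
# curvature estimate. This illustrates the idea from the abstract;
# it is not the paper's exact criterion or implementation.
import torch

def diagonal_fisher(model, loader, loss_fn, device="cpu"):
    """Accumulate squared mini-batch gradients as a coarse
    empirical-Fisher surrogate for the diagonal Hessian."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return fisher

def laplace_prune_masks(model, fisher, prior_precision=1.0, prune_fraction=0.9):
    """Keep the weights with the highest saliency
    (F_ii + prior_precision) * w_i^2; prune the rest."""
    scores = []
    for n, p in model.named_parameters():
        s = (fisher[n] + prior_precision) * p.detach() ** 2
        scores.append(s.flatten())
    flat = torch.cat(scores)
    k = max(1, int(prune_fraction * flat.numel()))
    threshold = torch.kthvalue(flat, k).values  # k-th smallest score
    masks, offset = {}, 0
    for n, p in model.named_parameters():
        numel = p.numel()
        masks[n] = (flat[offset:offset + numel] > threshold).reshape(p.shape)
        offset += numel
    return masks
```

The criterion is cheap in the sense the abstract describes: once the diagonal posterior precision has been computed for the Laplace marginal likelihood, scoring every weight is a single elementwise multiply, with no extra forward or backward passes at pruning time.
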