Training Compact DNNs with ℓ1/2 Regularization

Pattern Recognition (2023)

Abstract
• We propose a network compression model based on ℓ1/2 regularization. To the best of our knowledge, it is the first work utilizing non-Lipschitz continuous regularization to compress DNNs.
• We strictly prove the correspondence between ℓp (0 …
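The abstract does not state the paper's optimization scheme, so as a rough, non-authoritative illustration only, the sketch below shows one common way an ℓ1/2 quasi-norm penalty can be attached to a standard training loss in PyTorch. The model, the regularization strength lam, and the small smoothing constant eps are assumptions of this sketch, not the paper's method; in particular, since the ℓ1/2 penalty is non-Lipschitz at zero, the paper's actual algorithm need not rely on the naive gradient-based treatment shown here.

```python
# Illustrative sketch only (not the algorithm from the paper): adding an
# l_{1/2} quasi-norm penalty to an ordinary PyTorch training loss.
import torch
import torch.nn as nn


def l_half_penalty(model: nn.Module, eps: float = 1e-8) -> torch.Tensor:
    """Sum of |w|^(1/2) over all weight tensors.

    eps is a hypothetical smoothing constant: it keeps gradients finite at
    w = 0, where the true l_{1/2} penalty is non-Lipschitz.
    """
    return sum((p.abs() + eps).sqrt().sum()
               for name, p in model.named_parameters() if "weight" in name)


# Hypothetical usage inside one training step (toy model and data):
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
lam = 1e-4  # assumed regularization strength

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = criterion(model(x), y) + lam * l_half_penalty(model)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Driving many weights toward exactly zero with such a penalty is what permits pruning and hence a compact network; the quality of the resulting sparsity pattern depends on the optimization scheme, which is the subject of the paper itself.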
Keywords
Deep neural networks, Model compression, ℓ1/2 quasi-norm, Non-Lipschitz regularization, Sparse optimization