SAI: Self-Adjusting Incremental Quantile Estimation for Sparse Training of Neural Networks on Hardware Accelerators.

IEEE International Conference on High Performance Computing and Communications (2021)

Abstract
Supporting sparse training of neural networks has become a trend for hardware accelerators. Existing sparse training algorithms generally rely on sorting to compress neural network models, so accelerating sparse training in hardware is inseparable from a hardware implementation of sorting that is one-pass, storage-friendly, and free of hyperparameter dependence. In this paper, we propose a hardware-friendly, one-pass Self-Adjusting Incremental Quantile Estimation method, abbreviated as SAI, to replace the sorting operations commonly used in sparse training algorithms. Applied to sparse training of neural network models, SAI reduces the time cost of determining which connections to clip and which to activate to $O(n)$, and avoids the auxiliary storage and frequent I/O operations that sorting cannot escape. SAI-based sparse training algorithms show performance advantages at lower sparsity levels and can effectively exploit parallelism. We design the quantile estimation module of SAI and evaluate it on image classification, combined with a sparse training algorithm, on the CIFAR-10 dataset. Experimental results show that SAI-based sparse training algorithms achieve high accuracy and exhibit clear potential for parallel optimization.
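The abstract does not spell out the estimator itself, but the properties it lists (one-pass, no auxiliary storage, no sort) match generic incremental quantile estimators. The sketch below is a minimal Frugal-1U-style estimator in Python, used here to approximate a weight-magnitude threshold for pruning; the class name, step size, and usage are illustrative assumptions, not the paper's SAI algorithm.

```python
import random


class FrugalQuantile:
    """One-pass, O(1)-memory incremental quantile estimator (Frugal-1U style).

    Maintains a running estimate of the q-th quantile of a value stream
    without sorting or buffering the stream. This is a generic sketch of
    the kind of estimator the abstract describes, not the SAI method.
    """

    def __init__(self, q: float, step: float = 1e-3, init: float = 0.0):
        self.q = q            # target quantile in (0, 1), e.g. the sparsity ratio
        self.step = step      # fixed adjustment step (self-adjusting variants tune this)
        self.estimate = init  # current quantile estimate

    def update(self, x: float) -> float:
        # Nudge the estimate up with probability q when the sample exceeds it,
        # and down with probability (1 - q) when it falls below it.
        if x > self.estimate and random.random() < self.q:
            self.estimate += self.step
        elif x < self.estimate and random.random() < 1.0 - self.q:
            self.estimate -= self.step
        return self.estimate


# Usage sketch: estimate a magnitude threshold for 90% sparsity in one pass
# over the weights, then clip connections below it -- no sort, no extra buffer.
if __name__ == "__main__":
    weights = [random.gauss(0.0, 1.0) for _ in range(100_000)]
    est = FrugalQuantile(q=0.9, step=1e-2)
    for w in weights:
        est.update(abs(w))
    threshold = est.estimate
    mask = [abs(w) >= threshold for w in weights]  # keep roughly 10% of connections
    print(f"threshold ~ {threshold:.3f}, kept {sum(mask)} / {len(weights)}")
```

Because each update touches only a single scalar state, an estimator of this shape can be replicated per layer or per weight partition, which is consistent with the parallel-optimization claim above.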
Keywords
hardware accelerator,quantile estimation,sparse neural networks,parallel optimization