SGDAT: An Optimization Method for Binary Neural Networks

Gu Shan, Zhang Guoyin, Jia Chengwei, Wu Yanxia

Neurocomputing (2023)

Abstract
Stochastic gradient descent (SGD), one of the most popular neural network optimization algorithms, has a solid theoretical foundation as well as good generalization performance. However, vanilla SGD performs catastrophically in Binary Neural Networks (BNNs). Many studies have noted this phenomenon without explaining its causes in depth. In this paper, we experimentally investigate its possible causes and, by adjusting the training strategy, improve the performance of vanilla SGD in BNN training to a level comparable to Adam. We then propose a new optimization method for training deep neural networks (DNNs) with binary weights. In the proposed SGD with Adaptive Threshold, referred to as SGDAT, we suppress the frequency of weight flipping with thresholds and adjust each parameter's threshold according to its number of flips, further reducing network noise, stabilizing training, and improving the network's generalization ability. We also present a complete ablation study of the hyperparameter space and experimentally analyze the impact of adaptive thresholds. Furthermore, we conduct image classification experiments on the CIFAR10, CIFAR100, and TinyImageNet datasets using the BinaryNet and ResNet-18 network structures. The experiments show that SGDAT outperforms other binary optimizers. Code is available at: https://github.com/gushan/SGDAT.
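The abstract only sketches the mechanism behind SGDAT. Below is a minimal PyTorch sketch of how such a threshold-gated flip rule could look; the class skeleton, the hyperparameter names (base_threshold, threshold_growth), and the exact update rule are assumptions drawn from the description above, not the authors' released code (see the linked repository for the reference implementation).

```python
import torch

class SGDATSketch(torch.optim.Optimizer):
    """Hypothetical SGD variant with per-parameter adaptive flip thresholds.

    Assumes each parameter tensor holds binary weights in {-1, +1} and that
    a real-valued latent copy is kept in the optimizer state.
    """

    def __init__(self, params, lr=0.01, base_threshold=1e-3, threshold_growth=1e-4):
        defaults = dict(lr=lr, base_threshold=base_threshold,
                        threshold_growth=threshold_growth)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if len(state) == 0:
                    # Keep a real-valued latent weight and a per-weight flip counter.
                    state["latent"] = p.detach().clone()
                    state["flips"] = torch.zeros_like(p)
                    p.copy_(torch.sign(state["latent"]))  # start from binary weights

                latent = state["latent"]
                latent.add_(p.grad, alpha=-group["lr"])  # plain SGD on the latent copy

                # The threshold of each weight grows with its flip count, so
                # frequently oscillating weights become progressively harder to
                # flip (the "adaptive threshold" idea in the abstract).
                threshold = (group["base_threshold"]
                             + group["threshold_growth"] * state["flips"])

                # A binary weight flips only when its latent weight crosses
                # zero by more than the current threshold.
                proposed = torch.sign(latent)
                flip = (proposed != p) & (latent.abs() > threshold)
                state["flips"] += flip.float()
                p.copy_(torch.where(flip, proposed, p))
```

In a real BNN training loop the forward pass would use these binary weights directly. Whether SGDAT gates flips on the latent magnitude, the gradient magnitude, or an accumulated update is not specified by the abstract, so this sketch fixes one plausible choice.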
Keywords
Binary neural networks, Optimizers, Deep learning, Convolutional neural networks