Deep Learning with Low Precision by Half-wave Gaussian Quantization

arXiv (Cornell University), 2017

Citations: 546 | Views: 140
Abstract
The problem of quantizing the activations of a deep neural network is considered. An examination of the popular binary quantization approach shows that it amounts to approximating a classical non-linearity, the hyperbolic tangent, by two functions: a piecewise constant sign function, used in the feedforward network computations, and a piecewise linear hard tanh function, used in the backpropagation step during network learning. The problem of approximating the ReLU non-linearity, widely used in the recent deep learning literature, is then considered. A half-wave Gaussian quantizer (HWGQ) is proposed for the forward approximation and shown to have an efficient implementation, by exploiting the statistics of network activations and the batch normalization operations commonly used in the literature. To overcome the problem of gradient mismatch, due to the use of different forward and backward approximations, several piecewise backward approximators are then investigated. The resulting quantized network, denoted HWGQ-Net, is shown to achieve performance much closer to that of full-precision networks, such as AlexNet, ResNet, GoogLeNet and VGG-Net, than previously available low-precision networks, with 1-bit binary weights and 2-bit quantized activations.
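As a rough illustration of the forward/backward split described in the abstract, the sketch below (not the authors' code) applies an HWGQ-style quantizer to batch-normalized pre-activations in the forward pass and a piecewise-linear clipped-ReLU surrogate in the backward pass to mitigate gradient mismatch. The quantization levels here are illustrative placeholders; in the paper they would come from offline quantizer design on the half-wave Gaussian statistics of the activations.

```python
# Minimal sketch, assuming PyTorch and placeholder 2-bit quantization levels.
# Forward: half-wave quantization (negatives -> 0, positives -> nearest level).
# Backward: clipped-ReLU surrogate, one of the piecewise approximators the
# paper investigates to reduce forward/backward gradient mismatch.
import torch

class HWGQSketch(torch.autograd.Function):
    # Illustrative levels for roughly unit-variance (batch-normalized) inputs;
    # NOT the levels derived in the paper.
    LEVELS = torch.tensor([0.0, 0.5, 1.0, 1.5])
    CLIP = 1.5  # largest level: gradients pass only for 0 <= x <= CLIP

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        pos = x.clamp(min=0.0)                      # half-wave rectification
        levels = HWGQSketch.LEVELS.to(x.device)
        # Nearest-level assignment over the tiny codebook.
        dists = (pos.unsqueeze(-1) - levels) ** 2
        return levels[dists.argmin(dim=-1)]

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Piecewise-linear surrogate: gradient 1 on [0, CLIP], 0 elsewhere.
        mask = (x >= 0) & (x <= HWGQSketch.CLIP)
        return grad_out * mask.to(grad_out.dtype)

# Usage: quantize activations after batch normalization (output roughly N(0, 1)).
x = torch.randn(4, 8, requires_grad=True)
y = HWGQSketch.apply(x)
y.sum().backward()
```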
Keywords
forward approximation, network activations, piecewise backward approximators, low-precision networks, half-wave Gaussian quantization, deep neural network, piecewise constant sign function, feedforward network computations, network learning, half-wave Gaussian quantizer, quantized network, quantized activations, binary quantization approach, ReLU non-linearity, hyperbolic tangent, piecewise linear hard tanh function, batch normalization operations, HWGQ-Net