A Mathematical Approach Towards Quantization of Floating Point Weights in Low Power Neural Networks

Joydeep Kumar Devnath, Neelam Surana, Joycee Mekie

2020 33rd International Conference on VLSI Design and 2020 19th International Conference on Embedded Systems (VLSID), 2020

Abstract
Neural networks are both compute- and memory-intensive, and consume significant power during inference. Reducing the bit width of weights is one of the key techniques for making them power- and area-efficient without degrading performance. In this paper, we show that inference accuracy changes insignificantly even when floating-point weights are represented using 10 bits (fewer for certain networks) instead of 32 bits. We consider a set of 8 neural networks. Further, we propose a mathematical formula for finding the optimum number of bits required to represent the exponent of floating-point weights, below which accuracy drops drastically. We also show that the required mantissa width depends strongly on the number of layers of a neural network and provide a mathematical proof of this dependence. Our simulation results show that bit reduction yields better throughput, power efficiency, and area efficiency compared to models with full-precision weights.
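The paper's formula for the optimum exponent width is not reproduced here. As a purely illustrative sketch, the Python snippet below shows one way to emulate such a reduced floating-point weight representation by truncating the mantissa and clamping the exponent of an IEEE-754 float32 value. The function name quantize_float32 and the default split of 1 sign, 5 exponent, and 4 mantissa bits (10 bits total) are assumptions chosen only to match the 10-bit figure in the abstract, not the authors' method.

```python
import struct

def quantize_float32(w, exp_bits=5, mant_bits=4):
    """Illustrative sketch (assumed bit split, not the paper's formula):
    emulate a reduced float with `exp_bits` exponent bits and `mant_bits`
    mantissa bits, keeping the sign bit. Out-of-range exponents are clamped."""
    # Reinterpret the float32 as its 32-bit IEEE-754 pattern.
    bits = struct.unpack('>I', struct.pack('>f', w))[0]
    sign = bits >> 31
    exp = (bits >> 23) & 0xFF           # biased 8-bit exponent
    mant = bits & 0x7FFFFF              # 23-bit mantissa

    if exp == 0:                        # flush zeros/subnormals to zero
        return -0.0 if sign else 0.0

    # Re-bias the exponent for the reduced format and clamp to its normal range.
    new_bias = (1 << (exp_bits - 1)) - 1
    e_unb = exp - 127
    e_min, e_max = 1 - new_bias, (1 << exp_bits) - 2 - new_bias
    e_unb = max(e_min, min(e_max, e_unb))

    # Truncate the mantissa to the requested number of bits.
    mant >>= (23 - mant_bits)
    frac = 1.0 + mant / (1 << mant_bits)

    return (-1.0 if sign else 1.0) * frac * (2.0 ** e_unb)
```

Applying such a function element-wise to a trained network's weights before running inference is one way to estimate the accuracy impact of a given exponent/mantissa budget.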
Keywords
Convolutional neural network, energy-efficient neural network, MNIST, CIFAR10, ImageNet, quantization, deep learning