ADEPNET: A Dynamic-Precision Efficient Posit Multiplier for Neural Networks

Aditya Anirudh Jonnalagadda, Uppugunduru Anil Kumar, Rishi Thotli, Satvik Sardesai, Sreehari Veeramachaneni, Syed Ershad Ahmed

IEEE Access (2024)

Abstract
The posit number system aims to be a drop-in replacement for the existing IEEE floating-point standard. Its properties, tapered precision and a high dynamic range, allow a smaller posit to nearly match the representational accuracy of a much larger floating-point format. This is especially useful for error-tolerant workloads such as deep learning inference, where low latency and small area are priorities. Recent research has found that the accuracy of deep neural network models saturates beyond a certain level of precision in the multipliers used for convolutions, so the extra hardware cost of fully precise arithmetic circuits becomes unnecessary overhead for such applications. This paper explores approximate posit multipliers in the convolutional layers of deep neural networks and seeks an ideal balance between hardware utilization and inference accuracy. Posit multiplication involves several steps, of which mantissa multiplication consumes the most hardware resources. To mitigate this, a posit multiplier circuit is proposed that uses approximate hybrid-radix Booth encoding for mantissa multiplication, together with truncation and bit masking based on the input regime size. In addition, a novel Booth-encoding control scheme that prevents unnecessary bit switching has been devised to reduce dynamic power dissipation. Compared to existing designs, these optimizations contribute a 23% decrease in power dissipation in the mantissa multiplication stage. Further, a novel area- and energy-efficient decoder architecture has also been developed, with an 11% reduction in dynamic power dissipation and area compared to existing decoders. Overall, the proposed <16, 2> posit multiplier offers a 14% reduction in power-delay product (PDP) over existing approximate posit multiplier designs, and it achieves over 90% accuracy when inferencing deep learning models such as ResNet20, VGG-19 and DenseNet.
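For readers unfamiliar with the posit format, the sketch below is a minimal Python illustration (assumed for exposition, not the paper's hardware design) of decoding a posit<16, 2> word into its sign, regime, exponent and fraction fields. It shows why the variable-length regime run determines how many mantissa bits remain in the word, which is the property that regime-size-based truncation and bit masking exploit. Function and variable names are illustrative.

```python
# A minimal sketch (assumed, not the paper's RTL) of decoding a posit<16, 2>
# word into sign, regime, exponent and fraction fields. A longer regime run
# leaves fewer exponent/fraction bits in the word, which is what makes
# regime-size-aware truncation and bit masking possible.

N, ES = 16, 2            # posit<16, 2>: 16-bit word, 2 exponent bits
USEED = 1 << (1 << ES)   # useed = 2^(2^es) = 16

def decode_posit(word: int):
    """Return (sign, regime k, exponent, fraction, value) for a posit<16, 2> word."""
    word &= (1 << N) - 1
    if word == 0:
        return 0, 0, 0, 0.0, 0.0                  # zero
    if word == 1 << (N - 1):
        return 1, 0, 0, 0.0, float("nan")         # NaR (not a real)

    sign = word >> (N - 1)
    if sign:                                      # negative posits are decoded
        word = (-word) & ((1 << N) - 1)           # from their two's complement

    bits = [(word >> i) & 1 for i in range(N - 2, -1, -1)]   # drop the sign bit

    # Regime: a run of identical bits terminated by the opposite bit.
    run_bit, run_len = bits[0], 1
    while run_len < len(bits) and bits[run_len] == run_bit:
        run_len += 1
    k = (run_len - 1) if run_bit == 1 else -run_len

    rest = bits[run_len + 1:]                     # skip the regime terminator
    exp_bits = rest[:ES]
    exponent = 0
    for b in exp_bits:
        exponent = (exponent << 1) | b
    exponent <<= ES - len(exp_bits)               # missing exponent bits read as 0

    frac_bits = rest[ES:]                         # whatever remains is the fraction
    fraction = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(frac_bits))

    value = (-1.0) ** sign * USEED ** k * 2.0 ** exponent * (1.0 + fraction)
    return sign, k, exponent, fraction, value

# Example: 0x4000 decodes to 1.0; the longer the regime run, the fewer
# fraction bits survive for the mantissa multiplier to process.
print(decode_posit(0x4000))
```

Reading off the decoded value, (-1)^s · useed^k · 2^e · (1 + f), the "mantissa multiplication" stage the abstract refers to operates on the (1 + f) terms of the two operands, which is why approximating that step with hybrid-radix Booth encoding yields the bulk of the hardware savings.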
Keywords
Decoding, Hardware, Energy efficiency, Encoding, Power dissipation, Deep learning, Artificial neural networks, IEEE 754 Standard, Floating-point arithmetic, Approximate posit multipliers, Deep neural networks, Energy-efficient