DNN Acceleration: A High-Accuracy Implementation for Base-2 Softmax Layer

2023 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC), 2023

Abstract
This paper introduces a high-accuracy implementation of the softmax layer in Deep Neural Networks (DNNs) used in multi-category classification applications. Computing the exponentials and logarithms of the traditional base-e softmax is mathematically complex, which makes a high-accuracy hardware implementation hard to achieve without excessive resource consumption. Instead of using $e$ as the exponential base, this paper therefore presents a hardware implementation of the base-2 softmax function, which uses 2 as the exponential base. The complex operations of base-e softmax are thus replaced by simple shift and addition operations, with simpler LUTs and higher accuracy. The implemented hardware model relies on single-precision floating-point arithmetic cores; it achieves classification accuracy equal to 100% relative to a reference software model, and it has an area of 0.0802 mm² with a power consumption of 8.93 mW when synthesized in TSMC 28 nm CMOS technology at a frequency of 1 GHz.
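The base-2 substitution described in the abstract can be sketched numerically. The following is a minimal NumPy illustration, not the paper's hardware model: `np.exp2` stands in for the shift-and-LUT exponential unit, and the function names are hypothetical. Because 2^x = e^(x ln 2), the base-2 softmax is a softmax over rescaled logits, so the ranking of classes (and hence the predicted class) is unchanged.

```python
import numpy as np

def softmax_base_e(x):
    # Standard softmax with base e (numerically stabilized by
    # subtracting the maximum logit before exponentiation).
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def softmax_base_2(x):
    # Base-2 variant: replaces e^z with 2^z. In hardware, 2^z splits
    # into an integer part (a pure shift) and a fractional part served
    # by a small LUT; here np.exp2 stands in for that datapath.
    z = x - np.max(x)
    p = np.exp2(z)
    return p / p.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax_base_e(logits))
print(softmax_base_2(logits))
```

Both outputs are valid probability distributions that rank the classes identically, which is why swapping the base preserves classification accuracy even though the individual probabilities differ.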
Keywords
Base-2 Softmax Function, Convolutional Neural Network (CNN), Deep Neural Network (DNN), Symmetric-Mapping Look-Up-Table (SM-LUT)