Floating-Point Formats and Arithmetic for Highly Accurate Multi-Layer Perceptrons

2023 IEEE 23rd International Conference on Nanotechnology (NANO), 2023

Abstract
Data precision can significantly affect the accuracy and overhead metrics of hardware accelerators for applications such as artificial neural networks (ANNs). This paper evaluates the inference and training of multi-layer perceptrons (MLPs), first using the IEEE standard floating-point (FP) precisions (half, single, and double) separately and then comparing them with mixed-precision FP formats. Mixed-precision calculation is investigated for three critical propagation modules: activation functions, weight updates, and accumulation units. Compared with applying a single low-precision format throughout, the mixed-precision format prevents accuracy loss and the occurrence of overflow/underflow in the MLPs while potentially incurring less hardware overhead in terms of area and power. As multiply-accumulation is the dominant operation in modern ANNs, a fully pipelined hardware implementation of the fused multiply-add unit is proposed for different IEEE FP formats to achieve a very high operating frequency.
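The mixed-precision idea described in the abstract can be sketched in software. The following minimal Python/NumPy illustration (not taken from the paper; the function names mixed_precision_dot and fp16_dot are hypothetical) keeps operands in half precision but widens each product and the running sum to single precision, mirroring the role of a wider accumulation unit in a fused multiply-add datapath and showing how a pure half-precision sum can lose accuracy or overflow.

import numpy as np

def mixed_precision_dot(x_fp16, w_fp16):
    # FP16 operands, FP32 accumulator: each product is widened to single
    # precision before accumulation, analogous to an FMA unit with a wider
    # accumulator, which avoids overflow/underflow of a pure FP16 sum.
    acc = np.float32(0.0)
    for a, b in zip(x_fp16, w_fp16):
        acc = np.float32(acc + np.float32(a) * np.float32(b))
    return acc

def fp16_dot(x_fp16, w_fp16):
    # Reference path: everything kept in half precision.
    acc = np.float16(0.0)
    for a, b in zip(x_fp16, w_fp16):
        acc = np.float16(acc + a * b)
    return acc

rng = np.random.default_rng(0)
x = rng.standard_normal(4096).astype(np.float16)
w = rng.standard_normal(4096).astype(np.float16)
print(mixed_precision_dot(x, w))  # FP32-accumulated result
print(fp16_dot(x, w))             # FP16-only accumulation for comparison

Comparing the two outputs illustrates why the paper places the wider format in the accumulation unit rather than in the operand storage.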
Keywords
accumulation units,accuracy loss,activation functions,artificial neural networks,critical propagation modules,data precision,different IEEE FP formats,floating-point formats,floating-point precisions,fully pipelined hardware implementation,hardware accelerators,hardware overhead,high operating frequency,highly accurate multilayer perceptrons,inference,mixed-precision calculations,mixed-precision format,mixed-precision FP formats,MLPs,multiply-accumulation,overhead metrics,simple low-precision format,weight updates