A BF16 FMA is All You Need for DNN Training

IEEE Transactions on Emerging Topics in Computing (2022)

Abstract
Fused Multiply-Add (FMA) functional units are a fundamental hardware component for training Deep Neural Networks (DNNs). The silicon area of an FMA unit grows quadratically with the mantissa bit count of its number format, which has motivated the adoption of the BrainFloat16 format (BF16). BF16 features 1 sign bit, 8 exponent bits, and 7 explicit mantissa bits. Some approaches to training DNNs achieve significant performance benefits by using BF16. However, these approaches must combine BF16 with the standard IEEE 754 32-bit Floating-Point format (FP32) to reach state-of-the-art training accuracy, which limits the impact of adopting BF16. This article proposes the first approach able to train complex DNNs entirely using the BF16 format. We propose a new class of FMA operators, $\mathrm{FMA}^{\mathrm{bf16}}_{\mathrm{n}\_\mathrm{m}}$, that rely entirely on BF16 FMA hardware instructions and deliver the same accuracy as FP32. $\mathrm{FMA}^{\mathrm{bf16}}_{\mathrm{n}\_\mathrm{m}}$ operators achieve performance improvements in the 1.28-1.35× range on ResNet101 with respect to FP32, and enable training complex DNNs on simple low-end hardware devices without requiring expensive FP32 FMA functional units.
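To make the abstract's bit layout concrete, the C sketch below shows how an FP32 value can be rounded to BF16 (keeping the sign, the full 8-bit exponent, and the top 7 mantissa bits) and how a value can be split into two BF16-representable terms so that a product is recovered from several BF16 multiplications. This is only a minimal illustration of the general splitting idea behind BF16-only arithmetic, under the assumption that products of BF16 inputs are accumulated in FP32 as on typical BF16 FMA hardware; the function names are hypothetical and the paper's actual FMA^{bf16}_{n_m} operators are not reproduced here.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Round an FP32 value to BF16 (1 sign, 8 exponent, 7 explicit mantissa bits).
 * BF16 is the upper 16 bits of the FP32 encoding; NaN/Inf are not handled. */
static uint16_t fp32_to_bf16(float x) {
    uint32_t u;
    memcpy(&u, &x, sizeof u);
    u += 0x7FFFu + ((u >> 16) & 1u);   /* round to nearest, ties to even */
    return (uint16_t)(u >> 16);
}

static float bf16_to_fp32(uint16_t h) {
    uint32_t u = (uint32_t)h << 16;    /* dropped mantissa bits become zero */
    float x;
    memcpy(&x, &u, sizeof x);
    return x;
}

/* Two-term split: x ~= hi + lo, with hi and lo both representable in BF16.
 * Products of such terms can then be evaluated with BF16 multiplies that
 * accumulate in FP32, approaching FP32 accuracy. */
static void split_bf16(float x, float *hi, float *lo) {
    *hi = bf16_to_fp32(fp32_to_bf16(x));
    *lo = bf16_to_fp32(fp32_to_bf16(x - *hi));
}

int main(void) {
    float a = 1.2345678f, b = 7.6543210f;
    float ah, al, bh, bl;
    split_bf16(a, &ah, &al);
    split_bf16(b, &bh, &bl);

    float one_term   = ah * bh;                       /* single BF16 product   */
    float three_term = ah * bh + ah * bl + al * bh;   /* three BF16 products   */
    printf("fp32 product : %.9f\n", (double)(a * b));
    printf("bf16 x1      : %.9f\n", (double)one_term);
    printf("bf16 x3      : %.9f\n", (double)three_term);
    return 0;
}

Running this prints the exact FP32 product next to the one-product and three-product BF16 approximations; the three-product version recovers most of the mantissa bits lost by a single BF16 multiply, which is the intuition behind trading several cheap BF16 FMA instructions for one expensive FP32 FMA unit.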
Keywords
Neural nets, machine learning, reduced precision, FMA operators, BF16, FP32, swamping, computer arithmetic, emulation, hardware