FlexBlock: A Flexible DNN Training Accelerator With Multi-Mode Block Floating Point Support

IEEE Transactions on Computers (2023)

Abstract
When training deep neural networks (DNNs), expensive floating point arithmetic units are used in GPUs or custom neural processing units (NPUs). To reduce the burden of floating point arithmetic, the community has started exploring more efficient data representations, e.g., block floating point (BFP). The BFP format allows a group of values to share an exponent, which effectively reduces the memory footprint and enables cheaper fixed point arithmetic for multiply-accumulate (MAC) operations. However, existing BFP-based DNN accelerators target a specific precision, making them less versatile. In this paper, we present FlexBlock, a DNN training accelerator with three BFP modes, possibly different among activation, weight, and gradient tensors. By configuring FlexBlock to a lower BFP precision, the number of MACs handled by the core increases by up to 4x in 8-bit mode or 16x in 4-bit mode compared to 16-bit mode. To reach this theoretical upper bound, FlexBlock maximizes core utilization across precision levels and layer types, and allows dynamic precision control to keep throughput at its peak without sacrificing training accuracy. We evaluate the effectiveness of FlexBlock using representative DNNs on the CIFAR, ImageNet, and WMT14 datasets. As a result, training on FlexBlock improves training speed by 1.5x to 5.3x and energy efficiency by 2.4x to 7.0x compared to other training accelerators.
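To make the BFP idea described above concrete, here is a minimal sketch of block floating point quantization, assuming a simple scheme in which each block of values shares one exponent and the per-value mantissas are stored as fixed-point integers. The function names and bit-allocation details are illustrative assumptions, not the paper's actual hardware format.

```python
import math

def bfp_quantize(block, mantissa_bits):
    """Quantize a block of floats so all values share a single exponent.

    Returns (shared_exponent, list of signed integer mantissas).
    Illustrative sketch; not the FlexBlock hardware format.
    """
    max_abs = max(abs(v) for v in block)
    if max_abs == 0.0:
        return 0, [0] * len(block)
    # Pick the shared exponent from the largest magnitude in the block,
    # so that value fits in the signed mantissa range.
    shared_exp = math.frexp(max_abs)[1]  # max_abs = m * 2**shared_exp, 0.5 <= m < 1
    scale = 2.0 ** (mantissa_bits - 1 - shared_exp)
    limit = 2 ** (mantissa_bits - 1) - 1
    mantissas = [max(-limit, min(limit, round(v * scale))) for v in block]
    return shared_exp, mantissas

def bfp_dequantize(shared_exp, mantissas, mantissa_bits):
    """Reconstruct approximate float values from a BFP block."""
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    return [m * scale for m in mantissas]
```

Because every value in a block shares the exponent, a dot product between two BFP blocks reduces to integer MACs on the mantissas plus a single exponent addition per block, which is what makes the fixed-point datapath cheap.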
Keywords
Training, Tensors, Hardware, Arithmetic, Parallel processing, Deep learning, Scalability, Block floating point, DNN training accelerator, low precision training, precision scalability