Evaluating Mixed-Precision Arithmetic for 3D Generative Adversarial Networks to Simulate High Energy Physics Detectors

2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), 2020

Abstract
Several hardware companies are proposing native Brain Float 16-bit (BF16) support for neural network training. The use of Mixed Precision (MP) arithmetic, combining floating-point 32-bit (FP32) with 16-bit half-precision formats, aims to improve memory and floating-point operation throughput, allowing faster training of bigger models. This paper proposes a binary analysis tool that enables the emulation of lower-precision numerical formats in neural network implementations without the need for hardware support. The tool is used to analyze BF16 usage in the training phase of a 3D Generative Adversarial Network (3DGAN) simulating High Energy Physics detectors. The binary tool allows us to confirm that BF16 provides results with accuracy similar to that of the full-precision 3DGAN version and of the costly reference numerical simulation, which uses double-precision arithmetic.
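The abstract does not describe how the binary tool performs the emulation. A common software approach, shown here as a minimal NumPy sketch (the function name and the round-to-nearest-even policy are assumptions, not details taken from the paper), is to keep FP32 storage but round away the low 16 bits of each value, so that every number carries only BF16 precision.

```python
import numpy as np

def emulate_bf16(x: np.ndarray) -> np.ndarray:
    """Illustrative sketch: round FP32 values to BF16 precision
    (round-to-nearest-even) while keeping FP32 storage."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    # BF16 keeps the top 16 bits of an FP32 word (1 sign, 8 exponent, 7 mantissa bits).
    # Add a rounding bias before truncating the low 16 bits (ties to even).
    rounding_bias = ((bits >> 16) & np.uint32(1)) + np.uint32(0x7FFF)
    rounded = (bits + rounding_bias) & np.uint32(0xFFFF0000)
    return rounded.view(np.float32)

# Example: the rounded weights differ from the originals only beyond
# BF16's ~3 significant decimal digits.
w = np.random.randn(4).astype(np.float32)
print(w)
print(emulate_bf16(w))
```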
Keywords
Reduced Precision, Brain Float 16 (BF16), Mixed Precision (MP), 3DGAN, Binary Analysis Tool, High Energy Physics, Generative Adversarial Networks