Posit Arithmetic for the Training and Deployment of Generative Adversarial Networks

DATE 2021

Cited by 9 | Viewed 18
Abstract
This paper proposes a set of methods that enables low-precision posit™ arithmetic to be used successfully for training generative adversarial networks (GANs) with minimal quality loss. We show that ultra-low-precision posits, as small as 6 bits, can achieve high-quality output in the generation phase after training. We also evaluate floating-point (float) formats and compare them to 8-bit posits in the context of GAN training. Our scaling and adaptive calibration techniques produce training quality for 8-bit posits that surpasses 8-bit floats and matches 16-bit floats. Hardware simulation results indicate that our methods achieve higher energy efficiency than both 16- and 8-bit float training systems.
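For readers unfamiliar with the format, the sketch below decodes a standard posit (sign, regime, exponent, and fraction fields) into a Python float. It is an illustrative reference decoder under general posit-standard assumptions, not the paper's implementation; the bit width `n` and exponent-field size `es` are free parameters here, and the exact configurations the authors use for their 6- and 8-bit posits are not specified in this abstract.

```python
def decode_posit(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits into a float (illustrative sketch)."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")  # NaR (Not a Real), mapped to NaN here
    sign = bits >> (n - 1)
    if sign:
        bits = (-bits) & mask  # negative posits are two's-complement negated
    # Regime: run of identical bits after the sign bit; run length sets k.
    i = n - 2
    first = (bits >> i) & 1
    run = 0
    while i >= 0 and ((bits >> i) & 1) == first:
        run += 1
        i -= 1
    k = run - 1 if first == 1 else -run
    i -= 1  # skip the regime terminator bit, if present
    # Exponent: up to es bits; missing (hidden) bits are treated as 0.
    exp = 0
    for _ in range(es):
        exp <<= 1
        if i >= 0:
            exp |= (bits >> i) & 1
            i -= 1
    # Fraction: remaining bits with an implicit leading 1.
    frac, scale = 1.0, 0.5
    while i >= 0:
        frac += scale * ((bits >> i) & 1)
        scale /= 2
        i -= 1
    # value = useed^k * 2^exp * (1 + fraction), with useed = 2^(2^es)
    value = (2 ** (2 ** es)) ** k * 2 ** exp * frac
    return -value if sign else value


if __name__ == "__main__":
    assert decode_posit(0b01000000) == 1.0   # posit8 (es=1) encoding of 1.0
    assert decode_posit(0b01100000) == 4.0   # useed = 2^(2^es) = 4, regime k = 1
    assert decode_posit(0b11000000) == -1.0  # two's-complement negative
```

The regime field is what gives posits their tapered precision: values near 1.0 get short regimes and hence more fraction bits, while very large or small magnitudes trade fraction bits for dynamic range, which is one reason formats as narrow as 6 or 8 bits remain usable for neural network workloads.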
Keywords
Posit Arithmetic, GAN, Neural Networks