A low power neural network training processor with 8-bit floating point with a shared exponent bias and fused multiply add trees

2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2022

Abstract
This demonstration showcases a neural network training processor fabricated in 40 nm LP CMOS technology. Based on a custom 8-bit floating-point format with a shared exponent bias and an efficient tree-based processing scheme and dataflow, the processor achieves 2.48× higher energy efficiency than a prior low-power neural network training processor.
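
To illustrate the two ideas named in the title, below is a minimal sketch of decoding an 8-bit float whose exponent bias is shared across a tensor, and of a dot product computed with a fused multiply-add tree. The 1-4-3 sign/exponent/mantissa split, the function names, and the use of full-precision accumulation at the tree nodes are assumptions for illustration, not details taken from the paper.

```python
def decode_fp8(byte: int, shared_bias: int) -> float:
    """Decode one 8-bit float (assumed 1-4-3 bit split) using a
    tensor-wide shared exponent bias instead of a fixed bias."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0xF          # 4-bit exponent field (assumed)
    man = byte & 0x07                # 3-bit mantissa field (assumed)
    if exp == 0:                     # subnormal: no implicit leading one
        return sign * (man / 8.0) * 2.0 ** (1 - shared_bias)
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - shared_bias)

def fma_tree_dot(a: list[int], b: list[int],
                 bias_a: int, bias_b: int) -> float:
    """Dot product via a balanced adder tree: 8-bit multiplies at the
    leaves, wider-precision accumulation (here Python floats) up the
    tree, so no intermediate product is rounded back to 8 bits."""
    partial = [decode_fp8(x, bias_a) * decode_fp8(y, bias_b)
               for x, y in zip(a, b)]
    while len(partial) > 1:          # pairwise reduction, depth log2(n)
        if len(partial) % 2:
            partial.append(0.0)
        partial = [partial[i] + partial[i + 1]
                   for i in range(0, len(partial), 2)]
    return partial[0] if partial else 0.0
```

Sharing one exponent bias per tensor lets the narrow 4-bit exponent field track the tensor's dynamic range, which is one common motivation for custom FP8 formats in training hardware.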
Keywords
DNN Training Accelerators, VLSI, Integrated