BFP for DNN Architectures

Synthesis Lectures on Engineering, Science, and Technology (2023)

Abstract
This chapter explores the block floating point (BFP) number system as an alternative to the traditional fixed point (FXP) and floating point (FLP) number systems in deep neural network (DNN) implementations. We begin with an overview of BFP and explain how it differs from FLP and FXP. Next, we discuss the factors that affect the performance of BFP in DNN acceleration, drawing on the existing literature to identify best practices for optimizing it. Finally, we present quantitative comparisons of works that have applied BFP to DNNs, demonstrating its potential as an effective tool for accelerating DNN computation.
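To make the core idea concrete, the sketch below (not from the chapter; the function and parameter names such as `bfp_quantize`, `block_size`, and `mantissa_bits` are illustrative assumptions) shows the basic BFP scheme: each block of values shares a single exponent derived from the block's largest magnitude, while individual values keep only a low-bitwidth integer mantissa.

```python
import numpy as np

def bfp_quantize(x, block_size=16, mantissa_bits=8):
    """Illustrative block floating point quantization (round-trip).

    Each block of `block_size` values shares one exponent, chosen so the
    block's largest magnitude fits; each value keeps a signed
    `mantissa_bits`-bit integer mantissa. Returns the dequantized array
    so the quantization error can be inspected.
    """
    x = np.asarray(x, dtype=np.float64)
    pad = (-len(x)) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    # Shared exponent per block: smallest power of two covering the block.
    max_mag = np.max(np.abs(blocks), axis=1, keepdims=True)
    exponents = np.where(max_mag > 0, np.ceil(np.log2(max_mag + 1e-30)), 0)

    # Scale so mantissas fit in [-2^(m-1), 2^(m-1) - 1], then round and clip.
    scale = 2.0 ** (exponents - (mantissa_bits - 1))
    mantissas = np.clip(np.round(blocks / scale),
                        -2 ** (mantissa_bits - 1),
                        2 ** (mantissa_bits - 1) - 1)

    # Dequantize: mantissa times the block's shared scale.
    return (mantissas * scale).reshape(-1)[:len(x)]

# Example: quantization error on random weights.
w = np.random.randn(64)
print(np.abs(w - bfp_quantize(w)).max())
```

The sketch also exposes the key trade-off the chapter's performance discussion revolves around: values much smaller than their block's maximum lose precision because they must share its exponent, which is why block size and blocking strategy matter for DNN accuracy.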
Keywords
DNN architectures