Towards Power Efficient DNN Accelerator Design on Reconfigurable Platform

arXiv (Cornell University), 2021

Abstract
The rapid proliferation of Field Programmable Gate Arrays (FPGAs) has accelerated research into hardware implementations of Deep Neural Networks (DNNs). Among DNN processors, domain-specific architectures such as Google's Tensor Processing Unit (TPU) have outperformed conventional GPUs. However, TPU implementations on reconfigurable hardware should emphasize energy savings to serve green computing requirements. Voltage scaling, a popular approach to energy savings, can be critical in FPGAs because it may cause timing failures if not done appropriately. In this work, we present an ultra-low-power FPGA implementation of a TPU for edge applications. We divide the TPU's systolic array into different FPGA partitions, where each partition runs its FPGA cores at a different near-threshold computing (NTC) biasing voltage. The biasing voltage for each partition is first estimated coarsely by the proposed static schemes and then calibrated further by the proposed runtime scheme. Four clustering algorithms based on the minimum slack value of the design paths of the Multiply-Accumulate units (MACs) govern the FPGA partitioning. To avoid the timing failures caused by NTC, MACs with higher minimum slack are placed in lower-voltage partitions, while MACs with lower minimum slack are placed in higher-voltage partitions. The proposed architecture is simulated on a commercial platform, Vivado with a Xilinx Artix-7 FPGA, and on the academic platform VTR with 22nm, 45nm, and 130nm FPGAs. The simulation results substantiate the feasibility of a voltage-scaled TPU on FPGAs and demonstrate its power efficiency.
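The core idea in the abstract is slack-driven assignment: MACs whose worst-case paths have more timing slack can tolerate a lower near-threshold bias voltage. The following minimal Python sketch illustrates that assignment idea only; the function name, data structures, cluster sizing, and voltage values are hypothetical assumptions for illustration, not the paper's actual clustering algorithms.

```python
# Illustrative sketch (not the authors' implementation): bucket MACs into
# voltage partitions by their minimum path slack. MACs with larger slack
# tolerate more voltage scaling, so they go to lower-voltage partitions.

def assign_voltage_partitions(mac_min_slack, partition_voltages):
    """mac_min_slack: {mac_id: minimum slack in ns} (assumed input format).
    partition_voltages: list of NTC bias voltages, one per partition.
    Returns {mac_id: assigned bias voltage}."""
    k = len(partition_voltages)
    # Highest slack first; lowest biasing voltage first.
    macs = sorted(mac_min_slack, key=mac_min_slack.get, reverse=True)
    voltages = sorted(partition_voltages)
    size = -(-len(macs) // k)  # ceiling division: equal-sized clusters
    assignment = {}
    for i, mac in enumerate(macs):
        assignment[mac] = voltages[min(i // size, k - 1)]
    return assignment

# Example with four MACs and two bias levels (volts, illustrative values):
slacks = {"mac0": 0.42, "mac1": 0.10, "mac2": 0.35, "mac3": 0.05}
print(assign_voltage_partitions(slacks, [0.55, 0.70]))
# High-slack mac0/mac2 -> 0.55 V partition; low-slack mac1/mac3 -> 0.70 V.
```

The paper proposes four clustering algorithms plus static and runtime voltage calibration; this sketch only captures the monotone slack-to-voltage ordering they share.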
Keywords
reconfigurable