STANN - Synthesis Templates for Artificial Neural Network Inference and Training

Advances in Computational Intelligence, IWANN 2023, Part I (2023)

Abstract
While Deep Learning accelerators have been a research area of high interest, the focus has usually been on monolithic accelerators for the inference of large CNNs. Only recently have accelerators for neural network training started to gain more attention. STANN is a template library that enables quick and efficient FPGA-based implementations of neural networks via high-level synthesis. It supports both inference and training, making it applicable to domains such as deep reinforcement learning. Its templates are highly configurable and can be composed in different ways to create different hardware architectures. The evaluation compares different accelerator architectures implemented with STANN to showcase STANN's flexibility. A Xilinx Alveo U50 and a Xilinx Versal ACAP development board are used as the hardware platforms for the evaluation. The results show that the new Versal architecture is very promising for neural network training due to its improved support for floating-point calculations.
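To illustrate the kind of template-based, high-level-synthesis approach the abstract describes, the sketch below shows a hypothetical HLS-style C++ function template for a fully-connected layer. The names, signature, and pragmas are illustrative assumptions and are not STANN's actual API; the point is that layer dimensions become compile-time template parameters, which lets the synthesis tool unroll and pipeline the loops for a specific network configuration.

```cpp
#include <cstddef>

// Hypothetical sketch (not STANN's actual API): a templated
// fully-connected layer with ReLU activation in HLS-style C++.
// IN and OUT are compile-time sizes, so the HLS tool can size
// buffers and pipeline the loops for this exact layer shape.
template <typename T, std::size_t IN, std::size_t OUT>
void dense_relu_forward(const T input[IN],
                        const T weights[OUT][IN],
                        const T bias[OUT],
                        T output[OUT]) {
    for (std::size_t o = 0; o < OUT; ++o) {
#pragma HLS PIPELINE II=1
        T acc = bias[o];
        for (std::size_t i = 0; i < IN; ++i) {
            acc += weights[o][i] * input[i];  // multiply-accumulate
        }
        output[o] = (acc > T(0)) ? acc : T(0);  // ReLU
    }
}
```

Composing several such templates with different sizes and data types (e.g. float on the Versal ACAP, fixed-point on the Alveo U50) is one plausible way the "different hardware architectures" mentioned in the abstract could be assembled.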
Keywords
Deep Learning, FPGA, Hardware Accelerators