Deep Neural Network Model and FPGA Accelerator Co-Design: Opportunities and Challenges

2018 14th IEEE International Conference on Solid-State and Integrated Circuit Technology (ICSICT), 2018

Abstract
With the explosive growth of neural network algorithms, high-performance implementations on hardware platforms such as GPUs and FPGAs are becoming critical as well. Compared to widely used GPUs, FPGAs are considered harder to design and optimize for, even with the help of High Level Synthesis (HLS) tools. However, recent studies have shown that FPGAs can outperform GPUs in both speed and power/energy efficiency, and both factors are important in machine learning applications. In this paper, we discuss a simultaneous DNN and hardware accelerator co-design method to push DNN performance on FPGAs. We first summarize existing techniques and results along this direction, then propose new ideas to further improve DNN development productivity and design quality. Finally, we discuss the challenges we would face and propose some potential solutions.
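To illustrate the kind of joint search a DNN/accelerator co-design flow performs, the sketch below couples a candidate DNN configuration space with a candidate FPGA accelerator configuration space and picks the most accurate pair that fits latency and resource budgets. This is a minimal, hypothetical sketch: the accuracy proxy, latency model, DSP-usage model, parameter names, and the 200 MHz clock assumption are all illustrative placeholders, not the estimators or search method used in the paper.

```python
# Hypothetical sketch of joint DNN / FPGA-accelerator design-space search.
# All cost models and candidate spaces below are illustrative placeholders.
from dataclasses import dataclass
from itertools import product

@dataclass
class DNNConfig:
    depth: int      # number of conv layers (assumed knob)
    width: int      # channels per layer (assumed knob)

@dataclass
class AccelConfig:
    pe_rows: int    # processing-element array rows (assumed knob)
    pe_cols: int    # processing-element array columns (assumed knob)

def accuracy_proxy(dnn: DNNConfig) -> float:
    """Placeholder accuracy estimate: deeper/wider models score higher."""
    return 1.0 - 1.0 / (dnn.depth * dnn.width) ** 0.5

def latency_estimate(dnn: DNNConfig, acc: AccelConfig) -> float:
    """Toy latency model: total MACs divided by PE-array throughput."""
    macs = dnn.depth * dnn.width ** 2 * 1e4           # rough operation count
    return macs / (acc.pe_rows * acc.pe_cols * 200e6)  # 200 MHz clock assumed

def dsp_usage(acc: AccelConfig) -> int:
    """Toy resource model: one DSP slice per processing element."""
    return acc.pe_rows * acc.pe_cols

def co_design(latency_budget_s: float, dsp_budget: int):
    """Return the most accurate (DNN, accelerator) pair meeting both budgets."""
    dnn_space = [DNNConfig(d, w) for d, w in product([4, 8, 16], [32, 64, 128])]
    acc_space = [AccelConfig(r, c) for r, c in product([8, 16, 32], repeat=2)]
    best = None
    for dnn, acc in product(dnn_space, acc_space):
        if dsp_usage(acc) > dsp_budget:
            continue
        if latency_estimate(dnn, acc) > latency_budget_s:
            continue
        score = accuracy_proxy(dnn)
        if best is None or score > best[0]:
            best = (score, dnn, acc)
    return best

if __name__ == "__main__":
    # Example: 5 ms latency budget, 900 DSP slices available.
    print(co_design(latency_budget_s=5e-3, dsp_budget=900))
```

The key design point the sketch captures is that the DNN and the accelerator are evaluated together: a model is only kept if some accelerator configuration can serve it within the hardware budgets, rather than fixing one side and optimizing the other in isolation.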
Keywords
optimization,High Level Synthesis tools,hardware accelerator co-design method,neural network model,FPGA accelerator co-design,neural network algorithms,FPGA,GPU,machine learning