FPGA/DNN Co-Design: An Efficient Design Methodology for IoT Intelligence on the Edge

Proceedings of the 2019 56th ACM/EDAC/IEEE Design Automation Conference (DAC), 2019

Cited by 193 | Viewed 477
Abstract
While embedded FPGAs are attractive platforms for DNN acceleration on edge devices due to their low latency and high energy efficiency, the limited resources of edge-scale FPGA devices make DNN deployment challenging. In this paper, we propose a simultaneous FPGA/DNN co-design methodology with both bottom-up and top-down approaches: a bottom-up, hardware-oriented DNN model search for high accuracy, and a top-down FPGA accelerator design that accounts for DNN-specific characteristics. We also build an automatic co-design flow, including an Auto-DNN engine that performs the hardware-oriented DNN model search and an Auto-HLS engine that generates synthesizable C code of the FPGA accelerator for the explored DNNs. We demonstrate our co-design approach on an object detection task using a PYNQ-Z1 FPGA. Results show that our proposed DNN model and accelerator outperform state-of-the-art FPGA designs in all aspects, including Intersection-over-Union (IoU) (6.2% higher), frames per second (FPS) (2.48x higher), power consumption (40% lower), and energy efficiency (2.5x higher). Compared to GPU-based solutions, our designs deliver similar accuracy but consume far less energy.
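The core idea of the co-design methodology — a bottom-up model search pruned by a top-down hardware feasibility check — can be illustrated with a minimal sketch. Everything below is hypothetical: the function names, the toy latency and accuracy models, and the candidate encoding as (depth, width) pairs are illustrative assumptions, not the paper's actual Auto-DNN/Auto-HLS algorithms.

```python
# Hypothetical sketch of a hardware-aware model search in the spirit of the
# abstract's Auto-DNN / Auto-HLS loop. The cost models are toy stand-ins,
# not the paper's estimators.

def estimate_latency(depth, width, budget_dsp):
    """Toy latency proxy: cost grows with layers * channels, DSPs amortize it."""
    return depth * width / budget_dsp

def estimate_accuracy(depth, width):
    """Toy accuracy proxy: larger models score higher, with diminishing returns."""
    return 1.0 - 1.0 / (1.0 + 0.1 * depth * width)

def co_design_search(candidates, budget_dsp, latency_target):
    """Bottom-up accuracy search, pruned by a top-down hardware estimate."""
    best, best_acc = None, -1.0
    for depth, width in candidates:
        # Top-down check: discard models the accelerator cannot serve in time.
        if estimate_latency(depth, width, budget_dsp) > latency_target:
            continue
        # Bottom-up check: among feasible models, keep the most accurate one.
        acc = estimate_accuracy(depth, width)
        if acc > best_acc:
            best, best_acc = (depth, width), acc
    return best, best_acc

# Example: 220 DSPs roughly matches an edge-scale part like the PYNQ-Z1's Zynq-7020.
candidates = [(4, 16), (8, 32), (16, 64), (32, 128)]
best, acc = co_design_search(candidates, budget_dsp=220, latency_target=1.5)
```

In this toy run, the two largest candidates are rejected by the latency check, and the search returns the most accurate model that still fits the hardware budget — the same feasibility-first pruning the abstract describes.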
Keywords
hardware-oriented DNN model search, FPGA accelerator design, DNN-specific characteristics, automatic co-design flow, Auto-DNN engine, Auto-HLS engine, PYNQ-Z1 FPGA, DNN acceleration, edge devices, edge-scale FPGA devices, IoT intelligence, FPGA/DNN co-design