High-speed BNN Design in HLS with Optimized Classification and Computation Method

2022 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia)(2022)

Abstract
With the development of computer vision and artificial intelligence, many neural networks are being studied. FPGAs are also used in many fields of modern society owing to their versatility. While FPGA resources are limited, neural networks keep growing and their computational cost keeps increasing. Therefore, even at a slightly lower accuracy, more focus is placed on hardware resource usage and speed. In this paper, latency is reduced by 29% compared with the conventional design by applying pipelining and loop flattening to optimize the computation. In addition, while staying within the hardware resources available on the FPGA, power consumption and accuracy are kept similar to those of the conventional BNN.
Keywords
Computer vision, Neural Network, Convolutional Neural Network (CNN), Binary Neural Network (BNN), FPGA