FPGA Optimized Architecture of XNOR-POPCOUNT

2023 2nd International Conference on Computing, Communication, Perception and Quantum Technology (CCPQT), 2023

Abstract
A binarized neural network (BNN) is a neural network whose input activations and weights are quantized to 1 bit, giving it far fewer parameters than other neural networks. Moreover, the multiply-accumulate operations of its convolutional and fully connected layers can be replaced by XNOR-POPCOUNT operations. The FPGA has become a major hardware deployment platform for BNNs thanks to its programmability and parallelism; however, the FPGA's basic logic cell cannot directly complete the XNOR-POPCOUNT computation. This paper proposes an architecture that saves a large number of look-up tables (LUTs) and improves logic-cell efficiency by allowing each LUT to complete more logical operations. Compared with other architectures, the proposed architecture saves 8%-32% of logic resources.
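To illustrate the substitution the abstract refers to, the following is a minimal software sketch (not the paper's FPGA architecture) of how a ±1 dot product reduces to XNOR plus POPCOUNT. It assumes the common bit encoding where a set bit represents +1 and a clear bit represents -1; the function name and encoding are illustrative, not taken from the paper.

```python
def bnn_dot(a: int, b: int, n: int) -> int:
    """Dot product of two n-element vectors over {+1, -1},
    each packed into an integer (bit 1 -> +1, bit 0 -> -1).
    Encoding and helper name are illustrative assumptions."""
    mask = (1 << n) - 1
    xnor = ~(a ^ b) & mask          # bit set wherever the two elements agree
    matches = bin(xnor).count("1")  # POPCOUNT: number of agreeing positions
    # agreeing positions contribute +1, disagreeing ones -1:
    return matches - (n - matches)  # = 2 * matches - n

# e.g. a = (+1, +1, -1, -1), b = (+1, -1, +1, -1):
# two matches, two mismatches, so the dot product is 0
print(bnn_dot(0b0011, 0b0101, 4))
```

On FPGA, the XNOR and the popcount adder tree are what the convolution datapath spends its LUTs on, which is why reducing LUTs per XNOR-POPCOUNT directly shrinks the whole design.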
Keywords
XNOR-POPCOUNT, FPGA, logic cell, LUT, BNN