FBNA: A Fully Binarized Neural Network Accelerator

2018 28th International Conference on Field Programmable Logic and Applications (FPL), 2018

Abstract
In recent research, the binarized neural network (BNN) has been proposed to address the massive computation and large memory footprint of the convolutional neural network (CNN). Several works have designed specific BNN accelerators and shown very promising results. Nevertheless, in their architectures only part of the neural network is binarized, so the benefits of binary operations are not fully exploited. In this work, we propose the first fully binarized convolutional neural network accelerator (FBNA) architecture, in which all convolutional operations are binarized and unified, including even the first layer and padding. The fully unified architecture provides more opportunities for resource, parallelism, and scalability optimization. Compared with the state-of-the-art BNN accelerator, our evaluation shows 3.1x performance, 5.4x resource efficiency, and 4.9x power efficiency on CIFAR-10.
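The core saving the abstract alludes to comes from replacing multiply-accumulate with bitwise operations: when weights and activations are constrained to ±1 and packed as bits (1 for +1, 0 for -1), a dot product reduces to XNOR followed by popcount. The following is a minimal illustrative sketch of that identity, not code from the FBNA paper; the function names and bit-packing convention are assumptions for illustration.

```python
def pack_bits(vec):
    """Pack a list of +1/-1 values into an integer bitmask (+1 -> 1, -1 -> 0)."""
    bits = 0
    for i, v in enumerate(vec):
        if v == 1:
            bits |= 1 << i
    return bits

def bin_dot(a_bits, b_bits, n):
    """Dot product of two n-element +/-1 vectors given as packed bitmasks.

    XNOR marks positions where the signs agree; each agreement contributes +1
    and each disagreement -1, so dot = popcount - (n - popcount) = 2*popcount - n.
    """
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # mask off bits above position n
    return 2 * bin(xnor).count("1") - n

a = [+1, -1, +1, -1]
b = [+1, +1, -1, -1]
print(bin_dot(pack_bits(a), pack_bits(b), len(a)))  # same as sum(x*y for x, y in zip(a, b))
```

On an FPGA the XNOR is a single LUT-level operation and the popcount an adder tree, which is why binarizing every layer, as FBNA does, multiplies the achievable parallelism.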
Keywords
BNN,Accelerator,FPGA,CNN