R2CNN - Recurrent Residual Convolutional Neural Network on FPGA.

FPGA (2020)

Abstract
Over the past years, convolutional neural networks (CNNs) have evolved from simple feed-forward architectures to deep and residual (skip-connection) architectures, demonstrating increasingly higher object categorization accuracy and increasingly better explanatory power of both neural and behavioral responses. However, from a neuroscientist's point of view, the correspondence between such deep architectures and the ventral visual pathway remains incomplete. For example, current state-of-the-art CNNs appear to be too complex (e.g., now over 100 layers for ResNet) compared with the relatively shallow cortical hierarchy (4-8 layers). We introduce new CNNs with shallow recurrent architectures and skip connections that require fewer parameters. To achieve higher classification accuracy, we propose an architecture for a recurrent residual convolutional neural network (R2CNN) on FPGA that efficiently utilizes on-chip memory bandwidth. We propose an Output-Kernel-Input-Parallel (OKIP) convolution circuit for the recurrent residual convolution stage. We implement the inference hardware on a Xilinx ZCU104 evaluation board with high-level synthesis. Our R2CNN accelerator achieves a top-5 accuracy of 90.08% on the ImageNet benchmark, which is higher than conventional FPGA implementations.
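The key architectural idea in the abstract, a shallow stage whose convolution weights are reused across recurrent time steps and whose input is carried around via a skip connection, can be sketched as follows. This is only an illustrative model of a recurrent residual stage, not the paper's OKIP hardware circuit; the function names, the number of recurrent steps, and the single-channel simplification are all assumptions.

```python
import numpy as np

def conv2d_same(x, k):
    # Naive single-channel 2-D convolution with zero padding ("same" size),
    # standing in for the hardware convolution stage.
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def recurrent_residual_stage(x, k, steps=3):
    # Apply the SAME kernel repeatedly (weight sharing across recurrent
    # time steps keeps the parameter count low), then add the stage
    # input back as the residual (skip) connection.
    h = x
    for _ in range(steps):
        h = np.maximum(conv2d_same(h, k), 0.0)  # conv + ReLU, shared weights
    return h + x
```

Because the kernel `k` is reused at every step, a stage unrolled over `steps` time steps has the effective depth of several convolution layers while storing only one set of weights, which is what makes such recurrent stages attractive for on-chip FPGA memory.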