LP-BNN: Ultra-low-Latency BNN Inference with Layer Parallelism

2019 IEEE 30th International Conference on Application-specific Systems, Architectures and Processors (ASAP)

Abstract
High inference latency severely limits the deployment of DNNs in real-time domains such as autonomous driving, robotic control, and others. To address this emerging challenge, researchers have proposed approximate DNNs with reduced precision, e.g., Binarized Neural Networks (BNNs). While BNNs can be built with little loss in accuracy, their inference latency still leaves much room for improvement. In this paper, we propose LP-BNN, a single-FPGA-based BNN accelerator that achieves microsecond-level, ultra-low-latency inference on ImageNet. We obtain this performance through several design optimizations. First, we optimize the network structure by removing Batch Normalization (BN) functions, which contribute significant latency in BNNs, without any loss of accuracy. Second, we propose a parameterized architecture based on layer parallelism that supports nearly perfect load balancing for various types of BNNs. Third, we fuse all the convolution layers and the first fully connected layer, processing them in parallel through fine-grained inter-layer pipelining. With our proposed accelerator, inference of binarized AlexNet, VGGNet, and ResNet completes within 21.5 µs, 335 µs, and 67.8 µs, respectively, with no loss in accuracy compared with other BNN implementations.
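One common reason BN can be eliminated cheaply in a BNN is that BatchNorm followed by a sign activation collapses into a per-channel threshold comparison, so the floating-point BN arithmetic never needs to run at inference time. The sketch below illustrates that equivalence in NumPy; the function and parameter names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch (assumed shapes and names): BN followed by sign() is
# equivalent to comparing the pre-activation against a per-channel threshold.

def bn_then_sign(x, gamma, beta, mean, var, eps=1e-5):
    """Reference path: full BatchNorm followed by the binary activation."""
    y = gamma * (x - mean) / np.sqrt(var + eps) + beta
    return np.where(y >= 0, 1, -1)

def threshold_sign(x, gamma, beta, mean, var, eps=1e-5):
    """Folded path: sign(gamma*(x-mean)/std + beta) == sign(x - tau) for
    gamma > 0, and the comparison flips when gamma < 0."""
    std = np.sqrt(var + eps)
    tau = mean - beta * std / gamma           # per-channel threshold
    out = np.where(x >= tau, 1, -1)
    return np.where(gamma >= 0, out, -out)    # negative gamma flips the sign

# Quick check on random per-channel parameters (4 channels, 8 activations each)
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))
gamma, beta = rng.standard_normal(4), rng.standard_normal(4)
mean, var = rng.standard_normal(4), rng.random(4) + 0.1
assert np.array_equal(bn_then_sign(x, gamma, beta, mean, var),
                      threshold_sign(x, gamma, beta, mean, var))
```

Because the folded form reduces to a single compare per channel, it maps naturally onto FPGA logic; whether LP-BNN folds BN this way or retrains without BN is detailed in the paper itself.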
Keywords
BNN, FPGA, Deep Learning, Ultra Low-Latency