Memory-Efficient Dataflow Inference for Deep CNNs on FPGA

2020 International Conference on Field-Programmable Technology (ICFPT), 2020

Abstract
Custom dataflow Convolutional Neural Network (CNN) inference accelerators on FPGA are tailored to a specific CNN topology and store parameters in On-Chip Memory (OCM), creating the potential for high energy efficiency and low inference latency. However, in these accelerators the shapes of parameter memories are dictated by throughput constraints and do not map well to the underlying OCM, which bec...
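The mismatch the abstract describes can be made concrete with a small sketch: a parameter memory whose shape is fixed by throughput requirements (wide and shallow) maps poorly onto fixed-geometry OCM primitives. The numbers below assume a Xilinx-style 18 Kb BRAM in a 1024 × 18 configuration and an illustrative 128 × 288-bit parameter memory; these figures are assumptions for illustration, not the paper's model.

```python
import math

# Assumed OCM primitive: an 18 Kb BRAM in a 1024 x 18 configuration
# (hypothetical example geometry, not taken from the paper).
BRAM_DEPTH = 1024   # words per BRAM
BRAM_WIDTH = 18     # bits per word
BRAM_BITS = BRAM_DEPTH * BRAM_WIDTH

def brams_needed(depth: int, width: int) -> int:
    """BRAMs required to implement a depth x width parameter memory."""
    return math.ceil(width / BRAM_WIDTH) * math.ceil(depth / BRAM_DEPTH)

def utilization(depth: int, width: int) -> float:
    """Fraction of the allocated BRAM bits that actually hold parameters."""
    return (depth * width) / (brams_needed(depth, width) * BRAM_BITS)

# A throughput-driven shape: 128 words deep, 288 bits wide
# (e.g. 16 parallel lanes of 18-bit weights, a hypothetical example).
print(brams_needed(128, 288))          # 16 BRAMs allocated
print(f"{utilization(128, 288):.1%}")  # only 12.5% of the BRAM bits used
```

Because each BRAM column is only 128 of 1024 words deep, 87.5% of the allocated on-chip storage is wasted, which is the inefficiency the paper targets.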
Keywords
Shape, Memory management, Random access memory, Throughput, Topology, System-on-chip, Timing