Cappuccino: Efficient CNN Inference Software Synthesis for Mobile System-on-Chips

IEEE Embedded Systems Letters (2019)

Cited by 18 | Viewed 25
Abstract
Convolutional neural networks (CNNs) exhibit remarkable performance in various machine learning tasks. As sensor-equipped Internet of Things devices permeate every aspect of modern life, the ability to execute CNN inference, a computationally intensive application, on resource-constrained devices has become increasingly important. In this context, we present Cappuccino, a framework for synthesis of efficient inference software targeting mobile system-on-chips (SoCs). We propose techniques for efficient parallelization of CNN inference targeting mobile SoCs, and explore the underlying tradeoffs. Experiments with different CNNs on three mobile devices demonstrate the effectiveness of our approach.
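The abstract mentions parallelizing CNN inference across the cores of a mobile SoC but does not detail the scheme. One common strategy, shown here purely as an illustrative sketch (not code from the paper), is to compute independent output channels of a convolutional layer in separate worker tasks; the function names and the thread-pool approach below are assumptions for illustration.

```python
# Hypothetical sketch: channel-level parallelization of a convolution layer.
# Not from the Cappuccino paper; a generic strategy for multicore devices.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def conv2d_single(inp, kernel):
    """Naive 'valid' 2D convolution of a CHW input with one CHW kernel."""
    _, h, w = inp.shape
    _, kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with the input patch at (i, j).
            out[i, j] = np.sum(inp[:, i:i + kh, j:j + kw] * kernel)
    return out

def conv2d_parallel(inp, kernels, workers=4):
    """Each output channel is an independent task, so they run in parallel."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        channels = list(ex.map(lambda k: conv2d_single(inp, k), kernels))
    return np.stack(channels)
```

Because output channels share the input but never write to each other's results, this decomposition needs no synchronization beyond the final gather, which is one reason it maps well to the symmetric cores of a mobile SoC.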
Keywords
Instruction sets, Kernel, Parallel processing, Convolution, Data models, Optimization