Optimising convolutional neural networks for super fast inference on focal-plane sensor-processor arrays

Semantic Scholar (2019)

Abstract
Convolutional Neural Networks (CNNs) have revolutionised the Computer Vision discipline in the last few years, and are now the state-of-the-art methods for almost all classification, segmentation, and detection tasks. In parallel, domain-specific architectures have been developed for Computer Vision applications, and amongst others, a new form of hardware has emerged: Focal-Plane Sensor-Processors (FPSPs). FPSPs merge the light sensor and the processing unit of a traditional vision system by endowing each photo-diode with rudimentary analog computation capabilities. In this work, we implement CNNs on an FPSP, a goal previously pursued only twice to the best of our knowledge [49] [5]. To benefit from the low latency and energy efficiency of existing FPSPs, the main challenges to overcome are the limited register availability and the inaccurate nature of their computations. An in-depth FPSP-specific optimisation of all components constituting a CNN allows us to beat the previous baseline by a margin of more than 4%. Our AnalogNet2 architecture reaches a testing accuracy of 96.9% on the MNIST dataset, at a speed of 2260 FPS, for a cost of 0.7 mJ per frame. We also experiment with two techniques for implementing multi-layer CNNs on an FPSP: quantisation and pooling. The resulting accuracy is, however, gravely hindered by noise, for which we provide a quantitative study. Finally, we demonstrate the impact of this work on a real application, with a proof-of-concept that extracts steering directions from a scene for a wheeled robot platform.