PinT: Polynomial in Temperature Decode Weights in a Neuromorphic Architecture

Scott Reid, Antonio Montoya, Kwabena Boahen

2019 Design, Automation & Test in Europe Conference & Exhibition (DATE)

Abstract
We present Polynomial in Temperature (PinT) decode weights, a novel approach to approximating functions with an ensemble of silicon neurons that increases thermal robustness. In mixed-signal neuromorphics, computing accurately across a wide range of temperatures is challenging because of individual silicon neurons' thermal sensitivity. In the PinT framework, weights vary continuously as a polynomial function of temperature to compensate for the resulting changes in the neurons' tuning curves. We validate PinT across a 38°C range by applying it to tuning curves measured for ensembles of 64 to 1936 neurons on Braindrop, a mixed-signal neuromorphic chip fabricated in 28-nm FDSOI CMOS. LinT, the Linear in Temperature version of PinT, reduces error by a small margin on test data relative to an ensemble with temperature-independent weights. LinT and higher-order models show much greater promise on training data, suggesting that performance can be further improved. When implemented on-chip, LinT's performance closely matches that of temperature-independent decode weights. SpLinT and SpLSAT, the sparse variants of LinT and LSAT, are promising avenues for efficiently reducing error. In the SpLSAT model, up to 90% of neurons on chip can be deactivated while maintaining the same function-approximation error.
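The core idea of LinT described above — solving for decode weights that are themselves a linear function of temperature — can be sketched as a single stacked least-squares problem. The tuning curves, drift model, and all names below are illustrative stand-ins, not the paper's measured Braindrop data or implementation:

```python
import numpy as np

# Hypothetical sketch of LinT (Linear in Temperature) decode weights.
# All shapes, drift values, and tuning-curve forms are assumptions.
rng = np.random.default_rng(0)

n_neurons, n_points = 64, 100
temps = np.array([25.0, 35.0, 45.0, 55.0])   # training temperatures, °C
x = np.linspace(-1.0, 1.0, n_points)         # represented variable
target = x ** 2                              # function to approximate

# Simulated rectified-linear tuning curves whose gains drift mildly
# with temperature (stand-in for measured thermal sensitivity).
gains = rng.uniform(0.5, 2.0, n_neurons)
offsets = rng.uniform(-1.0, 1.0, n_neurons)
signs = rng.choice([-1.0, 1.0], n_neurons)

def tuning_curves(T):
    drift = 1.0 + 0.005 * (T - 25.0)         # assumed thermal gain drift
    return np.maximum(0.0, drift * gains * (signs[None, :] * x[:, None] - offsets))

# LinT: decode weights d(T) = d0 + d1*T. Stack the per-temperature
# problems [A(T), T*A(T)] @ [d0; d1] ≈ target and solve jointly.
A_all = np.vstack([np.hstack([tuning_curves(T), T * tuning_curves(T)]) for T in temps])
y_all = np.tile(target, len(temps))
coef, *_ = np.linalg.lstsq(A_all, y_all, rcond=None)
d0, d1 = coef[:n_neurons], coef[n_neurons:]

# Decode at a held-out temperature using temperature-dependent weights.
T_test = 40.0
estimate = tuning_curves(T_test) @ (d0 + d1 * T_test)
rmse = np.sqrt(np.mean((estimate - target) ** 2))
print(f"RMSE at {T_test} degC: {rmse:.4f}")
```

A higher-order PinT model would simply append further columns (T²·A(T), T³·A(T), …) to the stacked matrix; the sparse variants would additionally penalize or zero out most neurons' weight columns.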
Keywords
mixed-signal neuromorphics,thermal robustness