Structure optimizations of neuromorphic computing architectures for deep neural network

2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2018

Abstract
This work addresses a new structure optimization of neuromorphic computing architectures that, in theory, allows DNN (deep neural network) computation to run twice as fast as on existing architectures. Specifically, we propose a structural technique that mixes dendritic-based and axonal-based neuromorphic cores so as to completely eliminate the inherent non-zero waiting time between cores in a DNN implementation. In addition, in conjunction with the new architecture, we propose a technique for maximally utilizing the computation units so that the total resource overhead of the computation units is minimized. We provide a set of experimental data demonstrating the effectiveness (i.e., speed and area) of our proposed architectural optimizations: ~2× speedup with no accuracy penalty on the neuromorphic computation, or improved accuracy with no additional computation time.
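The intuition behind the ~2× speedup can be illustrated with a simple tick-based timing model. The model below is a hypothetical sketch, not the paper's actual simulator: it assumes a dendritic-based core must receive all of its inputs before emitting outputs one per tick, while an axonal-based core can consume each input as soon as it arrives and releases all outputs together. Chaining a dendritic core into an axonal core then overlaps the second core's work with the first core's output stream, eliminating the inter-core waiting time.

```python
def core_output_times(kind, arrivals):
    """Return the tick at which each output of one core becomes ready.

    Illustrative assumption: a 'dendritic' core waits for all inputs,
    then emits one output per tick; an 'axonal' core consumes each
    input one tick after it arrives and releases all outputs together.
    """
    n = len(arrivals)
    if kind == "dendritic":
        start = max(arrivals)                      # must hold all inputs first
        return [start + i + 1 for i in range(n)]  # outputs stream out one per tick
    else:  # axonal
        t = 0
        for a in sorted(arrivals):                 # process inputs as they arrive
            t = max(t, a) + 1
        return [t] * n                             # all outputs released at once

def chain_latency(core_kinds, n):
    """Latency (in ticks) of a chain of cores, n neurons per layer."""
    times = [0] * n                                # all inputs available at tick 0
    for kind in core_kinds:
        times = core_output_times(kind, times)
    return max(times)

n = 256
baseline = chain_latency(["dendritic", "dendritic"], n)  # second core idles: 2n ticks
mixed = chain_latency(["dendritic", "axonal"], n)        # overlapped: about n ticks
print(baseline, mixed, baseline / mixed)
```

Under these assumptions the homogeneous two-core chain takes 2n ticks while the mixed dendritic-then-axonal pair takes about n ticks, matching the roughly twofold speedup the abstract claims.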
Keywords
structure optimization, neuromorphic computing architectures, deep neural network, DNN computation, structural technique, axonal-based neuromorphic core, dendritic-based neuromorphic core, non-zero waiting time, total computation units, architectural optimizations, neuromorphic computation