A GPU-based associative memory using sparse neural networks

HPCS (2014)

Citations: 7 | Views: 10
Abstract
Associative memories, which serve as building blocks for a variety of algorithms, store content in such a way that it can later be retrieved by probing the memory with a small portion of that content, rather than with an address as in more traditional memories. Recently, Gripon and Berrou introduced a novel construction that builds on ideas from the theory of error-correcting codes and greatly outperforms the celebrated Hopfield neural networks in terms of the number of stored messages per neuron and the number of stored bits per synapse. Gripon and Berrou propose two retrieval rules, SUM-OF-SUM and SUM-OF-MAX. In this paper, we implement both rules on a general-purpose graphics processing unit (GPU). SUM-OF-SUM uses only matrix-vector multiplication and is easily implemented on the GPU, whereas SUM-OF-MAX, which involves non-linear operations, is much less straightforward to implement. However, SUM-OF-MAX yields significantly lower retrieval error rates. We propose a hybrid scheme tailored for GPU implementation that achieves an 880-fold speedup without sacrificing any accuracy.
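To make the contrast between the two rules concrete, here is a minimal NumPy sketch of one retrieval iteration of each. It assumes the standard Gripon-Berrou network layout of c clusters of l neurons with a binary adjacency matrix; all names, sizes, and the random placeholder data are illustrative, not taken from the paper's implementation.

```python
import numpy as np

# Hypothetical dimensions: c clusters of l neurons each.
c, l = 8, 16
n = c * l

rng = np.random.default_rng(0)
W = rng.integers(0, 2, size=(n, n), dtype=np.uint8)   # placeholder binary synapses
v = np.zeros(n, dtype=np.uint8)
v[rng.choice(n, size=c, replace=False)] = 1           # placeholder probe state

def sum_of_sum(W, v, c, l):
    """One SUM-OF-SUM iteration: a plain matrix-vector product,
    then winner-take-all within each cluster."""
    s = (W @ v).reshape(c, l)          # each neuron sums all incoming signals
    winners = s == s.max(axis=1, keepdims=True)
    return winners.astype(np.uint8).reshape(-1)

def sum_of_max(W, v, c, l):
    """One SUM-OF-MAX iteration: each cluster contributes at most one
    unit of support per neuron, a non-linear step that does not reduce
    to a single matrix-vector product."""
    per_cluster = (W * v).reshape(n, c, l).max(axis=2)  # active links in a cluster count once
    s = per_cluster.sum(axis=1).reshape(c, l)
    winners = s == s.max(axis=1, keepdims=True)
    return winners.astype(np.uint8).reshape(-1)
```

The per-cluster max in sum_of_max is the non-linear operation that makes a direct GPU port less straightforward than the single matrix-vector product of sum_of_sum, which is what motivates the paper's hybrid scheme.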
Keywords
building blocks,matrix multiplication,sum-of-sum rule,recurrent neural networks,parallel processing,retrieval error rate,general purpose graphics processing unit,traditional memory,stored bits per synapse,sparse neural networks,graphics processing units,nonlinear operation,cuda,associative memory,sum-of-max rule,matrix-vector multiplication,gpgpu,hopfield neural networks,error correcting codes,content-addressable storage,sparse coding,stored messages per neuron,gpu-based associative memory,neural networks