FPGA-Based HPC for Associative Memory System

2024 29th Asia and South Pacific Design Automation Conference (ASP-DAC), 2024

Abstract
Associative memory plays a crucial role in the cognitive capabilities of the human brain. The Bayesian Confidence Propagation Neural Network (BCPNN) is a cortex model capable of emulating brain-like cognitive capabilities, particularly associative memory. However, the existing GPU-based approach to BCPNN simulation faces challenges in time overhead and power efficiency. In this paper, we propose a novel FPGA-based high-performance computing (HPC) design for the BCPNN-based associative memory system. Our design maximizes the spatial and temporal utilization of the FPGA while adhering to the constraints of the available hardware resources. By incorporating optimization techniques including shared parallel computing units, hybrid-precision computing for a hybrid update mechanism, and a globally asynchronous, locally synchronous (GALS) strategy, we achieve a maximum network size of $150 \times 10$ and a peak working frequency of 100 MHz for the BCPNN-based associative memory system on the Xilinx Alveo U200 card. The tradeoff between performance and hardware overhead of the design is explored and evaluated. Compared with the GPU counterpart, the FPGA-based implementation demonstrates significant improvements in both performance and energy efficiency, achieving a maximum latency reduction of $33.25 \times$ and a power reduction of over $6.9 \times$, while maintaining the same network configuration.
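For context on the model being accelerated, the sketch below illustrates the general form of the BCPNN learning rule, in which synaptic weights are log-ratios of exponentially smoothed activation and co-activation probabilities. This is a minimal illustration based on standard BCPNN formulations, not the paper's FPGA datapath; the function name, variable names, and time constants (tau_p, eps) are illustrative assumptions.

```python
import numpy as np

def bcpnn_update(p_i, p_j, p_ij, x_pre, x_post, dt=1.0, tau_p=100.0, eps=1e-6):
    """One simulation time step of BCPNN probability-trace and weight updates.

    Illustrative sketch: p_i, p_j are smoothed pre/post activity estimates,
    p_ij is the smoothed co-activity estimate, x_pre/x_post are the current
    unit activations in [0, 1].
    """
    k = dt / tau_p
    p_i  += k * (x_pre  - p_i)                     # presynaptic probability trace
    p_j  += k * (x_post - p_j)                     # postsynaptic probability trace
    p_ij += k * (np.outer(x_pre, x_post) - p_ij)   # co-activation probability trace
    w    = np.log((p_ij + eps) / (np.outer(p_i, p_j) + eps))  # weight matrix
    bias = np.log(p_j + eps)                                   # unit bias
    return p_i, p_j, p_ij, w, bias
```

In a hardware mapping such as the one described here, the trace updates and the log-ratio weight computation are the recurring per-time-step workload that the shared parallel computing units and hybrid-precision arithmetic target.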
Keywords
High-performance Computing, Associative Memory System, Neural Network, Network Size, Network Configuration, Computing Units, Working Frequency, Hardware Resources, Bayesian Neural Network, Synchronization Scheme, Hardware Overhead, Higher Frequency, Artificial Neural Network, Power Consumption, Weight Matrix, Control Signal, Learning Phase, State Machine, Learning Rule, Spiking Neural Networks, Spike-timing-dependent Plasticity, Inference Phase, Current Time Step, Datapath, Simulation Time Step, Synaptic Weights, Cognitive Computing, Spike Firing, High-performance Computing Systems, Hardware Architecture