RRAM-based Floating-Point In-Memory-Computing Architecture for High Throughput Signal Processing

2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), 2019

Abstract
In recent years, the demand for high-throughput signal processing has been growing rapidly. Traditional von Neumann processors cannot handle high-throughput data efficiently because of the well-known memory-wall and power-wall challenges. As an emerging technology, in-memory computing has become a research hot spot because it alleviates both the memory wall and the power wall at once, making it well suited to efficient operations on high-throughput signals. Existing work on in-memory computing mainly targets the acceleration of artificial neural networks with low-precision fixed-point implementations, because neural networks can tolerate low-precision calculation to some extent. In high-throughput signal processing, however, low-precision operations are insufficient; high-precision floating-point operations are required. This paper therefore proposes a floating-point in-memory-computing architecture based on Resistive Random Access Memory (RRAM) for high-throughput signal processing, offering both precision and performance. Simulation results show a throughput of 0.819 Gflops with 2 compute units, each a 128×128 memory array, and an energy efficiency of 3.19 Tflops/W. Beyond efficient high-throughput signal processing, the architecture can also be extended to high-performance, high-precision general scientific computing.
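The abstract does not spell out how floating-point arithmetic is mapped onto fixed-point RRAM arrays. As an illustrative sketch only (not the paper's actual datapath; the mantissa width and all function names are assumptions), one common approach is block floating point: mantissas within a vector are aligned to a shared exponent and quantized to integers, the crossbar performs a fixed-point multiply-accumulate, and the floating-point result is recovered by rescaling:

```python
# Hedged sketch: block-floating-point dot product on fixed-point MAC hardware.
# Not the paper's circuit; MANT_BITS and the helper names are illustrative.
import math

MANT_BITS = 16  # assumed fixed-point mantissa width of the compute array

def to_aligned_ints(xs):
    """Align all values to the block's largest exponent and quantize
    mantissas to MANT_BITS-bit signed integers (block floating point)."""
    exp = max((math.frexp(x)[1] for x in xs if x != 0.0), default=0)
    scale = 2 ** (MANT_BITS - 1)
    ints = [int(round(x * 2.0 ** (-exp) * scale)) for x in xs]
    return ints, exp

def crossbar_dot(a, b):
    """Integer MAC loop standing in for the crossbar's analog current
    summation; the floating-point result is recovered by rescaling."""
    ia, ea = to_aligned_ints(a)
    ib, eb = to_aligned_ints(b)
    acc = sum(x * y for x, y in zip(ia, ib))  # fixed-point accumulate
    scale = 2 ** (MANT_BITS - 1)
    return acc / (scale * scale) * 2.0 ** (ea + eb)

a = [1.5, -2.25, 0.875]
b = [4.0, 0.5, -3.0]
print(crossbar_dot(a, b))  # → 2.25, matching the exact dot product
```

The quantization step bounds the per-block error, which is why the precision of the array (here the assumed `MANT_BITS`) governs how closely the result tracks true floating-point arithmetic.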
Keywords
in-memory-computing, high-precision computing, high throughput computing, RRAM