FeFET versus DRAM based PIM Architectures: A Comparative Study

2022 IFIP/IEEE 30th International Conference on Very Large Scale Integration (VLSI-SoC)(2022)

Abstract
The throughput and energy efficiency of compute-centric architectures for memory-intensive Deep Neural Network (DNN) applications are limited by memory-bound issues such as high data-access energy, long latencies, and limited bandwidth. Processing-in-Memory (PIM) is a very promising approach to address these challenges and bridge the memory-computation gap. PIM places computational logic inside the memory to minimize data movement and exploit massive internal data parallelism. There are currently two PIM trends: 1) using emerging non-volatile memories to perform highly parallel analog computation of MAC operations with implicit storage of weights within the memory arrays, and 2) exploiting mature memory technologies enhanced by additional logic to enable efficient computation of MAC operations near the memory arrays. In this paper, we compare both trends from an architectural perspective. Our study mainly emphasizes FeFET memories (an emerging memory candidate) and DRAM memories (a mature memory candidate). We highlight the major architectural constraints of these memory candidates that impact PIM designs and their overall performance. Finally, we assess the feasible choice of candidate for different computation or DNN task types.
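As context for the comparison, the MAC (multiply-accumulate) operation referenced above is the core DNN kernel that both PIM trends accelerate: each memory column holds a weight vector, and the array accumulates the dot product of an input vector against all columns in parallel. The following is a minimal functional sketch of that computation (not the paper's implementation; the function name and shapes are illustrative assumptions):

```python
import numpy as np

def pim_mac(inputs, weights):
    """Illustrative model of the parallel MAC that a PIM array performs:
    output[j] = sum_i inputs[i] * weights[i][j].
    In an analog FeFET crossbar all columns accumulate simultaneously;
    in near-memory DRAM PIM the same sum is computed by logic next to
    the arrays. Here we model only the arithmetic, row by row."""
    acc = np.zeros(weights.shape[1])
    for i, x in enumerate(inputs):
        # one input activation drives a whole row; every column
        # accumulates its weighted contribution in parallel
        acc += x * weights[i]
    return acc

inputs = np.array([1.0, 0.5, 2.0])
weights = np.array([[1.0, 2.0],
                    [4.0, 0.0],
                    [0.5, 3.0]])
print(pim_mac(inputs, weights))  # equivalent to inputs @ weights -> [4. 8.]
```

The sketch makes the data-movement argument concrete: a compute-centric design must stream every `weights[i]` row out of memory, whereas PIM keeps the weights in place and moves only inputs and accumulated outputs.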
Keywords
Processing in Memory, DRAM, FeFET, DNN