A 12nm 121-TOPS/W 41.6-TOPS/mm2 All Digital Full Precision SRAM-based Compute-in-Memory with Configurable Bit-width For AI Edge Applications.

2022 IEEE Symposium on VLSI Technology and Circuits (2022)

Abstract
Recently, SRAM-based digital compute-in-memory (D-CIM) [1] has demonstrated excellent energy and area efficiency with full-precision 4b/8b integer multiply-accumulate operations. Compared with analog CIM, it offers better programmability, hardware reuse, and scalability, and it can more effectively leverage technology scaling for better PPA. Nonetheless, several new challenges remain, including large peak currents resulting from highly parallel operation, long delays in the adder trees, and the need for a scalable architecture that supports various neural network topologies. In this paper, we detail the proposed solutions to these challenges and present measurement results for an SRAM-based 64x64 CIM macro manufactured in a 12nm CMOS process.
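For readers unfamiliar with D-CIM, the sketch below is a minimal behavioral model (not the paper's circuit) of the operation the abstract describes: each column of a 64x64 weight array accumulates input-times-weight products through an adder tree, with configurable integer bit-widths such as 8b activations and 4b or 8b weights. The function name dcim_mac, the NumPy-based modeling, and the clipping to signed bit-widths are illustrative assumptions, not details taken from the paper.

import numpy as np

def dcim_mac(inputs, weights, in_bits=8, w_bits=8, rows=64, cols=64):
    """Behavioral model of one digital CIM MAC pass.

    inputs:  (rows,) signed integer activations broadcast across the array.
    weights: (rows, cols) signed integers stored in the SRAM cells.
    Each column sums input[r] * weight[r][c] over all rows; the hardware
    adder tree is modeled here as a plain matrix-vector product.
    """
    assert inputs.shape == (rows,)
    assert weights.shape == (rows, cols)
    # Clip operands to the configured signed bit-widths to mimic integer hardware.
    in_lo, in_hi = -(1 << (in_bits - 1)), (1 << (in_bits - 1)) - 1
    w_lo, w_hi = -(1 << (w_bits - 1)), (1 << (w_bits - 1)) - 1
    x = np.clip(inputs, in_lo, in_hi).astype(np.int64)
    w = np.clip(weights, w_lo, w_hi).astype(np.int64)
    # One 64-input reduction per column, i.e. one output partial sum per column.
    return x @ w

# Example: 8b activations with 4b weights on a 64x64 array (hypothetical data).
rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=64)
w4 = rng.integers(-8, 8, size=(64, 64))
print(dcim_mac(x, w4, in_bits=8, w_bits=4))

In real D-CIM macros all 64 columns compute in parallel, which is the source of the peak-current and adder-tree-delay challenges the abstract mentions; this model only captures the arithmetic, not the timing or power behavior.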
Keywords
12nm CMOS process, 41.6 TOPS/mm2, configurable bit-width, AI edge applications, SRAM-based digital compute-in-memory, hardware reuse, technology scaling, peak currents, highly parallel operation, scalable architectures, neural network topologies, SRAM-based 64x64 CIM