7.3 A 28nm 38-to-102-TOPS/W 8b Multiply-Less Approximate Digital SRAM Compute-In-Memory Macro for Neural-Network Inference

2023 IEEE International Solid-State Circuits Conference (ISSCC)

Abstract
This paper presents a 2-to-8-b scalable digital SRAM-based CIM macro that is co-designed with a multiply-less neural-network (NN) design methodology and incorporates dynamic-logic-based approximate circuits for vector-vector operations. Digital CIMs enable high-throughput and reliable matrix-vector multiplications (MVMs); however, they face three major challenges in achieving further aggressive gains over conventional digital architectures: (1) prior digital CIMs exploiting approximate computation suffer from accuracy degradation [1]; (2) digital CIMs [2] and, as predicted in [3], mixed-signal CIMs [4] suffer from energy that scales quadratically with increasing operand precision; (3) the tight and regular memory layout prevents CIMs from leveraging unstructured bit-level statistics.
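The abstract does not specify the paper's exact multiply-less methodology, but a common approach in this design space replaces each multiplication with a bit-shift by constraining weights to signed powers of two, so an MVM reduces to shifts and adds. The sketch below illustrates that general idea only; the function names and the nearest-power-of-two rounding rule are illustrative assumptions, not the paper's scheme.

```python
import math

# Illustrative sketch of a "multiply-less" MVM: weights are quantized
# to signed powers of two, so each product a*w becomes a shift of the
# activation. This is a generic technique, not the paper's exact method.

def quantize_pow2(w, max_shift=7):
    """Round a weight to the nearest signed power of two; return (sign, shift)."""
    if w == 0:
        return 0, 0
    sign = 1 if w > 0 else -1
    shift = min(max_shift, max(0, round(math.log2(abs(w)))))
    return sign, shift

def mvm_shift_add(weights, activations):
    """Matrix-vector multiply using only shifts and adds (no multiplies)."""
    out = []
    for row in weights:
        acc = 0
        for w, a in zip(row, activations):
            sign, shift = quantize_pow2(w)
            acc += sign * (a << shift)  # shift replaces the 8b multiply
        out.append(acc)
    return out
```

Because shifters are far cheaper than multipliers in silicon, such a scheme trades a small, controllable quantization error for large energy savings per MAC, which is consistent with the energy-scaling challenge (2) noted above.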