CIMAT: A Compute-In-Memory Architecture for On-chip Training Based on Transpose SRAM Arrays

IEEE Transactions on Computers (2020)

Cited 47 | Viewed 31
Abstract
Rapid development in deep neural networks (DNNs) is enabling many intelligent applications. However, on-chip training of DNNs is challenging due to its extensive computation and memory-bandwidth requirements. To overcome the memory-wall bottleneck, the compute-in-memory (CIM) approach exploits analog computation along the bit lines of the memory array and thus significantly speeds up the ve...
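The analog bit-line computation described above amounts to a matrix-vector multiply: input activations drive the word lines, stored weights act as cell conductances, and each bit line accumulates the resulting currents. A minimal sketch of that summation, purely illustrative and not taken from the paper (function name and binary encoding are assumptions):

```python
# Illustrative sketch: CIM performs a matrix-vector multiply by summing
# currents along each bit line. Rows = word lines driven by inputs;
# columns = bit lines holding one weight bit per SRAM cell.

def bitline_mac(weights, inputs):
    """Each bit line accumulates weight * input across all rows,
    mimicking analog current summation in an SRAM CIM array."""
    n_rows = len(weights)
    n_cols = len(weights[0])
    assert len(inputs) == n_rows
    # One partial sum per bit line (column)
    return [sum(weights[r][c] * inputs[r] for r in range(n_rows))
            for c in range(n_cols)]

# 3 word lines x 2 bit lines, with binary activations
w = [[1, 0],
     [1, 1],
     [0, 1]]
x = [1, 0, 1]
print(bitline_mac(w, x))  # -> [1, 1]
```

A transpose SRAM array, as in the paper's title, would additionally allow driving the columns and reading the rows, so the same cells serve both the forward and backward passes of training.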
Keywords
Training, Random access memory, Computer architecture, System-on-chip, Pipelines, Common Information Model (computing), Energy efficiency