ReRAM-Based Accelerator for Deep Learning

Proceedings of the 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2018

Abstract
Big data computing applications such as deep learning and graph analytics usually incur large amounts of data movement. Deploying such applications on a conventional von Neumann architecture, which separates the processing units from the memory components, likely leads to performance bottlenecks due to limited memory bandwidth. A common approach is to develop architecture and memory co-design methodologies to overcome this challenge. Our research follows the same strategy, leveraging resistive memory (ReRAM) to further enhance performance and energy efficiency. Specifically, we employ the general principles behind processing-in-memory to design efficient ReRAM-based accelerators that support both testing and training operations. Related circuit and architecture optimizations are also discussed.
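The processing-in-memory principle the abstract refers to is that a ReRAM crossbar can compute a matrix-vector product directly in the analog domain: weights are stored as cell conductances, input activations are applied as word-line voltages, and each bit line sums the resulting currents (Ohm's and Kirchhoff's laws). The sketch below illustrates that general idea only; the function names, conductance range, and number of resistance levels are illustrative assumptions, not details from the paper.

```python
import numpy as np

def quantize_to_conductance(weights, levels=16, g_min=1e-6, g_max=1e-4):
    """Map real-valued weights onto a discrete set of cell conductances.
    ReRAM cells support only a limited number of resistance levels, so
    weights must be quantized before programming (levels/range assumed)."""
    w_min, w_max = weights.min(), weights.max()
    norm = (weights - w_min) / (w_max - w_min + 1e-12)    # normalize to [0, 1]
    steps = np.round(norm * (levels - 1)) / (levels - 1)  # snap to levels
    return g_min + steps * (g_max - g_min)

def crossbar_mvm(conductances, voltages):
    """Analog matrix-vector multiply: each bit-line current is the sum of
    per-cell currents, i_j = sum_i G[i, j] * v[i] (Kirchhoff's current law)."""
    return conductances.T @ voltages

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # small weight matrix (rows = word lines)
x = rng.standard_normal(4)        # input activations applied as voltages
G = quantize_to_conductance(W)    # program weights into the crossbar
currents = crossbar_mvm(G, x)     # one analog "read" performs the full MVM
print(currents.shape)             # (3,) -- one current per bit line
```

The key point for both inference and training workloads is that the whole matrix-vector product happens where the weights are stored, avoiding the data movement that bottlenecks a von Neumann design.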
Keywords
memory components,memory co-design methodologies,data movements,graph analytic,big data computing applications,deep learning,architecture optimization,accelerator,efficient ReRAM,processing-in-memory,energy efficiency,resistive memory,memory bandwidth,performance bottleneck,processing units,conventional von Neumann architecture