Accelerating CPU to Memory Access in SoC Architecture Design

Hongbin Wan, Yi Zou, Guohua Wen, Junfeng Hu

SoutheastCon 2024 (2024)

Abstract
The rapid evolution of technologies in artificial intelligence, the internet of things, and 5G communication marks the dawn of a fully digitalized era. Correspondingly, the demand for computation capability in the artificial intelligence of things (AIoT) is growing exponentially. Meanwhile, with the CPU serving as the main controller in a typical system-on-chip (SoC) design, the access latency from the CPU to memory stands out as a key factor that directly impacts overall performance. In this paper, we propose a design that reduces this memory access overhead to achieve considerable acceleration. We first provide a detailed analysis of the memory access paths from ARM CPUs typically found in SoCs. We then introduce an accelerated memory access design based on a smart direct passthrough memory access approach, allowing the SoC architecture to achieve the lowest CPU-to-memory latency without compromising the balance of bandwidth demands from other devices. Based on our preliminary evaluation using standard benchmarks, the proposed approach outperforms traditional SoC architectures and reduces latency by as much as 15%.
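As an illustration of the kind of CPU-to-memory latency the abstract refers to, the sketch below shows a conventional pointer-chasing microbenchmark in C. This is not the authors' benchmark or design; the array size, step count, and seed are assumed values chosen for illustration. Because each load depends on the result of the previous one, the measured time per step approximates the round-trip CPU-to-DRAM latency once the working set exceeds the last-level cache.

```c
/*
 * Illustrative sketch only (not the paper's benchmark): a pointer-chasing
 * microbenchmark of the kind commonly used to expose CPU-to-DRAM load
 * latency. Array size and step count are assumed values.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N      (size_t)(64 * 1024 * 1024 / sizeof(size_t)) /* ~64 MiB working set */
#define STEPS  (1L << 24)                                   /* dependent loads to time */

int main(void)
{
    size_t *chain = malloc(N * sizeof(size_t));
    if (!chain)
        return 1;

    /* Build a single random cycle (Sattolo's algorithm) so each element
     * points to a "next" one and the hardware prefetcher cannot guess
     * the upcoming address. */
    for (size_t i = 0; i < N; i++)
        chain[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;   /* j < i guarantees one big cycle */
        size_t tmp = chain[i];
        chain[i] = chain[j];
        chain[j] = tmp;
    }

    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long s = 0; s < STEPS; s++)
        idx = chain[idx];                /* serialized, dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (double)(t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    /* Printing idx keeps the compiler from discarding the load chain. */
    printf("average load latency: %.1f ns (final index %zu)\n", ns / STEPS, idx);

    free(chain);
    return 0;
}
```

Compiled with, e.g., gcc -O2, the per-step time reported by such a benchmark on a typical ARM-based SoC is dominated by the interconnect and DRAM-controller path, which is the path the proposed direct passthrough approach aims to shorten.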
Keywords
System-on-Chip,CPU,Memory Access,Latency,Network-on-Chip