Time-Predictable L2 Cache Design for High-Performance Real-Time Systems

Embedded and Real-Time Computing Systems and Applications (2010)

Abstract
Unified L2 caches can lead to runtime interferences between instructions and data, making it very hard, if not impossible, to perform timing analysis for real-time systems. This paper proposes a priority cache to achieve both time predictability and high performance for real-time systems. The priority cache allows the instruction and data streams to share the aggregate L2 cache; however, instruction lines and data lines cannot replace each other, which enables independent timing analyses of the instruction cache and the data cache. Our performance evaluation shows that the instruction priority cache outperforms separate L2 caches, both of which can achieve time predictability. On average, the instruction priority cache needs only 1.1% more execution cycles than a unified L2 cache.
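To make the no-cross-replacement rule concrete, below is a minimal sketch, not taken from the paper, of a shared set-associative L2 in which an instruction miss may only evict instruction lines or empty ways, and likewise for data. All names (PriorityL2, Line, access), the line size, the associativity, and the simplified victim choice are illustrative assumptions; the sketch does not model the instruction-priority allocation policy the paper evaluates.

```python
from dataclasses import dataclass
from typing import Optional

LINE_SIZE = 64  # bytes; assumed, not from the paper


@dataclass
class Line:
    tag: Optional[int] = None   # None means the way is invalid/empty
    kind: Optional[str] = None  # "inst" or "data"


class PriorityL2:
    """Shared set-associative L2 in which instruction lines and data lines
    may coexist in a set but can never evict each other (sketch only)."""

    def __init__(self, num_sets: int = 256, ways: int = 8):
        self.num_sets = num_sets
        self.sets = [[Line() for _ in range(ways)] for _ in range(num_sets)]

    def access(self, addr: int, kind: str) -> bool:
        """Return True on a hit; on a miss, allocate without cross-type eviction."""
        index = (addr // LINE_SIZE) % self.num_sets
        tag = addr // (LINE_SIZE * self.num_sets)
        ways = self.sets[index]

        for line in ways:
            if line.tag == tag and line.kind == kind:
                return True  # hit

        # Miss: prefer an empty way, otherwise evict a line of the *same* kind.
        victim = next((l for l in ways if l.tag is None), None)
        if victim is None:
            victim = next((l for l in ways if l.kind == kind), None)
        if victim is None:
            return False  # no same-kind way available: do not allocate

        victim.tag, victim.kind = tag, kind
        return False


# Example: the second instruction fetch hits regardless of any data traffic
# mapped to the same set, because data accesses cannot evict instruction lines.
l2 = PriorityL2()
l2.access(0x4000, "inst")  # miss, allocates an instruction line
l2.access(0x4000, "data")  # data miss cannot displace the instruction line
assert l2.access(0x4000, "inst")  # hit
```

Because each stream can only displace its own lines, the worst-case hit/miss classification of instruction fetches does not depend on the data access pattern (and vice versa), which is the property the abstract cites as enabling independent instruction-cache and data-cache timing analyses.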
Keywords
high performance,high-performance real-time systems,data cache,cache storage,time-predictable l2 cache design,wcet analysis,l2 cache,priority cache,real-time system,data cache timing analysis,unified cache,cache timing analysis,l2 cache design,performance evaluation,real-time computing,instruction priority cache,time predictability,real-time systems,data stream,independent instruction cache,registers,vliw,real time systems,timing analysis,real time computing