Cache Management with Partitioning-Aware Eviction and Thread-Aware Insertion/Promotion Policy

Junmin Wu, Xiufeng Sui, Yixuan Tang, Xiaodong Zhu, Jing Wang, Guoliang Chen

ISPA '10: Proceedings of the International Symposium on Parallel and Distributed Processing with Applications (2010)

Abstract
With recent advances in processor technology, the LRU-based shared last-level cache (LLC) has been widely employed in modern chip multiprocessors (CMPs). However, past research [1,2,8,9] indicates that LRU can severely degrade the performance of the LLC, and of the CMP as a whole, when inter-thread interference occurs or the working set exceeds the cache size. Existing approaches to this degradation offer limited improvement in overall cache performance because they usually target a single type of memory access behavior and thus fail to consider the tradeoffs among different types. In this paper, we propose a unified cache management policy called Partitioning-Aware Eviction and Thread-aware Insertion/Promotion (PAE-TIP) that combines capacity management with adaptive insertion/promotion to improve overall cache performance. Specifically, PAE-TIP adaptively decides where to place incoming lines and where to move lines on a hit, and selects a victim line based on the target partitioning computed by utility-based cache partitioning (UCP) [2]. We show that PAE-TIP accommodates a variety of memory access behaviors simultaneously and provides a good overall performance tradeoff while keeping hardware and design overhead competitively low. Evaluation on 4-way CMPs shows that a PAE-TIP-managed LLC improves overall performance by 19.3% on average over the LRU policy, and delivers 1.09x the performance of PIPP, 1.11x that of TADIP, and 1.12x that of UCP.
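To make the policy concrete, the following is a minimal Python sketch of how a single cache set might combine UCP-style partitioning-aware eviction with thread-aware insertion and promotion. The `CacheSet` class, the `insert_pos` parameter, and the single-step promotion are illustrative assumptions made for exposition, not the paper's actual hardware mechanism.

```python
# Hypothetical sketch of a PAE-TIP-style set: UCP target allocations drive
# eviction, while insertion depth and promotion are handled per thread.
class CacheSet:
    def __init__(self, ways, target_alloc):
        self.ways = ways
        self.target_alloc = target_alloc  # thread id -> UCP target way count
        self.lines = []  # (tid, tag); index 0 = MRU end, last index = LRU end

    def _owned(self, tid):
        return sum(1 for t, _ in self.lines if t == tid)

    def _evict(self, tid):
        # Partitioning-aware eviction: a thread below its UCP target steals
        # the LRU line of the thread that most exceeds its own target;
        # otherwise the requester victimizes its own LRU line.
        if self._owned(tid) < self.target_alloc.get(tid, 0):
            victim_tid = max(self.target_alloc,
                             key=lambda t: self._owned(t) - self.target_alloc[t])
        else:
            victim_tid = tid
        for i in range(len(self.lines) - 1, -1, -1):  # scan from LRU end
            if self.lines[i][0] == victim_tid:
                return self.lines.pop(i)
        return self.lines.pop()  # fallback: plain global LRU victim

    def access(self, tid, tag, insert_pos):
        # insert_pos is the thread-aware insertion depth (0 = MRU end);
        # an adaptive mechanism would tune it per thread at run time.
        for i, (t, g) in enumerate(self.lines):
            if t == tid and g == tag:
                line = self.lines.pop(i)
                self.lines.insert(max(0, i - 1), line)  # promote one step
                return True  # hit
        if len(self.lines) >= self.ways:
            self._evict(tid)
        self.lines.insert(min(insert_pos, len(self.lines)), (tid, tag))
        return False  # miss


# Hypothetical usage: a 4-way set shared by two threads with UCP targets 3/1.
s = CacheSet(ways=4, target_alloc={0: 3, 1: 1})
s.access(0, 'A', insert_pos=0)   # cache-friendly thread inserts at MRU
s.access(1, 'X', insert_pos=3)   # streaming thread inserts near LRU
```

Under this sketch, a cache-friendly thread would insert near the MRU end while a streaming thread inserts near the LRU end, so streaming lines are evicted quickly unless re-referenced; the eviction step enforces the UCP target allocation whenever the requesting thread is running below its share.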
Keywords
last-level cache, overall cache performance improvement, thread-aware insertion, memory access behavior, overall cache performance, utility-based cache partitioning, performance benefit, cache size, unified cache management policy, promotion policy, cache management, cache performance, partitioning-aware eviction, shared cache, capacity management, throughput, benchmark testing, promotion, instruction sets, measurement, insertion