Enterprise-Class Cache Compression Design

Alper Buyuktosunoglu, David Trilla, Bülent Abali, Deanna Postles Dunn Berger, Craig R. Walters, Jang-Soo Lee

International Symposium on High-Performance Computer Architecture (2024)

Abstract
Larger cache sizes closer to processor cores increase processing efficiency, but physical limitations restrict cache sizes at a given latency. Effective cache capacity can be expanded via the inline compression of data as it enters a lower level cache. Using the IBM Telum® processor cache hierarchy as a comparative baseline, this paper presents a custom compression scheme designed for small, line-sized data blocks, examines optimal compressor/decompressor placement and solutions to common compression drawbacks, and proposes a tiered design blueprint to facilitate product integration. The impact of compression and prediction-assisted adaptive compression on effective cache capacity, hit rate, and access latency across several typical industry workloads is explored.
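To make the idea of compressing small, line-sized data blocks concrete, the following is a minimal illustrative sketch, not the paper's actual scheme: it combines two patterns hinted at in the keywords (zero-line detection and narrow deltas relative to a base word) for a 128-byte cache line treated as 32 four-byte words. All function names and the delta width are hypothetical choices made for this example.

```python
def compress_line(words):
    """Try to compress one cache line given as a list of 32-bit words.

    Returns (tag, payload):
      - ("zero", None)                 whole line is zero; no data bytes needed
      - ("base_delta", (base, deltas)) every word within a signed 1-byte delta
                                        of the first word: 4 + 32 bytes vs 128
      - ("uncompressed", words)        line stays uncompressed
    """
    if all(w == 0 for w in words):
        return ("zero", None)
    base = words[0]
    deltas = [w - base for w in words]
    if all(-128 <= d <= 127 for d in deltas):
        return ("base_delta", (base, deltas))
    return ("uncompressed", words)


def decompress_line(tag, payload, n=32):
    """Reconstruct the original word list from a compressed representation."""
    if tag == "zero":
        return [0] * n
    if tag == "base_delta":
        base, deltas = payload
        return [base + d for d in deltas]
    return payload


# Nearby pointers/counters differ by small deltas, so this line compresses.
line = [0x1000 + i for i in range(32)]
tag, payload = compress_line(line)
assert tag == "base_delta"
assert decompress_line(tag, payload) == line
```

A real inline design adds a tag per line in the directory, handles partial-line writes, and falls back to the uncompressed path when the line does not fit, which is where the adaptive, prediction-assisted policy the abstract mentions comes in.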
Keywords
Source Code, Core Processes, Compression Scheme, Cache Size, Access Latency, Cache Hit, Performance Gain, Low Latency, Compressor, Design Points, Lineaments, Compression Ratio, Side-channel, Caching, Uncompressed, Most Significant Bit, Design Options, Transaction Processing, Chip Size, Baseline Design, L2 Cache, Replacement Policy, Line L2, Cupcake, Compression Quality, Storage Operations, Zero Line, Processing Architecture, Scrubber, Performance Trends