Victima: Drastically Increasing Address Translation Reach by Leveraging Underutilized Cache Resources

56th IEEE/ACM International Symposium on Microarchitecture (MICRO 2023)

Abstract
Address translation is a performance bottleneck in data-intensive workloads due to large datasets and irregular access patterns that lead to frequent high-latency page table walks (PTWs). PTWs can be reduced by using (i) large hardware TLBs or (ii) large software-managed TLBs. Unfortunately, both solutions have significant drawbacks: increased access latency, power, and area (for hardware TLBs); and costly memory accesses, the need for large contiguous memory blocks, and complex OS modifications (for software-managed TLBs). We present Victima, a new software-transparent mechanism that drastically increases the translation reach of the processor by leveraging the underutilized resources of the cache hierarchy. The key idea of Victima is to repurpose L2 cache blocks to store clusters of TLB entries, thereby providing an additional low-latency and high-capacity component that backs up the last-level TLB and thus reduces PTWs. Victima has two main components. First, a PTW cost predictor (PTW-CP) identifies costly-to-translate addresses based on the frequency and cost of the PTWs they lead to. Leveraging the PTW-CP, Victima uses the valuable cache space only for TLB entries that correspond to costly-to-translate pages, reducing the impact on cached application data. Second, a TLB-aware cache replacement policy prioritizes keeping TLB entries in the cache hierarchy by considering (i) the translation pressure (e.g., last-level TLB miss rate) and (ii) the reuse characteristics of the TLB entries. Our evaluation results show that in native (virtualized) execution environments Victima improves average end-to-end application performance by 7.4% (28.7%) over the baseline four-level radix-tree-based page table design and by 6.2% (20.1%) over a state-of-the-art software-managed TLB, across 11 diverse data-intensive workloads. Victima delivers performance similar to that of a system that employs an optimistic 128K-entry L2 TLB, while avoiding the associated area and power overheads. Victima (i) is effective in both native and virtualized environments, (ii) is completely transparent to application and system software, (iii) unlike large software-managed TLBs, does not require contiguous physical allocations, (iv) is compatible with modern large page mechanisms, and (v) incurs very small area and power overheads of 0.04% and 0.08%, respectively, on a modern high-end CPU. The source code of Victima is freely available at https://github.com/CMU-SAFARI/Victima.
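To make the two components described above concrete, the C++ sketch below shows one possible way a PTW cost predictor and a TLB-aware replacement policy could interact. It is an illustrative approximation based only on the abstract, not the paper's actual implementation: the thresholds (kFreqThreshold, kCostThreshold, the 10% miss-rate cutoff) and the structures (PtwStats, CacheBlock, the RRIP-style reuse counter) are assumptions introduced here for illustration.

    // Illustrative sketch only -- not Victima's actual implementation.
    // Thresholds and data structures below are assumed for demonstration.
    #include <cstdint>
    #include <iostream>
    #include <unordered_map>
    #include <vector>

    // Per-page statistics tracked by the hypothetical PTW cost predictor (PTW-CP).
    struct PtwStats {
        uint64_t walk_count = 0;     // how often this page triggered a PTW
        uint64_t total_latency = 0;  // accumulated PTW latency in cycles
    };

    class PtwCostPredictor {
    public:
        // Record a completed page table walk for a virtual page number (VPN).
        void RecordWalk(uint64_t vpn, uint64_t latency_cycles) {
            PtwStats& s = stats_[vpn];
            s.walk_count++;
            s.total_latency += latency_cycles;
        }

        // A page is "costly to translate" if its walks are both frequent and slow.
        // The thresholds are illustrative, not values from the paper.
        bool IsCostlyToTranslate(uint64_t vpn) const {
            auto it = stats_.find(vpn);
            if (it == stats_.end()) return false;
            const PtwStats& s = it->second;
            uint64_t avg_latency = s.total_latency / s.walk_count;
            return s.walk_count >= kFreqThreshold && avg_latency >= kCostThreshold;
        }

    private:
        static constexpr uint64_t kFreqThreshold = 4;    // assumed value
        static constexpr uint64_t kCostThreshold = 100;  // assumed value, in cycles
        std::unordered_map<uint64_t, PtwStats> stats_;
    };

    // Simplified view of an L2 cache block: it holds either application data or a
    // cluster of TLB entries, plus an RRIP-like counter approximating reuse.
    struct CacheBlock {
        bool holds_tlb_entries = false;
        uint8_t rrpv = 3;  // higher value => predicted to be re-referenced later
    };

    // TLB-aware victim selection: under high translation pressure (e.g., a high
    // last-level TLB miss rate), prefer evicting data blocks over TLB-entry blocks
    // with comparable predicted reuse.
    size_t SelectVictim(const std::vector<CacheBlock>& set, double l2_tlb_miss_rate) {
        const bool high_pressure = l2_tlb_miss_rate > 0.10;  // assumed threshold
        size_t victim = 0;
        int best_score = -1;
        for (size_t i = 0; i < set.size(); ++i) {
            int score = set[i].rrpv * 2;  // favor evicting blocks with low reuse
            if (high_pressure && set[i].holds_tlb_entries) score -= 1;  // protect TLB blocks
            if (score > best_score) { best_score = score; victim = i; }
        }
        return victim;
    }

    int main() {
        PtwCostPredictor cp;
        cp.RecordWalk(/*vpn=*/0x42, /*latency_cycles=*/180);
        cp.RecordWalk(0x42, 220);
        cp.RecordWalk(0x42, 200);
        cp.RecordWalk(0x42, 190);
        std::cout << "vpn 0x42 costly? " << cp.IsCostlyToTranslate(0x42) << "\n";

        std::vector<CacheBlock> set = {{false, 3}, {true, 3}, {false, 1}, {true, 2}};
        std::cout << "victim way: " << SelectVictim(set, /*l2_tlb_miss_rate=*/0.2) << "\n";
        return 0;
    }

In this sketch, only pages flagged by IsCostlyToTranslate would have their TLB-entry clusters cached in L2, and the replacement policy nudges the victim choice toward data blocks when translation pressure is high, mirroring the division of labor the abstract describes.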
Keywords
Virtual Memory, Cache, TLB, Virtualization, Microarchitecture, Address Translation, Memory Hierarchy, Memory Systems