CAWA: Coordinated Warp Scheduling and Cache Prioritization for Critical Warp Acceleration of GPGPU Workloads

2015 ACM/IEEE 42nd Annual International Symposium on Computer Architecture (ISCA), 2015

Abstract
The ubiquity of graphics processing unit (GPU) architectures has made them efficient alternatives to chip multiprocessors for parallel workloads. GPUs achieve superior performance by making use of massive multi-threading and fast context-switching to hide pipeline stalls and memory access latency. However, recent characterization results have shown that general purpose GPU (GPGPU) applications commonly encounter long stall latencies that cannot be easily hidden even with the large number of concurrent threads/warps. This results in execution time disparity between parallel warps, hurting the overall performance of GPUs - the warp criticality problem. To tackle the warp criticality problem, we propose a coordinated solution, criticality-aware warp acceleration (CAWA), that efficiently manages compute and memory resources to accelerate critical warp execution. Specifically, we design (1) an instruction-based and stall-based criticality predictor to identify the critical warp in a thread-block, (2) a criticality-aware warp scheduler that preferentially allocates more time resources to the critical warp, and (3) a criticality-aware cache reuse predictor that assists critical warp acceleration by retaining latency-critical and useful cache blocks in the L1 data cache. CAWA aims to remove this significant execution time disparity in order to improve resource utilization for GPGPU workloads. Our evaluation results show that, under the proposed coordinated scheduler and cache prioritization management scheme, the performance of GPGPU workloads can be improved by 23%, whereas the state-of-the-art GTO and 2-level schedulers improve performance by 16% and -2%, respectively.
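The abstract describes two compute-side mechanisms: a per-warp criticality estimate driven by instruction count and accumulated stall cycles, and a scheduler that gives issue priority to the most critical warp in a thread-block. The C++ sketch below illustrates that idea only in outline; the names (Warp, CriticalityScheduler, extra_insts, stall_cycles) and the weight w_inst are illustrative assumptions, not the paper's actual hardware design or parameters.

// Hypothetical sketch of criticality-aware warp issue in the spirit of CAWA.
// All identifiers and the weighting are assumptions for illustration only.
#include <cstdint>
#include <vector>

struct Warp {
    int      id;
    bool     ready;            // warp has an issuable instruction this cycle
    uint64_t extra_insts;      // extra dynamic instructions accrued (e.g. via branch divergence)
    uint64_t stall_cycles;     // cycles this warp has spent stalled (e.g. on memory)
};

class CriticalityScheduler {
public:
    // Combine the instruction-based and stall-based components into one score;
    // the relative weight w_inst is an assumed tuning parameter.
    static uint64_t criticality(const Warp& w, uint64_t w_inst = 4) {
        return w_inst * w.extra_insts + w.stall_cycles;
    }

    // Criticality-aware issue: among ready warps, pick the most critical one so
    // the lagging warp receives more issue slots. Returns -1 if no warp is ready.
    int pick(const std::vector<Warp>& warps) const {
        int best = -1;
        uint64_t best_score = 0;
        for (const Warp& w : warps) {
            if (!w.ready) continue;
            uint64_t score = criticality(w);
            if (best == -1 || score > best_score) {
                best = w.id;
                best_score = score;
            }
        }
        return best;
    }
};

A criticality-ordered pick like this contrasts with a greedy-then-oldest (GTO) policy, which favors the oldest ready warp regardless of how far it has fallen behind its thread-block peers.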
Keywords
warp scheduling,cache prioritization,criticality-aware warp acceleration,CAWA,GPGPU workload,general purpose graphics processing unit,chip-multiprocessor,CMP,multithreading,resource utilization