Benchmarking GPUs to tune dense linear algebra

Vasily Volkov, James W. Demmel

SC '08: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing

Abstract
We present performance results for dense linear algebra using recent NVIDIA GPUs. Our matrix-matrix multiply routine (GEMM) runs up to 60% faster than the vendor's implementation and approaches the peak of hardware capabilities. Our LU, QR and Cholesky factorizations achieve up to 80-90% of the peak GEMM rate. Our parallel LU running on two GPUs achieves up to ~540 Gflop/s. These results are accomplished by challenging the accepted view of the GPU architecture and programming guidelines. We argue that modern GPUs should be viewed as multithreaded multicore vector units. We exploit blocking similarly to vector computers and heterogeneity of the system by computing both on GPU and CPU. This study includes detailed benchmarking of the GPU memory system that reveals sizes and latencies of caches and TLB. We present a couple of algorithmic optimizations aimed at increasing parallelism and regularity in the problem that provide us with slightly higher performance.
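The "detailed benchmarking of the GPU memory system" the abstract refers to is typically done with a pointer-chasing microbenchmark: each load's address depends on the previous load's result, so the loads serialize and their average cost measures raw latency rather than throughput. Sweeping the array footprint and stride and plotting cycles per load then exposes cache and TLB levels as plateaus and jumps. Below is a minimal CUDA sketch of that idea, not the authors' actual harness; the stride and footprint values are illustrative assumptions.

#include <cstdio>
#include <cuda_runtime.h>

// Walks a circular chain through device memory with a fixed stride.
// A single thread is used so the measurement is pure latency, with
// no memory-level parallelism to hide it.
__global__ void pointer_chase(const unsigned int *a, int iters,
                              long long *cycles, unsigned int *sink)
{
    unsigned int j = 0;
    long long start = clock64();
    for (int i = 0; i < iters; ++i)
        j = a[j];                  // dependent, serialized loads
    long long end = clock64();
    *cycles = end - start;
    *sink = j;                     // keep the chain from being optimized away
}

int main()
{
    const int stride    = 128;      // elements between hops (assumed value)
    const int footprint = 1 << 22;  // array size in elements (assumed value)
    const int iters     = 1 << 20;

    unsigned int *h = (unsigned int *)malloc(footprint * sizeof(unsigned int));
    for (int i = 0; i < footprint; ++i)
        h[i] = (i + stride) % footprint;   // circular strided chain

    unsigned int *d, *sink;
    long long *cycles;
    cudaMalloc(&d, footprint * sizeof(unsigned int));
    cudaMalloc(&sink, sizeof(unsigned int));
    cudaMalloc(&cycles, sizeof(long long));
    cudaMemcpy(d, h, footprint * sizeof(unsigned int), cudaMemcpyHostToDevice);

    pointer_chase<<<1, 1>>>(d, iters, cycles, sink);
    cudaDeviceSynchronize();

    long long c;
    cudaMemcpy(&c, cycles, sizeof c, cudaMemcpyDeviceToHost);
    printf("stride %d, footprint %d: %.1f cycles/load\n",
           stride, footprint, (double)c / iters);

    cudaFree(d); cudaFree(sink); cudaFree(cycles); free(h);
    return 0;
}

Running this over a grid of (footprint, stride) pairs and looking for where cycles/load steps upward is one plausible way to recover the cache and TLB sizes and latencies the paper reports.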
Keywords
dense linear algebra, GPU benchmarking, GPU architecture, GPU memory system, NVIDIA GPUs, matrix-matrix multiply routine (GEMM), peak GEMM rate, parallel LU, Cholesky factorization, vector computers, multithreaded multicore vector units, multiprocessing systems, coprocessors, pipelines, registers, memory management, kernels, bandwidth, throughput, CPU, benchmark testing