A Mixed Precision, Multi-GPU Design for Large-scale Top-K Sparse Eigenproblems

ISCAS 2022

Abstract
Graph analytics techniques based on spectral methods process extremely large sparse matrices with millions or even billions of non-zero values. Behind these algorithms lies the Top-K sparse eigenproblem: the computation of the largest eigenvalues and their associated eigenvectors. In this work, we leverage GPUs to scale the Top-K sparse eigenproblem to larger matrices than previously achieved while also providing state-of-the-art execution times. We can transparently partition the computation across multiple GPUs, process out-of-core matrices, and trade off precision against execution time using mixed-precision floating-point arithmetic. Overall, we are 67 times faster than the highly optimized ARPACK library running on a 104-thread CPU and 1.9 times faster than a recent FPGA hardware design. We also show that mixed-precision floating-point arithmetic improves execution time by 50% over double precision while being 12 times more accurate than single-precision floating-point arithmetic.
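To make the problem statement concrete, below is a minimal CPU-side sketch of the Top-K sparse eigenproblem using SciPy's eigsh, which wraps the ARPACK library that the abstract uses as its CPU baseline. This is not the paper's GPU design; the matrix, its size, and the value of k are illustrative assumptions chosen for a quick local run.

```python
# Illustrative sketch of the Top-K sparse eigenproblem via SciPy/ARPACK.
# The matrix below is a random stand-in for a graph matrix; real spectral
# workloads operate on matrices with millions or billions of non-zeros.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 10_000  # assumed matrix size, far smaller than the paper's benchmarks
rng = np.random.default_rng(0)
a = sp.random(n, n, density=1e-3, random_state=rng, format="csr")
a = (a + a.T) * 0.5  # symmetrize so the eigenvalues are real

# Top-K eigenproblem: the k largest eigenvalues and associated eigenvectors.
k = 8
eigenvalues, eigenvectors = eigsh(a, k=k, which="LA")  # "LA": largest algebraic

print(eigenvalues)         # shape (k,)
print(eigenvectors.shape)  # (n, k)
```

ARPACK runs this entirely in double precision on the CPU; the paper's contribution is performing the same computation on one or more GPUs, on out-of-core matrices, with a tunable mix of single and double precision.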
Keywords
execution time, double-precision, single-precision floating-point arithmetic, mixed precision, multi-GPU design, Top-K sparse eigenproblems, graph analytics techniques, spectral methods, sparse matrices, non-zero values, largest eigenvalues, associated eigenvectors, GPUs, larger matrices, state-of-the-art execution times, multiple GPUs, out-of-core matrices, mixed-precision floating-point arithmetic, ARPACK library, FPGA hardware design