
b8c: SpMV accelerator implementation leveraging high memory bandwidth

2023 IEEE 31st Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)

Abstract
Sparse Matrix-Vector multiplication (SpMV), computing $y=A\times x$ where $y, x$ are dense vectors and $A$ is a sparse matrix, is a key kernel in many HPC applications. Vitis Sparse Library's double precision SpMV (VSpMV) [1] is, to the best of our knowledge, the only performance-oriented, double-precision (64-bit) floating point implementation of SpMV on FPGAs equipped with High Bandwidth Memory (HBM).
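To make the kernel concrete: the sketch below shows SpMV ($y = A \times x$) over a matrix stored in the common CSR (Compressed Sparse Row) format. This is a generic illustration of the operation the abstract defines, not the paper's accelerator design or VSpMV's implementation; the array names (`values`, `col_idx`, `row_ptr`) are illustrative.

```python
# Minimal SpMV sketch: y = A @ x with A in CSR format.
# CSR stores only the nonzeros: `values` holds them row by row,
# `col_idx` gives each nonzero's column, and `row_ptr[i]:row_ptr[i+1]`
# bounds row i's slice of the two arrays.

def spmv_csr(values, col_idx, row_ptr, x):
    """Multiply a CSR sparse matrix by a dense vector x."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Example matrix A = [[2, 0, 1],
#                     [0, 3, 0]]
values = [2.0, 1.0, 3.0]
col_idx = [0, 2, 1]
row_ptr = [0, 2, 3]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```

The irregular, `col_idx`-driven accesses into `x` are what make SpMV memory-bound in practice, which is why HBM-equipped FPGAs are an attractive target for it.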